All about POOLS | Proxmox + Ceph Hyperconverged Cluster fäncy Configurations for RBD

In this video, I build on the last video in my hyper-converged Proxmox + Ceph cluster series to create more custom pool layouts than Proxmox's GUI allows. This includes setting the device class (HDD / SSD / NVMe), the failure domain, and even erasure coding of pools. All of this is then set up as a storage location in Proxmox for RBD (RADOS Block Device), so we can store VM disks on it.
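
For a rough sketch of what the device-class part looks like on the Ceph CLI (the rule and pool names here are my own examples, not necessarily the ones used in the video):

  # Replicated CRUSH rules pinned to a device class
  # (args: rule name, CRUSH root, failure domain, device class)
  ceph osd crush rule create-replicated replicated-hdd default host hdd
  ceph osd crush rule create-replicated replicated-ssd default host ssd

  # Create a replicated pool that lands on HDDs only, and enable it for RBD
  ceph osd pool create vm-hdd 32 32 replicated replicated-hdd
  ceph osd pool application enable vm-hdd rbd

  # Register the pool as a Proxmox storage for VM disks
  pvesm add rbd vm-hdd --pool vm-hdd --content images,rootdir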

After all of this, I now have the flexibility to assign VM disks to HDDs or SSDs, and to use erasure coding for 66% storage efficiency instead of the 33% of 3-way replication (doubling my usable capacity on the same disks!). With more nodes and disks, I could improve both the storage efficiency and the failure resilience of my cluster, but with only the small number of disks I have, I opted for a basic 2+1 erasure code.
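
The efficiency math behind that: an erasure code with k data chunks and m parity chunks keeps k/(k+m) of raw capacity usable, so 2+1 gives 2/3 ≈ 66%, versus 1/3 ≈ 33% for 3-way replication. As a sketch of the commands involved (again, the names are my own examples; the --data-pool option for RBD requires a reasonably recent Proxmox VE release):

  # 2+1 erasure code profile (k=2 data + m=1 parity chunks per object)
  ceph osd erasure-code-profile set ec-2-1 k=2 m=1 \
      crush-failure-domain=host crush-device-class=hdd

  # EC data pool; RBD needs partial overwrites enabled on EC pools
  ceph osd pool create ec-data 32 32 erasure ec-2-1
  ceph osd pool set ec-data allow_ec_overwrites true
  ceph osd pool application enable ec-data rbd

  # RBD metadata still lives in a small replicated pool
  ceph osd pool create ec-meta 32 32 replicated
  ceph osd pool application enable ec-meta rbd

  # Point Proxmox at the metadata pool, with the disk data on the EC pool
  pvesm add rbd vm-ec --pool ec-meta --data-pool ec-data --content images,rootdir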

Blog post for this video (tbh not a whole lot there):
https://www.apalrd.net/posts/2022/clu...

My Discord Server, where you can chat about any of this:
  / discord  

If you find my content useful and would like to support me, feel free to do so here: https://ko-fi.com/apalrd

This video is part of my Hyperconverged Cluster Megaproject:
   • Hyper-Converged Cluster Megaproject  

Find an HP Microserver Gen8 like mine on eBay (maybe you'll get lucky idk):
https://ebay.us/1l1HI1

Timestamps:
00:00 - Introduction
01:11 - Cluster Overview
02:16 - Ceph Web Manager
03:10 - Failure Domains
04:24 - Custom CRUSH Ruleset
06:18 - Storage Efficiency & Erasure Codes
08:00 - Creating Erasure Coded Pools
12:35 - Results

Some links to products may be affiliate links, which may earn a commission for me.
