$250 Proxmox Cluster gets HYPER-CONVERGED with Ceph! Basic Ceph, RADOS, and RBD for Proxmox VMs

I previously set up a Proxmox high availability cluster on my $35 Dell Wyse 5060 thin clients. Now I'm improving this cluster to make it *hyperconverged*. It's a huge buzzword in the industry right now, and it basically means combining storage and compute in the same nodes: each node provides some compute and some storage, and both the storage and the compute are clustered. In traditional clustering you have a separate storage system (SAN) and compute system (virtualization cluster / Kubernetes / ...), so merging the SAN into the compute nodes means all of the nodes are identical and network traffic, in aggregate, flows from all nodes to all nodes without a bottleneck between the compute half and the SAN half.

Today I am limiting this tutorial to the features provided through the Proxmox GUI for Ceph, and only to RBD (RADOS Block Device) storage, not CephFS. Ceph is a BIG topic for BIG data, but I'm planning to cover erasure coded RBD pools followed by CephFS in the future. Be sure to let me know if there's anything specific you'd like to see.
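
For the curious, here is roughly what is happening under the hood when a VM disk lives on RBD. This is a minimal sketch using the official rados/rbd Python bindings (python3-rados / python3-rbd); the pool name 'vm-pool' and image name 'vm-100-disk-0' are placeholders for illustration, not what Proxmox itself runs.

import rados
import rbd

# Connect to the cluster using the node's ceph.conf (path assumed)
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Open an I/O context on the RBD pool (pool name is a placeholder)
ioctx = cluster.open_ioctx('vm-pool')

# Create a 4 GiB block device image - conceptually, a VM disk
rbd.RBD().create(ioctx, 'vm-100-disk-0', 4 * 1024**3)

# List images in the pool to confirm it exists
print(rbd.RBD().list(ioctx))

ioctx.close()
cluster.shutdown()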

Merging your storage and compute can make sense, even in the homelab, if you are concerned with single points of failure. I currently rely on TrueNAS for my storage needs, but any maintenance on the TrueNAS server kicks the Proxmox cluster offline. The Proxmox cluster can handle a failed node (or a node down for maintenance), but with shared storage on TrueNAS we don't get that same level of failure tolerance on the storage side, so we are still a single point of failure away from losing functionality. I could add storage to every Proxmox node and use ZFS replication to keep the VM disks in sync, but then I either have to accept keeping a copy of every VM on every node, or individually pick two nodes for each VM and replicate its disk to those two (and create all of the corresponding HA groups so it doesn't get migrated somewhere else).

With Ceph, I can let Ceph deal with storage balancing on the back end and know that VM disks are truly stored without a single point of failure. Any node in the cluster can access any VM disk, and as the cluster expands beyond 3 nodes I am still only storing each VM disk 3 times. With erasure coding, I can get this down to 1.5 times or less, but that's a topic for a future video.
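
As a rough back-of-the-envelope comparison of that overhead (the cluster size here is made up for illustration, and real clusters also reserve headroom for rebalancing and nearfull limits):

# Hypothetical cluster: 6 nodes with one 1 TB OSD each = 6 TB raw
raw_tb = 6 * 1.0

# Replicated pool with size=3: every object is stored 3 times
replicated_usable = raw_tb / 3          # -> 2.0 TB usable

# Erasure coded pool, e.g. k=4 data + m=2 coding chunks: 1.5x overhead
ec_usable = raw_tb / ((4 + 2) / 4)      # -> 4.0 TB usable

print(f"size=3 replication: {replicated_usable:.1f} TB usable")
print(f"4+2 erasure coding: {ec_usable:.1f} TB usable")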

As a bonus, I can use CephFS to store files used by the VMs, and the VMs can mount the cluster filesystem themselves if they need to, getting the same level of redundancy while sharing the data across multiple servers or exposing it through NFS/SMB gateways. Of course, that's also a topic for a future video.

Link to the blog post:
https://www.apalrd.net/posts/2022/clu...

Cost accounting:
I actually spent $99 on the three thin clients (as shown in a previous video), another $25 each for 8 GB DDR3L SODIMMs to bring them to 12 GB each (one 8 GB stick plus the 4 GB stick they came with), and $16 each on the flash drives. That totals $222, so round up to $250 to cover shipping and taxes.

My Discord server:
  / discord  

If you find my content useful and would like to support me, feel free to here: https://ko-fi.com/apalrd

Timestamps:
00:00 - Introduction
00:35 - Hardware
02:13 - Ceph Installation
06:15 - Ceph Dashboard
08:06 - Object Storage Daemons
16:02 - Basic Pool Creation
18:04 - Basic VM Storage
19:34 - Degraded Operation
21:50 - Conclusions

#Ceph
#Proxmox
#BigData
#Homelab
#Linux
#Virtualization
#Cluster


Proxmox is a trademark of Proxmox Server Solutions GmbH
Ceph is a trademark of Red Hat Inc
