3 Node Hyperconverged Proxmox cluster: Failure testing, Ceph performance, 10Gb mesh network

Why stop at 1 server? This video goes over Proxmox clusters, what they can do, and how failure is handled.

Thanks to QSFPTEK for providing the network cables and transceivers used in this video. These products are available at the links below:
10G SFP+ DAC: https://www.qsfptek.com/product/34865...
SFP-10G-T: https://www.qsfptek.com/product/10009...
SFP+-10G-SR: https://www.qsfptek.com/product/30953...
OM3: https://www.qsfptek.com/product/99661...

Let me know if you have any ideas for what I can do with this cluster. I'd love to try more software and hardware configurations. This video also skips many of the details of setting up this cluster. Let me know if you want me to go into more detail on any part of this video.
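
If you're following along with your own cluster, a quick way to sanity-check things after setup is to query cluster quorum and Ceph health. Here's a minimal sketch (not shown in the video), assuming it runs on one of the Proxmox nodes with the pvecm and ceph CLIs on the PATH:

# Minimal sketch: check Proxmox cluster quorum and Ceph health.
# Assumes this runs on a Proxmox node where pvecm and ceph are available.
import json
import subprocess

def cluster_quorate() -> bool:
    # "pvecm status" prints corosync quorum info; the "Quorate" line reads
    # "Yes" when enough nodes are online for the cluster to accept changes.
    out = subprocess.run(["pvecm", "status"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if line.strip().startswith("Quorate"):
            return "Yes" in line
    return False

def ceph_health() -> str:
    # "ceph status --format json" returns JSON with a top-level "health"
    # section: HEALTH_OK, HEALTH_WARN or HEALTH_ERR.
    out = subprocess.run(["ceph", "status", "--format", "json"],
                         capture_output=True, text=True).stdout
    return json.loads(out)["health"]["status"]

if __name__ == "__main__":
    print("Cluster quorate:", cluster_quorate())
    print("Ceph health:", ceph_health())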

00:00 Intro
00:32 Hardware overview
00:57 Networking hardware setup
03:06 Software overview
03:30 Ceph overview
05:06 Network configuration
06:30 Advantages of a cluster
07:10 Ceph performance
08:45 Failure testing
09:00 Ceph drive failure
09:28 Network link failure
10:00 Node failure
11:06 Conclusion

Full Node Specs:
Node0:
EMC Isilon x200
2x L5630
24GB DDR3
4x 500GB SSDs
Node1:
DIY server with Intel S2600CP motherboard
2x E5 2680 v2
64GB
5x Sun F40 SSDs (20x 100GB SSDs presented to the OS)
Node2:
Asus LGA 2011-v3 1U server
1x E5 2643 v4
128GB DDR4
4x 500GB SSDs
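
For a rough idea of how much usable space these drives give the cluster, here's a back-of-the-envelope calculation. It assumes a replicated Ceph pool with size 3 (the Proxmox default), which isn't stated above, so treat it as a sketch rather than the exact numbers from the video:

# Rough usable-capacity estimate for the OSDs listed above, assuming a
# replicated Ceph pool with size=3 (assumed Proxmox default). Real usable
# space will be lower once you leave headroom for rebalancing and the
# near-full ratios.
node_raw_gb = {
    "node0": 4 * 500,   # 4x 500GB SSDs
    "node1": 20 * 100,  # 5x Sun F40 (20x 100GB SSDs presented to the OS)
    "node2": 4 * 500,   # 4x 500GB SSDs
}

replica_count = 3  # assumption: default 3/2 replicated pool
raw_total_gb = sum(node_raw_gb.values())
usable_gb = raw_total_gb / replica_count

print(f"Raw capacity:    {raw_total_gb} GB")   # 6000 GB
print(f"Usable (size=3): {usable_gb:.0f} GB")  # ~2000 GB before overhead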
