
HPC configuration with HP BladeSystem solutions
Figure 7 shows a full-bandwidth, fat-tree configuration of HP BladeSystem c-Class components
providing 576 nodes in a cluster. Each c7000 enclosure includes an HP 4x QDR InfiniBand Switch
Blade, with 16 downlinks for server blade connection and 16 QSFP uplinks for fabric connectivity.
Sixteen 36-port QDR InfiniBand switches provide spine-level fabric connectivity.
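The sizing arithmetic behind the full-bandwidth claim can be checked with a short script (a sketch; the constant names are ours, taken from the figures above, not from HP documentation):

```python
# Fat-tree sizing check for the 576-node configuration described above.
ENCLOSURES = 36            # c7000 enclosures in the cluster
BLADES_PER_ENCLOSURE = 16  # BL280c G6 server blades per enclosure
DOWNLINKS = 16             # switch-blade ports facing server blades
UPLINKS = 16               # QSFP ports facing the spine
SPINE_SWITCHES = 16        # 36-port QDR InfiniBand spine switches
SPINE_PORTS = 36           # ports per spine switch

nodes = ENCLOSURES * BLADES_PER_ENCLOSURE          # total server blades
uplinks_total = ENCLOSURES * UPLINKS               # leaf-to-spine links
spine_ports_total = SPINE_SWITCHES * SPINE_PORTS   # spine port capacity

# Non-blocking (1:1) condition: each leaf switch has as many uplinks as
# downlinks, and the spine has a port for every uplink in the fabric.
assert DOWNLINKS == UPLINKS
assert uplinks_total == spine_ports_total
print(nodes, uplinks_total, spine_ports_total)
```

Since the 576 leaf uplinks exactly consume the 576 spine ports, the fabric is 1:1 non-blocking, matching the interconnect specification in Figure 7.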
Figure 7. HP BladeSystem c-Class 576-node cluster configuration using BL280c blades
[Figure 7 diagram: 36 HP c7000 enclosures (#1 through #36), each holding 16 HP BL280c G6 server blades and an HP QDR IB Switch Blade, connected to sixteen 36-port QDR IB spine switches]
Total nodes: 576 (1 per blade)
Racks required for servers: Nine 42U racks (assumes four c7000 enclosures per rack)
Interconnect: 1:1 full bandwidth (non-blocking), 3 switch hops maximum, fabric redundancy