HP Cluster Platform Cabling Tables White Paper

Conclusion
You should base your decision to use Ethernet or InfiniBand on performance and cost requirements.
We are committed to supporting both InfiniBand and Ethernet infrastructures. We want to help you
choose the most cost-effective fabric solution for your environment.
InfiniBand is the best choice for HPC clusters requiring scalability from hundreds to thousands of
nodes. While you can apply zero-copy (RDMA) protocols to TCP/IP networks such as Ethernet, RDMA
is a core capability of the InfiniBand architecture. Flow control and congestion avoidance are native to
InfiniBand.
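To make the zero-copy point concrete, the following minimal sketch, included here purely for illustration and not taken from this white paper, registers an application buffer with the open-source libibverbs API. The returned memory region carries the local and remote keys an HCA uses to move data directly between application buffers without intermediate copies; the buffer size and access flags are arbitrary choices, and device selection, queue-pair setup, connection exchange, and most error handling are omitted for brevity.

#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);  /* first HCA */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);               /* protection domain */

    /* Register a 1 MiB application buffer; the returned memory region (MR)
     * carries the lkey/rkey the HCA uses for zero-copy transfers. */
    size_t len = 1 << 20;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        fprintf(stderr, "memory registration failed\n");
        return 1;
    }

    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    /* ... create queue pairs, exchange rkeys, post RDMA work requests ... */

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}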
InfiniBand also includes support for fat-tree and other mesh topologies that allow simultaneous
connections across multiple links. This lets the InfiniBand fabric scale as you connect more nodes and
links.
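As a rough illustration of that scaling, the short program below is a back-of-the-envelope sketch that assumes generic k-port switches rather than any specific HP configuration. It computes how many edge and spine switches a two-level, non-blocking fat tree requires and how many nodes it can attach at full bisection bandwidth.

#include <stdio.h>

int main(void)
{
    /* k = switch radix; the values swept here are illustrative only */
    for (int k = 24; k <= 48; k += 12) {
        int edge_switches  = k;          /* each: k/2 node ports, k/2 uplinks */
        int spine_switches = k / 2;      /* one link to every edge switch     */
        int max_nodes      = k * k / 2;  /* non-blocking node count           */
        int uplink_cables  = k * k / 2;  /* edge-to-spine links               */
        printf("radix %2d: %3d edge + %2d spine switches, "
               "%4d nodes, %4d inter-switch links\n",
               k, edge_switches, spine_switches, max_nodes, uplink_cables);
    }
    return 0;
}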
Parallel computing applications that involve a high degree of message passing between nodes benefit
significantly from InfiniBand. Data centers worldwide have deployed InfiniBand DDR for years and are quickly
adopting QDR. HP BladeSystem c-Class clusters and similar rack-mounted clusters support IB DDR and
QDR HCAs and switches.
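The kind of tightly coupled, latency-bound message passing that benefits most from InfiniBand can be sketched with a minimal MPI ping-pong, shown below for illustration only. The hypothetical program measures average round-trip time for small messages between two ranks and runs over whichever fabric the MPI library is configured to use.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 1000;
    char buf[8] = {0};                 /* small message: latency-bound */
    double t0 = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {               /* rank 0 sends, then waits for echo */
            MPI_Send(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {        /* rank 1 echoes every message back */
            MPI_Recv(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0)
        printf("average round-trip latency: %.2f us\n",
               (MPI_Wtime() - t0) / iters * 1e6);

    MPI_Finalize();
    return 0;
}

Launched with two ranks placed on different nodes (for example, mpirun -np 2 ./pingpong), the loop exercises the interconnect directly, which is where the latency difference between fabrics shows up.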