HP StorageWorks Scalable File Share User Guide, Page 51
Figure A-6 Multi-Client Throughput Scaling
In general, Lustre scales quite well with additional OSS servers if the workload is evenly
distributed over the OSTs and the load on the metadata server remains reasonable.
Neither the stripe size nor the I/O size had much effect on throughput when each client wrote
to or read from its own OST. Changing the stripe count for each file did have an effect as shown
in Figure A-7.
Figure A-7 Multi-Client Throughput and File Stripe Count
Here, 16 clients wrote or read 16 files of 16 GB each. The first bars on this chart represent the same data as the points on the right side of the previous graph. Across the five cases, the stripe count of each file ranged from 1 to 16. Because the number of clients equaled the number of OSTs, this count was also the number of clients sharing each OST.
Figure A-7 shows that write throughput can improve slightly with increased stripe count, up to
a point. However, read throughput is clearly best when each stream has its own OST.
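The stripe count governs how many OSTs a single file spans: Lustre writes the file in fixed-size stripes distributed round-robin over that many stripe objects. A minimal sketch of this mapping, with an illustrative helper name and stripe size (not part of the manual):

```python
def ost_for_offset(offset, stripe_size, stripe_count):
    """Return the 0-based index of the stripe object holding the byte at
    `offset`, under Lustre-style round-robin (RAID-0-like) striping."""
    return (offset // stripe_size) % stripe_count

# With a 1 MiB stripe size and a stripe count of 4, consecutive 1 MiB
# chunks rotate over four stripe objects before wrapping around.
MiB = 1 << 20
print([ost_for_offset(i * MiB, MiB, 4) for i in range(6)])  # → [0, 1, 2, 3, 0, 1]
```

On a live Lustre file system, the stripe count for a new file or directory can be set with `lfs setstripe -c <count> <path>`; a count of 1, as the read results above suggest, keeps each stream on its own OST.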
A.3 Throughput Scaling 51