• The following command runs a.out with four ranks, two ranks per node, ranks are block
allocated, and two nodes are used:
$ mpirun -srun -n4 ./a.out
host1 rank1
host1 rank2
host2 rank3
host2 rank4
• The following command runs a.out with six ranks (oversubscribed), three ranks per node,
ranks are block allocated, and two nodes are used:
$ mpirun -srun -n6 -O -N2 -m block ./a.out
host1 rank1
host1 rank2
host1 rank3
host2 rank4
host2 rank5
host2 rank6
• The following example runs a.out with six ranks (oversubscribed), three ranks per node,
ranks are cyclically allocated, and two nodes are used:
$ mpirun -srun -n6 -O -N2 -m cyclic ./a.out
host1 rank1
host2 rank2
host1 rank3
host2 rank4
host1 rank5
host2 rank6
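The block and cyclic layouts above follow directly from two simple mapping rules. The sketch below is illustrative only (it is not HP-MPI or SLURM source code); it reproduces the rank-to-host layouts shown in the transcripts for n ranks over a list of hosts:

```shell
#!/usr/bin/env bash
# Sketch only (not HP-MPI internals): compute the host each rank lands on
# for block vs. cyclic placement of n ranks over the listed hosts.
map_ranks() {
  local policy=$1 n=$2
  shift 2
  local hosts=("$@") nodes=$#
  local per_node=$(( (n + nodes - 1) / nodes ))   # ranks per node, rounded up
  local rank node
  for (( rank = 0; rank < n; rank++ )); do
    if [[ $policy == block ]]; then
      node=$(( rank / per_node ))    # block: fill each node before the next
    else
      node=$(( rank % nodes ))       # cyclic: round-robin over the nodes
    fi
    echo "${hosts[node]} rank$(( rank + 1 ))"
  done
}

map_ranks block  6 host1 host2   # same layout as the "-m block" transcript
map_ranks cyclic 6 host1 host2   # same layout as the "-m cyclic" transcript
```

With block placement, consecutive ranks share a node; with cyclic placement, consecutive ranks alternate across nodes, which matters when neighboring ranks communicate heavily.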
8.3.3.2 Creating Subshells and Launching Jobsteps
Another usage pattern is to allocate the nodes you wish to use, which creates a subshell;
jobsteps can then be launched within that subshell until the subshell is exited.
The following commands demonstrate how to create a subshell and launch jobsteps.
• This command allocates six nodes and creates a subshell:
$ mpirun -srun -A -N6
• This command allocates four ranks on four nodes, one rank per node. Block allocation
was requested, but with one rank per node the resulting layout is the same as cyclic:
$ mpirun -srun -n4 -m block ./a.out
host1 rank1
host2 rank2
host3 rank3
host4 rank4
• This command allocates four ranks on two nodes, blocked. Note that this was forced to
happen within the allocation by using oversubscription:
$ mpirun -srun -n4 -N2 -O -m cyclic ./a.out
host1 rank1
host1 rank2
host2 rank3
host2 rank4
: MPI_Init: cyclic node allocation not supported for ranks>#ofnodes
: MPI_Init: Cannot set srun startup protocol
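The warnings in the transcript above reflect the constraint that cyclic node allocation is not supported when the number of ranks exceeds the number of nodes; as the output shows, the ranks run with a blocked layout instead. A minimal sketch of that decision (the function name and fallback logic are illustrative, not HP-MPI internals):

```shell
# Illustrative only (not HP-MPI internals): cyclic placement is rejected
# when ranks exceed nodes, and the layout degrades to block with a warning.
choose_placement() {
  requested=$1 ranks=$2 nodes=$3
  if [ "$requested" = cyclic ] && [ "$ranks" -gt "$nodes" ]; then
    echo "MPI_Init: cyclic node allocation not supported for ranks > nodes" >&2
    echo block
  else
    echo "$requested"
  fi
}

choose_placement cyclic 4 2   # stdout: block (warning goes to stderr)
```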
8.3.3.3 System Interconnect Selection
This section provides examples of how to perform system interconnect selection.
Using HP-MPI 8-5