
Example 8-1 shows how to perform system interconnect selection.
Example 8-1: Performing System Interconnect Selection
% export MPI_IC_ORDER="elan:TCP:gm:itapi"
% export MPIRUN_SYSTEM_OPTIONS="-subnet 192.168.1.1"
% export MPIRUN_OPTIONS="-prot"
% mpirun -srun -n4 ./a.out
The command line for the above will appear to mpirun as:
$ mpirun -subnet 192.168.1.1 -prot -srun -n4 ./a.out
The system interconnect decision will look for the presence of Elan and use it if found.
Otherwise, TCP/IP will be used and the communication path will be on the subnet
192.168.1.*.
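As a variation, the search order itself can drive the selection. The following is a minimal sketch, assuming HP-MPI's convention that lowercase entries in MPI_IC_ORDER are used only if detected while uppercase entries are mandatory; listing TCP alone bypasses the Elan search entirely:
% export MPI_IC_ORDER="TCP"
% mpirun -subnet 192.168.1.1 -prot -srun -n4 ./a.out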
Example 8-2 illustrates using TCP/IP over Gigabit Ethernet, assuming Gigabit Ethernet is
installed and 192.168.1.1 corresponds to the Gigabit Ethernet interface. Note that the
implicit use of -subnet 192.168.1.1 is required to get TCP/IP over the proper subnet.
Example 8-2: Using TCP/IP over Gigabit Ethernet
% export MPI_IC_ORDER="elan:TCP:gm:itapi"
% export MPIRUN_SYSTEM_OPTIONS="-subnet 192.168.1.1"
% mpirun -prot -TCP -srun -n4 ./a.out
Example 8-3 illustrates using TCP/IP over Elan4, assuming Elan4 is installed and configured.
The subnet information is omitted, and TCP/IP is explicitly requested by means of -TCP.
Example 8-3: Using TCP/IP over Elan4
% export MPI_IC_ORDER="elan:TCP:gm:itapi"
% export MPIRUN_SYSTEM_OPTIONS=" "
% $MPI_ROOT/bin/mpirun -prot -TCP -srun -n4 ./a.out
The resulting "protocol map" shows that TCP is being used, but it is TCP over Elan4.
8.3.4 Using LSF and HP-MPI
HP-MPI jobs can be submitted using LSF. LSF uses the SLURM srun launching mechanism;
because of this, HP-MPI jobs must specify the -srun option when LSF is used. This
section provides a brief overview of using LSF with HP-MPI in the HP XC environment.
A full description of using LSF with HP XC is provided in Chapter 7. In addition, for your
convenience, the HP XC documentation CD contains HP XC LSF manuals from Platform
Computing.
In Example 8-4, LSF is used to create an allocation of two processors, and -srun is used
to attach to it.
Example 8-4: Allocating and Attaching Processors
$ bsub -I -n2 $MPI_ROOT/bin/mpirun -srun ./a.out
In Example 8-5, LSF creates an allocation of twelve processors, and -srun uses one CPU per
node (six nodes). The example assumes two CPUs per node.
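The command line is a minimal sketch, assuming the srun options -n6 -N6 are used to run six tasks spread across six nodes within the twelve-processor allocation:
Example 8-5: Allocating 12 Processors Across Six Nodes
$ bsub -I -n12 $MPI_ROOT/bin/mpirun -srun -n6 -N6 ./a.out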