for the purpose of determining how much memory to pin for RDMA message transfers on
InfiniBand and Myrinet GM. The value determined by HP-MPI can be displayed using the -dd
option. If HP-MPI specifies an incorrect value for physical memory, this environment variable
can be used to specify the value explicitly:
% export MPI_PHYSICAL_MEMORY=1048576
The above example specifies that the system has 1GB of physical memory (the value is given in KB).
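The value HP-MPI determined can then be verified with the -dd option; for example (a sketch assuming -dd is passed on the mpirun command line, with a.out and the rank count as placeholders):
% mpirun -dd -np 2 ./a.out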
8.9.5 MPI_PIN_PERCENTAGE
MPI_PIN_PERCENTAGE communicates the maximum percentage of physical memory (see
MPI_PHYSICAL_MEMORY above) that can be pinned at any time. The default is 20%.
% export MPI_PIN_PERCENTAGE=30
The above example permits the HP-MPI library to pin (lock in memory) up to 30% of
physical memory. The pinned memory is shared between ranks of the host that were started
as part of the same mpirun invocation. Running multiple MPI applications on the same
host can cumulatively cause more than one application’s MPI_PIN_PERCENTAGE to be
pinned. Increasing MPI_PIN_PERCENTAGE can improve communication performance for
communication-intensive applications in which nodes send and receive multiple large messages
at a time, as is common with collective operations. Increasing MPI_PIN_PERCENTAGE
allows more large messages to be progressed in parallel using RDMA transfers; however,
pinning too much physical memory may negatively impact computation performance.
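As a worked example combining the two variables (values chosen for illustration only): with
1GB of physical memory and a 30% pin limit, HP-MPI can pin up to roughly
0.30 × 1024MB ≈ 307MB, shared across all ranks started by the same mpirun invocation:
% export MPI_PHYSICAL_MEMORY=1048576
% export MPI_PIN_PERCENTAGE=30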
MPI_PIN_PERCENTAGE and MPI_PHYSICAL_MEMORY are ignored unless InfiniBand
or Myrinet GM is in use.
8.9.6 MPI_PAGE_ALIGN_MEM
MPI_PAGE_ALIGN_MEM causes the HP-MPI library to page align and page pad memory.
% export MPI_PAGE_ALIGN_MEM=1
For more information on when this setting should be used, refer to the “Work-arounds” section
of the HP-MPI V2.1 for XC4000 and XC6000 Clusters Release Notes.
8.9.7 MPI_MAX_WINDOW
MPI_MAX_WINDOW is used for one-sided applications. It specifies the maximum number of
windows a rank can have at the same time. It tells HP-MPI to allocate enough table entries.
The default is 5.
% export MPI_MAX_WINDOW=10
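To illustrate, the following is a minimal sketch using standard MPI one-sided calls (the
window count and buffer sizes are arbitrary). A rank that keeps eight windows open
concurrently exceeds the default table size of 5, so it needs the larger setting above:

    #include <mpi.h>
    #include <stdlib.h>

    #define NWIN 8  /* more than the default table size of 5 */

    int main(int argc, char **argv)
    {
        MPI_Win win[NWIN];
        double *buf[NWIN];
        int i;

        MPI_Init(&argc, &argv);

        /* Each MPI_Win_create consumes one of the rank's window table
           entries, so NWIN concurrent windows require MPI_MAX_WINDOW >= 8. */
        for (i = 0; i < NWIN; i++) {
            buf[i] = malloc(1024 * sizeof(double));
            MPI_Win_create(buf[i], 1024 * sizeof(double), sizeof(double),
                           MPI_INFO_NULL, MPI_COMM_WORLD, &win[i]);
        }

        for (i = 0; i < NWIN; i++) {
            MPI_Win_free(&win[i]);
            free(buf[i]);
        }

        MPI_Finalize();
        return 0;
    }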
8.9.8 MPI_ELANLOCK
By default, HP-MPI only provides exclusive window locks via Elan lock when using the Elan
system interconnect. In order to use HP-MPI shared window locks, you must turn off Elan lock
and use window locks via shared memory. In this way, both exclusive and shared locks are
from shared memory. To turn off Elan locks:
% export MPI_ELANLOCK=0
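For example, here is a minimal sketch using standard MPI one-sided calls (the
double-precision window and target rank 0 are arbitrary choices). The passive-target epoch
takes a shared lock, which on the Elan interconnect requires MPI_ELANLOCK=0:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        double local = 0.0, remote = 42.0;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Win_create(&remote, sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        /* A shared lock lets several origin ranks access rank 0's window
           concurrently; with MPI_ELANLOCK=0, HP-MPI serves both shared and
           exclusive locks from shared memory rather than via Elan lock. */
        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
        MPI_Get(&local, 1, MPI_DOUBLE, 0, 0, 1, MPI_DOUBLE, win);
        MPI_Win_unlock(0, win);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }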
8.9.9 MPI_USE_LIBELAN
By default when Elan is in use, the HP-MPI library uses Elan’s native collective operations
for performing MPI_Bcast and MPI_Barrier operations on MPI_COMM_WORLD sized
communicators. This behavior can be changed by setting MPI_USE_LIBELAN to “false” or
“0”, in which case these operations will be implemented using point-to-point Elan messages.
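For example, to disable the native collective operations:
% export MPI_USE_LIBELAN=0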