You can simplify this by first setting the SLURM_JOBID environment variable to the SLURM job ID of the
allocation, as follows:
$ export SLURM_JOBID=150
$ srun hostname
n1
n2
n3
n4
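For comparison, a minimal sketch of the unsimplified form passes the job ID explicitly with the srun --jobid
option (the same option used with mpirun -srun in Example 10-7) and produces the same output:
$ srun --jobid=150 hostname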
Note:
Be sure to unset SLURM_JOBID when you are finished with the allocation, to prevent a previous
SLURM job ID from interfering with future jobs:
$ unset SLURM_JOBID
The following examples illustrate launching interactive MPI jobs. They use the hellompi job script
introduced in Section 5.3.2 (page 50).
Example 10-7 Launching an Interactive MPI Job
$ mpirun -srun --jobid=150 hellompi
Hello! I'm rank 0 of 4 on n1
Hello! I'm rank 1 of 4 on n2
Hello! I'm rank 2 of 4 on n3
Hello! I'm rank 3 of 4 on n4
Example 10-8 uses the -n8 option to launch one process on each core in the allocation.
Example 10-8 Launching an Interactive MPI Job on All Cores in the Allocation
This example assumes 2 cores per node.
$ mpirun -srun --jobid=150 -n8 hellompi
Hello! I'm rank 0 of 8 on n1
Hello! I'm rank 1 of 8 on n1
Hello! I'm rank 2 of 8 on n2
Hello! I'm rank 3 of 8 on n2
Hello! I'm rank 4 of 8 on n3
Hello! I'm rank 5 of 8 on n3
Hello! I'm rank 6 of 8 on n4
Hello! I'm rank 7 of 8 on n4
Alternatively, you can use the following:
$ export SLURM_JOBID=150
$ export SLURM_NPROCS=8
$ mpirun -srun hellompi
Hello! I'm rank 0 of 8 on n1
Hello! I'm rank 1 of 8 on n1
Hello! I'm rank 2 of 8 on n2
Hello! I'm rank 3 of 8 on n2
Hello! I'm rank 4 of 8 on n3
Hello! I'm rank 5 of 8 on n3
Hello! I'm rank 6 of 8 on n4
Hello! I'm rank 7 of 8 on n4
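When the allocation is no longer needed, unset both variables so that they do not affect later jobs, as in
the earlier note (a sketch; unset whichever variables you actually set):
$ unset SLURM_JOBID SLURM_NPROCS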
Use ssh to launch a TotalView debugger session, assuming that TotalView is installed and licensed and
that ssh X forwarding is properly configured:
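A minimal sketch of such a session, assuming the allocation from the earlier examples, a first allocated node
named n1, and the HP-MPI mpirun -tv option for starting the program under TotalView (these details are
assumptions for illustration, not the manual's own example):
$ ssh -X n1
$ export SLURM_JOBID=150
$ export SLURM_NPROCS=4
$ mpirun -tv -srun hellompi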