HP XC System 2.x Software User's Guide — Page 36
Example 2-3: Submitting a Non-MPI Parallel Job to Run One Task per Node
$ bsub -n4 -ext "SLURM[nodes=4]" -I srun hostname
Job <22> is submitted to default queue <normal>
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
n1
n2
n3
n4
2.3.5.3 Submitting an MPI Job
Submitting MPI jobs is discussed in detail in Section 7.4.5. The bsub command format to
submit a job to HP-MPI by means of the mpirun command is:
bsub -n num-procs [bsub-options] mpirun [mpirun-options] [-srun
[srun-options]] mpi-jobname [job-options]
The -srun option is required by the mpirun command to run jobs in the LSF partition.
The -n num-procs parameter specifies the number of processors the job requests; it is
required for parallel jobs. Any SLURM srun options that are included are job specific, not
allocation specific.
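As a sketch of passing a job-specific srun option through mpirun, the following assumes the ./hello_world binary used in the examples below and the standard srun -l option, which prefixes each output line with its task number; whether it is accepted depends on the SLURM version installed at your site.

```shell
# Request 4 processors; -l is passed to srun for this job only,
# labeling each line of output with the rank that produced it.
bsub -n4 -I mpirun -srun -l ./hello_world
```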
Using SLURM Options in MPI Jobs with the LSF External Scheduler
An important option that can be included in submitting HP-MPI jobs is LSF's external scheduler
option. The LSF external scheduler provides additional capabilities at the job level and queue
level by allowing the inclusion of several SLURM options in the LSF command line. For
example, it can be used to submit a job to run one task per node, or to submit a job to run
on specific nodes. This option is discussed in detail in Section 7.4.2. An example of its use
is provided in this section.
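For the second case mentioned above, running on specific nodes, a hedged sketch follows; the nodelist= constraint is an assumption here, so consult Section 7.4.2 for the SLURM constraints your LSF installation actually accepts.

```shell
# Assumed syntax: constrain the allocation to nodes n1 through n4
# via the external scheduler, then run one instance of hostname
# per allocated node.
bsub -n4 -ext "SLURM[nodelist=n[1-4]]" -I srun hostname
```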
Consider an HP XC configuration where lsfhost.localdomain is the LSF execution host
and nodes n[1-10] are compute nodes in the LSF partition. All nodes contain two processors,
providing 20 processors for use by LSF jobs.
Example 2-4: Running an MPI Job with LSF
$ bsub -n4 -I mpirun -srun ./hello_world
Job <24> is submitted to default queue <normal>.
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
Hello world!
Hello world! I’m 1 of 4 on host1
Hello world! I’m 3 of 4 on host2
Hello world! I’m 0 of 4 on host1
Hello world! I’m 2 of 4 on host2
Example 2-5: Running an MPI Job with LSF Using the External Scheduler Option
$ bsub -n4 -ext "SLURM[nodes=4]" -I mpirun -srun ./hello_world
Job <27> is submitted to default queue <normal>.
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
Hello world!
Hello world! I’m 1 of 4 on host1
2-10 Using the System