HP XC System 2.x Software User Manual, Page 35

2.3.5.2 Submitting a Non-MPI Parallel Job
Submitting non-MPI parallel jobs is discussed in detail in Section 7.4.4. The LSF bsub
command format to submit a simple non-MPI parallel job is:
bsub -n num-procs [bsub-options] srun [srun-options] executable
[executable-options]
The bsub command submits the job to LSF-HPC.
The -n num-procs parameter specifies the number of processors requested for the job. This
parameter is required for parallel jobs.
The inclusion of the SLURM srun command is required in the LSF-HPC command line to
distribute the tasks on the allocated compute nodes in the LSF partition.
The executable parameter is the name of an executable file or command.
Consider an HP XC configuration where lsfhost.localdomain is the LSF-HPC
execution host and nodes n[1-10] are compute nodes in the SLURM lsf partition. All
nodes contain two processors, providing 20 processors for use by LSF-HPC jobs. The following
example shows one way to submit a non-MPI parallel job on this system:
Example 2-2: Submitting a Non-MPI Parallel Job
$ bsub -n4 -I srun hostname
Job <21> is submitted to default queue <normal>
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
n1
n1
n2
n2
In the above example, the job output shows that the job srun hostname was launched
from the LSF execution host lsfhost.localdomain, and that it ran on four processors
from the allotted nodes n1 and n2.
Refer to Section 7.4.4 for an explanation of the options used in this command, and for full
information about submitting a parallel job.
Using SLURM Options with the LSF External Scheduler
An important option that can be included in submitting parallel jobs is LSF-HPC's external
scheduler option. The LSF-HPC external SLURM scheduler provides additional capabilities at
the job and queue levels by allowing the inclusion of several SLURM options in the LSF-HPC
command line. For example, it can be used to submit a job to run one task per node, or to
submit a job to run on only specified nodes.
The format for this option is:
-ext "SLURM[slurm-arguments]"
The slurm-arguments can consist of one or more srun allocation options (in long format).
Refer to Section 7.4.2 for additional information about using the LSF-HPC external scheduler.
The Platform Computing LSF documentation provides more information on general external
scheduler support. Also see the lsf_diff(1) manpage for information on the specific srun
options available in the external SLURM scheduler.
The following example uses the external SLURM scheduler to submit one task per node (on
SMP nodes):
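The example itself appears on the next page of the manual; a minimal sketch of such a command line, assuming the long-format nodes argument is accepted by the external SLURM scheduler on this system, might look like the following:

```shell
# Hedged sketch, not taken from this page of the manual: request 4
# processors (-n4) while asking the external SLURM scheduler for 4
# nodes, so that each SMP node runs exactly one task. The
# SLURM[nodes=4] argument is an assumed long-format srun allocation
# option; adapt it to your site's configuration.
bsub -n4 -ext "SLURM[nodes=4]" -I srun hostname
```

With two-processor SMP nodes, omitting the external scheduler option could place two of the four tasks on the same node; constraining the allocation to four nodes forces one task per node.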
Using the System 2-9