How to run a hybrid MPI and OpenMP job

This page describes the compilation and execution of a hybrid MPI/OpenMP program using the SLURM scheduler. It is not a tutorial on writing MPI or OpenMP programs in C, and assumes that the user is already familiar with those concepts. An example that mixes Open MPI with OpenMP is given to illustrate how a hybrid MPI/multi-threaded job can be handled with SLURM.

Example Open MPI/OpenMP program in C: mixed_hello.c
#include <stdio.h>
#include "mpi.h"
#include <omp.h>

int main(int argc, char *argv[]) {
  int numprocs, rank, namelen;
  char processor_name[MPI_MAX_PROCESSOR_NAME];
  int iam = 0, np = 1;

  /* Initialise MPI and query the rank, communicator size and host name */
  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Get_processor_name(processor_name, &namelen);

  /* Each MPI rank opens an OpenMP parallel region; every thread prints one line */
  #pragma omp parallel default(shared) private(iam, np)
  {
    np = omp_get_num_threads();
    iam = omp_get_thread_num();
    printf("Hello from thread %d out of %d from process %d out of %d on %s\n",
           iam, np, rank, numprocs, processor_name);
  }

  MPI_Finalize();
  return 0;
}
Compiling and linking the hybrid MPI/OpenMP program mixed_hello.c

Compiling and linking can be done in one go, as shown below

 $ module load openmpi.gcc/4.0.3 gcc/9.1.1
 $ mpicc -fopenmp mixed_hello.c -o mixed_hello
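
To check which underlying compiler and flags the MPI wrapper invokes, the Open MPI compiler wrapper accepts the --showme option (the exact output depends on the local installation):

 $ mpicc --showme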
SLURM batch script job_MPI_hybrid.slurm to run the hybrid MPI/OpenMP job

The following script requests resources and runs an MPI job that spawns 8 processes (MPI ranks) with 2 threads per process. The number of threads per process is passed to OpenMP through the SLURM variable SLURM_CPUS_PER_TASK.

#!/bin/bash

#SBATCH --job-name=mixed_hello
#SBATCH --mail-user=%u@oist.jp
#SBATCH --partition=compute
#SBATCH --ntasks=8
#SBATCH --cpus-per-task=2
#SBATCH --mem-per-cpu=200m

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
module load openmpi.gcc/4.0.3

srun --mpi=pmix ./mixed_hello

Alternatively, the last line may be replaced by mpirun -np ${SLURM_NTASKS} ./mixed_hello (in that case the number of processes in the MPI job is passed to the mpirun command through the SLURM variable SLURM_NTASKS). However, using srun is recommended for better integration with the SLURM scheduler.
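
For reference, the end of the batch script would then read as follows; only the launch line changes, and the #SBATCH directives above it remain the same:

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
module load openmpi.gcc/4.0.3

mpirun -np ${SLURM_NTASKS} ./mixed_hello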

Execution of the MPI job using the SLURM batch script

Below, the SLURM batch script is submitted to the SLURM scheduler, and the output is printed using the cat command. Notice that, for each process, all of its threads run on the same node.


 $ sbatch job_MPI_hybrid.slurm
Submitted batch job 545323

 $ cat slurm-545323.out 
Hello from thread 0 out of 2 from process 5 out of 8 on sango11019
Hello from thread 1 out of 2 from process 5 out of 8 on sango11019
Hello from thread 1 out of 2 from process 3 out of 8 on sango11018
Hello from thread 1 out of 2 from process 2 out of 8 on sango11017
Hello from thread 1 out of 2 from process 4 out of 8 on sango11019
Hello from thread 0 out of 2 from process 1 out of 8 on sango11016
Hello from thread 1 out of 2 from process 1 out of 8 on sango11016
Hello from thread 1 out of 2 from process 6 out of 8 on sango11020
Hello from thread 0 out of 2 from process 6 out of 8 on sango11020
Hello from thread 0 out of 2 from process 7 out of 8 on sango11020
Hello from thread 1 out of 2 from process 7 out of 8 on sango11020
Hello from thread 0 out of 2 from process 3 out of 8 on sango11018
Hello from thread 0 out of 2 from process 2 out of 8 on sango11017
Hello from thread 1 out of 2 from process 0 out of 8 on sango11016
Hello from thread 0 out of 2 from process 0 out of 8 on sango11016
Hello from thread 0 out of 2 from process 4 out of 8 on sango11019
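
After the job has finished, the resources that were actually allocated can be checked through SLURM accounting (assuming accounting is enabled on the cluster), for example:

 $ sacct -j 545323 --format=JobID,AllocNodes,AllocCPUS,State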