On the Singularity website you can find some notes on using OpenMPI to run MPI applications. However, there’s very little information available on using IntelMPI (which is the MPI that NSC recommends for most applications). Luckily, IntelMPI also works in Singularity.
Compared to “simple” standalone containers, making MPI work takes a little more effort. You will need to install interconnect (InfiniBand, Omni-Path) libraries in the container that reasonably match what the actual compute node is using. You then need to launch one instance of Singularity per MPI rank (e.g. “mpiexec.hydra -bootstrap slurm singularity exec myimage.sif ~/mympiapp”).
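As a rough sketch of what this looks like in practice, the Slurm job script below launches a containerized MPI application with Intel MPI’s mpiexec.hydra. The image name (myimage.sif), application path (~/mympiapp), module name and Slurm settings are placeholders; adjust them to your own project and cluster.

    #!/bin/bash
    #SBATCH -J mpi-container         # job name (placeholder)
    #SBATCH -N 2                     # number of nodes (placeholder)
    #SBATCH --ntasks-per-node=32     # MPI ranks per node (placeholder)
    #SBATCH -t 00:30:00              # walltime (placeholder)

    # Make mpiexec.hydra available; the module name is an assumption,
    # check what your cluster actually provides.
    module load intel-mpi

    # Launch one Singularity instance per MPI rank, letting Slurm
    # bootstrap the Hydra process manager.
    mpiexec.hydra -bootstrap slurm singularity exec myimage.sif ~/mympiapp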
In theory, the overhead of using Singularity should be almost zero.
We have tested a very communication-intensive MPI application and seen no performance impact from using Singularity.
If you see worse performance than you expected when using Singularity, please let us know.