MPI libraries for parallel applications

The Message Passing Interface (MPI) is the typical way to parallelize applications on clusters, so that they can run on many compute nodes simultaneously. An overview of MPI is available on Wikipedia. The MPI libraries we have on the clusters are mostly tested with C/C++ and Fortran, but bindings for many other programming languages can usually be found on the internet. For example, for Python, you can use the mpi4py module.

Available MPI Libraries on NSC's systems

NSC mainly provides Intel MPI and OpenMPI. Other MPI libraries might also be available on specific systems, or as part of a commercial software package, but they are not actively maintained for general production use. Here is a list of MPI libraries installed on Triolith:

  • Intel MPI. Versions 4.0.X and 4.1.X are available as of Apr. 2014. There is usually a module called impi/[version] for each version. Intel has official documentation online, and the Triolith-specific details of our installations are covered in the Triolith software guide.
  • OpenMPI. Versions 1.4.X and 1.6.X are provided as of Apr. 2014. The modules are called openmpi/[version]. The installations that lack modules are generally experimental; detailed descriptions can be found on the OpenMPI page. The OpenMPI documentation might also be useful.
  • MVAPICH2. We have tested various versions of MVAPICH2 1.9 and 2.0 internally and they appear to work, but there are no modules and no official support in terms of e.g. compiler wrappers and mpprun. Contact us if you need to use MVAPICH2. There are also user guides available online.

One way to see which versions of Intel MPI and OpenMPI are installed on a cluster is to run "module avail" (look for "impi" and "openmpi"), e.g.:

$ module avail|grep ^impi
impi/4.0.3.008                                       2013/03/13 14:47:43
impi/4.1.0.024                                       2013/03/13 14:49:53
impi/4.1.0.027                                       2013/03/13 14:51:46
impi/4.1.0.030                                       2013/03/13 14:53:57
impi/4.1.1.036                                       2013/10/18 15:40:01
impi/4.1.3.048                                       2014/02/19 12:18:19
impi/recommendation                      default     2013/11/14  9:17:58

Which MPI library do you recommend?

NSC suggests Intel MPI as the first choice, as it has shown the best performance for most applications. In particular, Intel MPI binds MPI processes to cores automatically (and correctly, in most cases).

If your application experiences unexpected crashes with Intel MPI, try OpenMPI, or explore the I_MPI_COMPATIBILITY environment variable to disable optimizations that depend on MPI-2.2 compliant behavior.
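
For example, to enable the backward-compatible behaviour, you could set the environment variable in your job script before launching the application. The value shown here is only an illustration; check the Intel MPI documentation for which compatibility level applies to your installation:

export I_MPI_COMPATIBILITY=4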

Configuration and Compatibility with Compilers

We recommend that first-time users start from NSC's default build environment by loading the module:

module load build-environment/nsc-recommended

which currently loads a recommended Intel MPI version along with the default compiler and math library. Experienced users who want to use another MPI library can simply load that specific MPI module, which replaces the one loaded above. For example, to switch from the default Intel MPI to OpenMPI 1.6.4:

$ module load openmpi/1.6.4-build1 
Unloading conflicting module 'impi/4.0.3.008' before proceeding

All versions of the Intel MPI library are compatible with both Intel and GNU compilers. NSC's MPI wrapper automatically detects which compiler is being used and links in the right MPI library at compile time.

In contrast, individual OpenMPI or MVAPICH2 installations are usually compatible with only one specific compiler, so there may be several installations of the same MPI version built for different compilers. Some of the available combinations are documented on NSC's software page. In some cases, it is also possible to tell the intended use of an MPI installation from its directory path. The path to an MPI library is typically:

/software/mpi/MPI_VENDOR/MPI_VERSION/COMPILER_VERSION/...

where 'COMPILER_VERSION' is, by convention, the compiler's name plus version, e.g. i1312 for Intel compiler version 13.1.2 and g472 for GNU compiler version 4.7.2.

Building an Executable

MPI-parallelised code is usually compiled by calling special compiler wrapper commands provided by the MPI library. The table below lists the wrappers that are commonly found:

Language     Compiler   Command
C            gcc        mpicc
C            icc        mpiicc
C++          g++        mpicxx
C++          icpc       mpiicpc
Fortran90+   gfortran   mpif90
Fortran90+   ifort      mpiifort
Fortran77    gfortran   mpif77
Fortran77    ifort      mpiifort
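
The compilation examples in the following subsections use a small MPI hello-world program, mpihello.c. The actual file is not included in this guide; a minimal version could look like the following sketch:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    /* Initialize MPI and find out this process' rank and the total number of ranks */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}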

Compile on Bi

On Bi you can compile MPI-based codes either by calling the MPI compiler wrapper (e.g., mpiicc) or by the NSC-specific way of adding the MPI flag to a non-MPI compilation command (e.g., icc -Nmpi). That is,

mpiicc -o mpihello -O2 mpihello.c

and

icc -Nmpi -o mpihello -O2 mpihello.c

should be equivalent. Note that NSC's compiler wrapper is still called indirectly when you use an MPI compilation wrapper, since mpicc calls the C compiler, which in turn calls the linker wrapper.
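
If you are curious about what the wrapper does, the Intel MPI compiler wrappers accept a -show option that prints the underlying compile and link command without executing it, e.g.:

mpiicc -show -o mpihello -O2 mpihello.c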

Compile on other clusters

On the other clusters at NSC, we do not recommend using mpicc et al. directly. Instead, please use NSC's special MPI compilation flag:

icc -Nmpi -o mpihello -O2 mpihello.c

This compiles and links your program against the currently loaded MPI module.

Running the Application

In general, you launch a parallel executable by calling the mpirun command or an equivalent. On all of NSC's systems, we provide a utility called mpprun that simplifies MPI launching.

We highly recommend launching MPI applications through mpprun, since:

  1. You do not need to specify how many ranks to start.
  2. It can load the right job environment (e.g., compiler, MPI library, math library), based on information extracted from a tag embedded in the binary.
  3. Mpprun correctly assigns CPU cores when an application uses fewer cores per node, e.g. when running memory-intensive applications (whereas Intel MPI's mpirun launcher is erroneous in that situation).
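
As an illustration, a minimal Slurm batch script for an MPI job could look like the sketch below (the job name, two-node allocation, ten-minute time limit and executable name are just examples):

#!/bin/bash
#SBATCH -J mpihello
#SBATCH -N 2
#SBATCH -t 00:10:00

mpprun ./mpihello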

Useful Tips

Intel MPI

  • To use 8-byte integers, add -ilp64 as a compilation flag or as a global option at runtime. It can be combined with the compiler's own 8-byte integer flag (see the example after this list).
  • Extra runtime information, e.g. about processor binding, can be obtained by setting the I_MPI_DEBUG environment variable. Debug level 4 or higher provides enough information. To print this information, launch the executable as follows:
mpprun --pass="-genv I_MPI_DEBUG 5" $(EXE)
  • Binding of MPI processes to specific processor cores can be controlled with the -binding flag. It can also be handled manually by setting the I_MPI_PIN_PROCESSOR_LIST variable. Details can be found on the Intel MPI website.
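
For example, a Fortran code built with 8-byte default integers could be compiled like this (mpihello_i8 and mpihello.f90 are hypothetical names used for illustration):

mpiifort -i8 -ilp64 -O2 -o mpihello_i8 mpihello.f90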

OpenMPI

  • Detailed configuration information for an installation is provided by the ompi_info command, including information on processor affinity (see the example after this list).
  • Processor binding in OpenMPI is handled manually by passing extra arguments at runtime. Unfortunately, the exact way to do this varies depending on the OpenMPI version. NSC's experiments with OpenMPI 1.4 showed that performance was usually better with processor binding enabled. A good choice is usually --bind-to-core --bycore if you are using all cores on a node. This is enabled by launching the application as follows:
mpprun --pass="--bind-to-core --bycore" $(EXE)
