We provide some utilities that make it easier to compile, link and run programs on NSC systems: the build environment modules and the parallel job launcher `mpprun`. Their purpose is to help handle different mpirun-type programs and library paths. When a build environment is loaded, additional dependencies and tools may also become available as modules.
To build software at NSC you should normally first load a build environment module. This gives you access to compilers, an MPI library, and some core numerical libraries (for Intel build environments, Intel MKL). If you try to run the system gcc (e.g., by simply issuing `gcc` or `make`) without first loading a build environment module, you get an error message that points you to loading an appropriate module. NSC suggests using the recommended version of the Intel build environment, but there is also a gcc-based one if you need it.
To load the Intel build environment, first find out the recommended version:
[rar@tetralith2 ~]$ module load buildenv-intel
*************** NO MODULE LOADED *****************
***** Please also specify desired version ********
This is a notice to inform you that NSC has deprecated the use
of default modules for this software. You will now instead need
to specify version number/name as well.
The recommended version of build-env intel is 2023a-eb, i.e., add it with
module load buildenv-intel/2023a-eb
Lmod has detected the following error: No module loaded.
While processing the following module(s):
Module fullname Module Filename
--------------- ---------------
buildenv-intel/recommendation /software/sse2/tetralith_el9/modules/buildenv-intel/recommendation.lua
From that message, one knows to load the recommended version:
[rar@tetralith2 ~]$ module load buildenv-intel/2023a-eb
This command gives access to compilers and libraries for building software. For the Intel build environment the recommended compilers are:
For C: `mpiicc` (note the double "i", signifying the MPI-wrapped `icc` compiler) or, for non-MPI software, `icc`.
For C++: `mpiicpc` or, for non-MPI software, `icpc`.
For Fortran: `mpiifort` or, for non-MPI software, `ifort`.
(Note: there is normally no harm in using the MPI-wrapped compilers even for non-MPI software.)
Let us say that you have an MPI C program called `hello_world_mpi.c` with no complicated dependencies. You can build it as follows:
[rar@tetralith2 ~]$ module load buildenv-intel/2023a-eb
[rar@tetralith2 ~]$ mpiicc -o hello_world_mpi hello_world_mpi.c
[rar@tetralith2 ~]$ ls
hello_world_mpi hello_world_mpi.c
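For reference, a minimal `hello_world_mpi.c` could look like the following sketch (the exact contents are illustrative; any standard MPI "hello world" program will do):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    /* Initialize MPI and find out this rank's id and the total number of ranks */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello, world, I am %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}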
If you are using the buildenv-gcc build environment, e.g.,
[rar@tetralith2 ~]$ module load buildenv-gcc/2022a-eb
the relevant compilers are:
For C: `mpicc` or, for non-MPI software, `gcc`.
For C++: `mpic++` or, for non-MPI software, `g++`.
For Fortran: `mpifort` or, for non-MPI software, `gfortran`.
Note: if you wish to switch between the intel and gcc buildenvs, it is important to run `module purge` before loading the new buildenv. This ensures a clean switch of environments.
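For example, switching from the intel to the gcc buildenv and rebuilding the earlier example could look like this (a sketch using the module versions shown above):
[rar@tetralith2 ~]$ module purge
[rar@tetralith2 ~]$ module load buildenv-gcc/2022a-eb
[rar@tetralith2 ~]$ mpicc -o hello_world_mpi hello_world_mpi.c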
Many software packages require a number of dependency libraries to be in place before they can be built. On Tetralith and Sigma we have build environments that supply a large number of such dependency libraries (see below). On other systems a few select dependency libraries and tools are available, found as usual with `module avail`. In most other cases you will have to build the needed libraries yourself using the provided compilers.
On Tetralith and Sigma NSC provides two types of build environment modules:
Build environments ending in `-bare` only come with access to a bare minimum of dependency libraries (for the Intel compilers, typically what Intel distributes alongside the compilers); everything else you need to build yourself.
Build environments ending in `-eb` have been created using the EasyBuild build tool. Upon loading these modules, a subsequent `module avail` will show a number of modules for dependencies and tools that can significantly help when building software.
NSC recommends using an `-eb` build environment.
The following example shows the loading of an `-eb` build environment. First load the module; notice the message pointing out that loading it gives you access to dependency libraries.
[rar@tetralith1 ~]$ module load buildenv-intel/2018a-eb
***************************************************
You have loaded a buildenv module
***************************************************
The buldenv-intel module makes available:
- Compilers: icc, ifort, etc.
- Mpi library with mpi-wrapped compilers: intel mpi with mpicc, mpifort, etc.
- Numerical libraries: intel MKL
It also makes a set of dependency library modules available via
the regular module command. Just do:
module avail
to see what is available.
NOTE: You should never load build environments inside submitted jobs.
(with the single exception of when using supercomputer time to compile code.)
Then check what dependency libraries you have access to:
[rar@tetralith1 ~]$ module avail
---- /software/sse/easybuild/prefix/modules/all/MPI/intel/2018.1.163-GCC-6.4.0-2.28/impi/2018.1.163 -----
ABINIT/8.8.2-nsc1 SCOTCH/6.0.4-nsc1
Amber/16-AmberTools-17-patchlevel-8-12-nsc1 Siesta/4.0.1-nsc1
Boost/1.66.0-Python-2.7.14-nsc1 Siesta/4.1-b3-nsc1
CDO/1.9.2-nsc1 UDUNITS/2.2.25-nsc1
CGAL/4.11-Python-2.7.14-nsc1 X11/20171023-nsc1 (D)
...
------------------- /software/sse/easybuild/prefix/modules/all/Compiler/GCCcore/6.4.0 -------------------
CMake/3.9.1-nsc1 Python/2.7.14-bare-nsc1 gperf/3.1-nsc1
CMake/3.9.4-nsc1 SQLite/3.21.0-nsc1 intltool/0.51.0-Perl-5.26.0-nsc1
CMake/3.9.5-nsc1 Szip/2.1.1-nsc1 libdrm/2.4.88-nsc1
CMake/3.10.0-nsc1 Tcl/8.6.8-nsc1 libffi/3.2.1-nsc1
...
--------------------------------- /home/rar/EasyBuild/modules/all/Core ----------------------------------
Anaconda2/5.0.1-nsc1 (D) gettext/0.19.8.1-nsc1 ncurses/6.0-nsc1
...
---------------------------- /software/sse/easybuild/prefix/modules/all/Core ----------------------------
Anaconda2/5.0.1-nsc1 GCC/6.4.0-2.28 (D) intel/2015a
Anaconda3/5.0.1-nsc1 GCCcore/6.4.0 (L) intel/2018a (L,D)
EasyBuild/3.5.3-nsc17d8ce4 (L) foss/2015a ncurses/6.0-nsc1
Eigen/3.3.4-nsc1 foss/2018a (D)
...
----------------------------------------- /software/sse/modules -----------------------------------------
ABINIT/recommendation (D)
ABINIT/8.8.2-nsc1-intel-2018a-eb
Amber/recommendation (D)
...
Where:
L: Module is loaded
D: Default Module
Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".
Many of the sections in the above output of the `module avail` command were not visible until you loaded the buildenv module. Now that the module is loaded, you access these as usual with the appropriate `module load` commands.
For example, if you have a program called `use_matheval.c` that requires linking with the Matheval library using `-lmatheval`, you can first load the buildenv module as shown above and then do this:
[rar@tetralith1 ~]$ module load libmatheval/1.1.11-nsc1
[rar@tetralith1 ~]$ mpiicc -o use_matheval use_matheval.c -lmatheval
In case the software that you compile has makefiles (or equivalent) which require you to specify paths to dependencies, the module makes those paths available as environment variables. The easiest way to discover them is with the `module show` command, which shows exactly what variables are being set.
Here we investigate the libmatheval module:
[rar@tetralith1 ~]$ module show libmatheval/1.1.11-nsc1
------------------------------------------------------------------------------------
/software/sse/easybuild/prefix/modules/all/Compiler/GCCcore/6.4.0/libmatheval/1.1.11-nsc1.lua:
------------------------------------------------------------------------------------
help([[
Description
===========
GNU libmatheval is a library (callable from C and Fortran) to parse
and evaluate symbolic expressions input as text.
More information
================
- Homepage: http://www.gnu.org/software/libmatheval/
]])
whatis("Description: GNU libmatheval is a library (callable from C and Fortran) to parse
and evaluate symbolic expressions input as text.")
whatis("Homepage: http://www.gnu.org/software/libmatheval/")
conflict("libmatheval")
prepend_path("CPATH","/software/sse/easybuild/prefix/software/libmatheval/1.1.11-GCCcore-6.4.0-nsc1/include")
prepend_path("LIBRARY_PATH","/software/sse/easybuild/prefix/software/libmatheval/1.1.11-GCCcore-6.4.0-nsc1/lib")
prepend_path("PKG_CONFIG_PATH","/software/sse/easybuild/prefix/software/libmatheval/1.1.11-GCCcore-6.4.0-nsc1/lib/pkgconfig")
setenv("EBROOTLIBMATHEVAL","/software/sse/easybuild/prefix/software/libmatheval/1.1.11-GCCcore-6.4.0-nsc1")
setenv("EBVERSIONLIBMATHEVAL","1.1.11")
setenv("EBDEVELLIBMATHEVAL","/software/sse/easybuild/prefix/software/libmatheval/1.1.11-GCCcore-6.4.0-nsc1/easybuild/Compiler-GCCcore-6.4.0-libmatheval-1.1.11-nsc1-easybuild-devel")
Note how the environment variable `$EBROOTLIBMATHEVAL` is made to point to the root of libmatheval. Here one finds, e.g., the compiled library files (`.so` and `.a`) under `$EBROOTLIBMATHEVAL/lib` and the include files under `$EBROOTLIBMATHEVAL/include`. As you can see, these paths are also added to the `$LIBRARY_PATH` and `$CPATH` environment variables, so they get picked up automatically at compile time. This is why, in the example above, we did not have to add `-L "$EBROOTLIBMATHEVAL/lib"`: that path was already present in `$LIBRARY_PATH`. Also note that the modules do not set `$LD_LIBRARY_PATH`.
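If a build system nevertheless requires explicit paths, the `$EBROOT...` variables can be passed directly. A sketch of how this could look for the example above:
[rar@tetralith1 ~]$ mpiicc -o use_matheval use_matheval.c -I"$EBROOTLIBMATHEVAL/include" -L"$EBROOTLIBMATHEVAL/lib" -lmatheval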
If you want interactive access to test and debug your compiled program, that can be done as an interactive node job. More detailed information is available in the section about running applications.
For example, to reserve one interactive development node (i.e., 32 cores on Tetralith) and run your own compiled program (here, the `use_matheval` binary from above), do:
[rar@tetralith1 ~]$ interactive -N1 --exclusive --reservation=now -t 1:00:00
Waiting for JOBID 38222 to start
[rar@n76 ~]$ mpprun ./use_matheval
(... program output ...)
[rar@n76 ~]$ exit
[rar@tetralith1 ~]$
When your compiled program is ready to be used, you can run it with a submit script as if it was any other type of NSC software. Note that you should not load any build environment or dependency library modules in your submit script. If you have compiled your program according to the instructions above, that should not be necessary; all needed libraries should be found without doing so.
(Note: while NSC-provided dependency libraries are handled by the compiler wrappers, the default behavior is to not do anything with libraries that you have compiled yourself, i.e., under your `/home` or `/proj` directories. You can change this behavior by setting the `HPC_LD_FLAG` environment variable. See below for more details on how the compiler wrappers work.)
Here is an example submit script:
#!/bin/bash
#
#SBATCH -J myjobname
#SBATCH -t 00:30:00
#SBATCH -n 64
#
mpprun ./mympiapp
Note:
Use the `mpprun` command rather than `mpirun` or similar. mpprun inspects your program and runs the appropriate mpirun-type command for you. More details about mpprun are available below.
The submit script should not contain anything like `module load openmpi/1.5.4 mkl/11.1` or `module load libmatheval/1.1.11-nsc1`. (This is frequently necessary at other supercomputing centers.)
NSC uses a wrapper for the linker (ld) when compiling software. The main feature of the wrapper is to embed paths to libraries in the binary using the RPATH feature. This means the libraries will be found at runtime without having to set the `LD_LIBRARY_PATH` environment variable or load modules.
By default, the wrapper will embed the path to every library you use from the `/software/` directory, but not libraries from your own home directory. The motivation is that we assume libraries you have installed yourself are used for active software development, and you might want to exert full control over these. You can change this behavior by setting the `HPC_LD_FLAG` environment variable (see below).
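You can check which paths the wrapper embedded by inspecting the dynamic section of the resulting binary with the standard readelf tool, e.g., using the binary built earlier:
[rar@tetralith1 ~]$ readelf -d use_matheval | grep -i rpath
This should list the library directories (such as the libmatheval path above) recorded in the RPATH/RUNPATH entry.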
The linker wrapper is enabled for all our build environments. In most cases, the wrapper is transparent to users. Its behavior can be configured using a few environment variables:
HPC_LD_FLAG
0: no rpathing, skip the linker wrapper completely
1: rpath libraries in /software/ (default)
2: rpath libraries in all folders except /usr, /lib, /lib64, /tmp and /opt
Example:
export HPC_LD_FLAG=0
mpicc mpitest.c    # e.g., in the gcc build environment
The program will be compiled with MPI, but no rpathing will be done.
HPC_LD_EXTRA_LIBPATH
A colon-separated list of additional paths for rpathing. If HPC_LD_FLAG is set to 0, this variable is ignored.
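For example, to also rpath a library installed under your project storage (a sketch; the paths and library name are placeholders):
export HPC_LD_EXTRA_LIBPATH="/proj/myproj/mylibs/lib"
mpiicc -o myapp myapp.c -L/proj/myproj/mylibs/lib -lmylib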
HPC_COMPILER_VERBOSE
"false": no debug printout (default)
"ERR": print debug info on stderr during linking
"LOG": append debug info to the file `hpc_compiler_wrapper.log` in your home directory (useful when using compilation tools that may hide output)
anything else: print debug info on stdout
NSC provides an MPI job launching tool called `mpprun`. We strongly recommend that you use mpprun instead of mpirun or similar to start an MPI job. The mpprun command should be available by default when you log in to a cluster.
As mentioned before, the main benefit of mpprun is that it automatically handles differences in how to correctly start an MPI program. If an MPI binary is built according to NSC recommendations, mpprun can detect the correct MPI library and the corresponding command to start an MPI job, e.g., `mpirun` or `mpiexec.hydra`. mpprun then calls the native MPI launcher from within itself. If the OpenMP environment variable controlling the number of threads is unset when launching an MPI application with mpprun, mpprun will by default set `OMP_NUM_THREADS=1`. mpprun also writes some useful information about the job to the system log file. For the full list of mpprun options, use `mpprun --help`.
Here is an example job script using mpprun. It runs an MPI application on two nodes on Tetralith (64 cores):
#!/bin/bash
#
#SBATCH -J myjobname
#SBATCH -t 00:30:00
#SBATCH -n 64
#
mpprun ./mympiapp
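For a hybrid MPI+OpenMP application you can combine mpprun with the sbatch -c flag; mpprun picks up SLURM_CPUS_PER_TASK and, if OMP_NUM_THREADS is unset, sets it to the same value (see the --cpus-per-task option in the help text below). A sketch, with the application name as a placeholder:
#!/bin/bash
#
#SBATCH -J myhybridjob
#SBATCH -t 00:30:00
#SBATCH -n 16
#SBATCH -c 4
#
mpprun ./myhybridapp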
mpprun also works in interactive sessions. Below, we ask for an interactive session on two nodes on Tetralith and test an MPI program:
[kronberg@tetralith1 mpi] interactive -n64 --exclusive -t 00:10:00 --reservation=now
Waiting for JOBID 77079 to start
...
[kronberg@n1137 mpi]$ mpprun mpitest_c
mpprun INFO: Starting impi run on 2 nodes (64 ranks)...
Hello, world, I am 16 of 64
[...]
Hello, world, I am 31 of 64
Hello, world, I am 6 of 64
[kronberg@n1137 mpi]$ mpprun mpitest_c_openmpi
[kronberg@n1137 mpi]$
[kronberg@n1137 mpi]$ exit
[screen is terminating]
Connection to n1137 closed.
[kronberg@tetralith1 mpi]$
For all the `mpprun` options, refer to:
[weiol@tetralith2 ~]$ mpprun --help
usage: mpprun [-h] [--version] [-n NRANKS] [--launcher LAUNCHER] [--handler HANDLER] [--compat {el7}] [--pass EXTRA_LAUNCH_ARGS] [-c CPUS_PER_TASK] [-d] [-v] [-q] [-i] [--allinfo] executable ...
This is a helper program to figure out what MPI launcher to use, with which arguments, in what environment (including which HPC modules to load) when launching a binary or script that uses MPI.
positional arguments:
executable The binary or script to execute.
arguments Arguments to pass to the executable.
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-n NRANKS, --nranks NRANKS, --np NRANKS
Specify the number of MPI tasks to run. (For compatibility, "-np" also works)
--launcher LAUNCHER Specify a preferred underlying MPI launcher to use. Use "native" for the launcher used to build the software with (if it can be determined).
--handler HANDLER Specify a preferred launcher handler to use
--compat {el7} Run executable via a compatibility wrapper, e.g., use the argument --compat el7 to run in an environment that mimics the Tetralith el7 system.
--pass EXTRA_LAUNCH_ARGS
Pass options to the underlying MPI launcher (also requires --launcher). Note: one MUST use the form --pass="--example" to handle usual arguments that starts with one or more dashes.
-c CPUS_PER_TASK, --cpus-per-task CPUS_PER_TASK
Configure how to use CPU cores and threads. The specified amount CPU cores is allocated per MPI process. The default is the value of the environment variable SLURM_CPUS_PER_TASK (or 1 if unset), which means that it is set by the SLURM job configuration
(e.g., the "-c" flag to sbatch). Environment variables that control the number of threads, e.g., OMP_NUM_THREADS, if unset, will be set to the same value (i.e., one OpenMP thread per core).
-d, --debug Produce full tracebacks on error
-v, --verbose Increase verbosity of output
-q, --quiet Decrease verbosity of output
-i, --info Only inspect the executable and show available launchers/options.
--allinfo Same as --info, but list *all* launchers (e.g., including those for other systems and compatibility layers).
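For example, passing an extra argument to a specific underlying launcher might look like this (a sketch; --report-bindings is an Open MPI mpirun option and ./mympiapp a placeholder):
mpprun --launcher mpirun --pass="--report-bindings" ./mympiapp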
Note that the previous NSC compiler wrapper flags `-Nhelp`, `-Nmkl`, `-Nmpi` and `-Nverbose` are not available. The `dumptag` utility to inspect build information encoded inside a binary is no longer supported either.
(This segment is mostly for advanced users, and applies to Tetralith and Sigma only.)
For users who want a more automated build tool for building complex software, EasyBuild is available on Tetralith and Sigma. The EasyBuild website contains more information about EasyBuild.
To use EasyBuild, load the easybuild buildtool module, e.g.:
[rar@tetralith2 ~]$ module load buildtool-easybuild/4.8.0-hpce082752a2
This makes the command `eb` available. At NSC the default configuration of EasyBuild is to build software in your home directory under the subdirectory `EasyBuild`. This can, however, easily be changed by modifying `$EASYBUILD_PREFIX` after loading the easybuild module. Note that loading the easybuild module also sets a number of environment variables of the form `$EASYBUILD_*` that affect the behavior of EasyBuild (you can see them with `module show buildtool-easybuild/<version>`). The NSC setup is made so that EasyBuild can use existing centrally built EasyBuild packages at NSC. However, note that NSC EasyBuild modules frequently use different names than the standard ones, which means that you need to edit `.eb` files to point to the correct package names. (The reason NSC names differ is that we typically add an -nsc(number) (or -hpc) suffix to versions, to be able to upgrade builds without removing old versions; this is necessary to not break existing compiled software when libraries are accessed using rpath.)
You can find prepared `.eb` files in the easybuild-easyconfigs repository.
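As a sketch, building software into project storage instead of your home directory could look like this (the project path and easyconfig name are placeholders):
[rar@tetralith2 ~]$ module load buildtool-easybuild/4.8.0-hpce082752a2
[rar@tetralith2 ~]$ export EASYBUILD_PREFIX=/proj/myproj/easybuild
[rar@tetralith2 ~]$ eb myprogram-1.0.eb --robot
The --robot option lets eb resolve and build missing dependencies automatically.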