GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids.
The GROMACS installations on Tetralith and Sigma are compiled with support for
MPI+OpenMP execution and are intended to be launched with the NSC mpprun
launch wrapper. Some installations have been patched and compiled with PLUMED
support, and have corresponding modules tagged with the “PLUMED” label.
To see which versions are installed, use module avail:
module avail gromacs
/software/sse/modules:
... snip ...
GROMACS/2018.1-PLUMED-nsc2-gcc-2018a-eb
GROMACS/2018.1-nsc2-gcc-2018a-eb
GROMACS/2018.4-PLUMED-nsc1-gcc-7.3.0-bare
GROMACS/2018.4-nsc1-gcc-7.3.0-bare
GROMACS/2019.2-nsc1-gcc-7.3.0-bare
GROMACS/2019.6-nsc1-gcc-7.3.0-bare
Load the GROMACS module corresponding to the version you want to use, for instance:
module load GROMACS/2019.6-nsc1-gcc-7.3.0-bare
Versions older than 5.0.4 will not be covered in these instructions since they deviate significantly and are very old by now.
A minimal batch script for running a standard GROMACS mdrun looks something like this:
#!/bin/bash
#SBATCH -n 128
#SBATCH -t 4:00:00
#SBATCH -J jobname
#SBATCH -A SNIC-xxx-yyy
# Start with a clean environment
module purge
# Substitute with your required GMX module
module add GROMACS/2019.6-nsc1-gcc-7.3.0-bare
mpprun gmx_mpi mdrun <additional GMX job specification options>
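Assuming the script is saved as, say, run_gmx.sh (the file name is only illustrative), it is submitted and monitored with the standard SLURM commands:
sbatch run_gmx.sh
squeue -u $USER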
Note that you should edit the job name, account number, number of tasks and requested walltime to your liking before submitting. This should be the default way of running GROMACS mdrun at NSC; only consider hybrid MPI+OpenMP runs if you know you need to use OpenMP in addition to MPI (or instead of it, as the case may be).
If you need to run the hybrid MPI+OpenMP version of GROMACS, you should in addition tell SLURM how many CPU cores to use per rank with the -c option. Match this value when launching gmx_mpi using the -ntomp option and set OMP_NUM_THREADS accordingly. Also request that GROMACS pin ranks and threads with the -pin on option. An example batch script for this, using four Tetralith nodes as above, is:
#!/bin/bash
#SBATCH -n 64
#SBATCH -c 2
#SBATCH -t 4:00:00
#SBATCH -J jobname
#SBATCH -A SNIC-xxx-yyy
# Start with a clean environment
module purge
# Substitute with your required GMX module
module add GROMACS/2019.6-nsc1-gcc-7.3.0-bare
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
mpprun --pass="-cpus-per-rank ${SLURM_CPUS_PER_TASK}" gmx_mpi mdrun \
-ntomp ${SLURM_CPUS_PER_TASK} -pin on \
<additional GMX job specification options>
For most normal uses of GROMACS, running in hybrid mode is slightly detrimental to performance, which is not to say there are no use cases for hybrid runs. For performance reasons, it is recommended to keep the number of CPU cores per rank as low as possible and to avoid using more than 8; higher values are certainly possible, but performance will most likely degrade progressively.
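As a sketch of a lower rank count with more threads per rank on the same four Tetralith nodes (32 ranks times 4 OpenMP threads also fills 128 cores), the resource request could instead read:
#SBATCH -n 32
#SBATCH -c 4
with the rest of the script unchanged, since the mpprun and mdrun lines pick up the value of SLURM_CPUS_PER_TASK.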
Running a PLUMED-patched GROMACS with PLUMED input may require considerably more care to get reasonable performance from the run. It is advised to check that your pinning options do not oversubscribe the hardware resources. Log in to the nodes of the job (jobsh nXXX) and check the pinning with hwloc-ps -t as well as the resource utilisation.
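A minimal sketch of such a check from a login node:
jobsh nXXX        # replace nXXX with a node name from your job, as listed by squeue
hwloc-ps -t       # shows each rank with its threads and the cores they are bound to
If several threads are bound to the same core while other cores sit idle, the pinning options need adjusting.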
The GPU-enabled GROMACS installations at NSC are currently thread-MPI only, and can thus only be run on a single GPU-equipped node on Tetralith (or Sigma). The reason for this is that the scalability, i.e. the performance increase from adding more nodes/GPUs, of GPU-enabled GROMACS on Tetralith using standard MPI is very poor or non-existent, because the interconnect is not set up to be GPUDirect capable (for technical reasons).
The GPU-enabled GROMACS installations can be distinguished by their being built with the “gcccuda” toolchain, and you can list them with module -t avail gcccuda | grep GROMACS. Since they are tMPI-parallel, the binary is called gmx and it need not be launched with mpprun.
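As a rough sketch (the rank and thread counts are only illustrative and should be adjusted to the cores actually available on the GPU node), after loading one of the gcccuda-based modules, a thread-MPI mdrun filling a 32-core node can be launched directly, without mpprun:
gmx mdrun -ntmpi 4 -ntomp 8 -pin on <additional GMX job specification options>
Here -ntmpi sets the number of thread-MPI ranks and -ntomp the number of OpenMP threads per rank.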
If you for some reason need to compile GROMACS yourself, here are some very basic build instructions which should give you a working, well-performing installation that can be launched with the mpprun launch utility:
module load buildenv-gcc/7.3.0-bare CMake/3.15.2
mkdir temp_gmx_build_dir && cd temp_gmx_build_dir
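# Configure an MPI-enabled build with OpenMP support; GMX_BUILD_OWN_FFTW makes
# the build download and compile its own FFTW.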
cmake \
-D CMAKE_C_COMPILER=mpicc \
-D CMAKE_CXX_COMPILER=mpicxx \
-D GMX_OPENMP=ON \
-D GMX_MPI=ON \
-D GMX_BUILD_OWN_FFTW=ON \
-D GMX_BUILD_UNITTESTS=ON \
-D BUILD_TESTING=ON \
/path/to/unpacked/gromacs/source
make -j4
make tests # Builds the tests
make test # Runs the tests
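If you want to install the resulting binaries somewhere permanent rather than run them from the build directory, the usual CMake mechanism applies (the install path below is only an example): add an install prefix to the cmake call above and run make install:
cmake -D CMAKE_INSTALL_PREFIX=/proj/yourproject/gromacs-build <other options as above> /path/to/unpacked/gromacs/source
make install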
After you have built and tested, you can verify that your build can be launched with mpprun using the dumptag utility. For instance:
$ dumptag bin/gmx_mpi
File name /some/path/to/bin/gmx_mpi
NSC tags ---------------------------------
Build date 200317
Build time 122236
Built with MPI openmpi 3_1_2__gcc__7_3_0__bare
Linked with gcc 7_3_0__bare
Tag version 6
------------------------------------------
If dumptag does not report which MPI the binary was built with (or reports it wrongly), something has gone wrong with the build, and mpprun will not be able to launch it correctly. With respect to performance, compilers and MPI: GROMACS has so far performed best with GCC and OpenMPI on Tetralith, and dramatic changes in this regard are not expected, for various reasons. Our recommendation is to stick to these build tools.
If you are interested in how NSC builds and tests our GROMACS installations, see e.g. the file /software/sse/manual/GROMACS/2019.6/g73/nsc1/build.txt on Tetralith/Sigma.