AMBER

“Amber” refers to two things: a set of molecular mechanical force fields for the simulation of biomolecules (which are in the public domain, and are used in a variety of simulation programs); and a package of molecular simulation programs which includes source code and demos. This installation contains both of the above.

The Amber 16 user guide is available as a PDF file here.

Please contact NSC Support if you have any questions or problems.

Installations on NSC Systems

Tetralith and Sigma

AMBER is available via the module system on Tetralith and Sigma. For more information about available versions, please see the Tetralith and Sigma Software list.
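For example, you can query the module system directly to see which Amber versions are currently installed and to load one of them. The module name below is the one used in this guide; other versions may be listed on your system:

# List the Amber modules installed on the system
module avail Amber

# Load a specific version
module load Amber/16-nsc1-intel-2018a-eb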

How to run

All executables are located in a path made available by loading an Amber module file; e.g. "module load Amber/16-nsc1-intel-2018a-eb" makes all Amber 16 binaries available. How to run these is covered in the Amber user guide, except for the executables suffixed ".run", which are NSC-specific run scripts that launch the corresponding Amber MPI binary with a minimum of fuss. See the job script below for an example.

Example batch script:

#!/bin/bash

#SBATCH --time=10:00:00                #Requested walltime. 10h in this case.
#SBATCH --nodes=2 --exclusive          #Number of compute nodes to allocate
#SBATCH --account=liu-2012-00060-20    #Account string for the project that you wish to account the job to

# Set the working dir to wherever you submit the job from. This example batch
# script assumes that you have your job input files in this directory
WRKDIR=$(pwd)

job=jobname
module load Amber/16-nsc1-intel-2018a-eb # For instance. Use any amber version you want if there are many.

# If you run QM/MM with an external interface, be sure to load that
# module as well. If you interface to Gaussian, please uncomment and edit
# the line below as appropriate:
#module load Gaussian/09.E.01-avx-nsc1-bdist

# To use the node local disk, which may be very beneficial to
# performance, copy pertinent files there and change directory
# to it (uncomment and edit if this is what you want):
#cp ${WRKDIR}/{mdin,prmtop,inpcrd,restrt} ${SNIC_TMP}/ && cd ${SNIC_TMP}/

# Trap SIGTERM and copy the trajectory file (and other files) back from the
# node local disk if the job hits the walltime limit
trap 'if ls ${SNIC_TMP}/* >/dev/null 2>&1; then cp ${SNIC_TMP}/* ${WRKDIR}/; else echo "No run files found"; fi; echo "SIGTERM was trapped"' SIGTERM

# Run Amber. All Amber executables which are MPI capable have
# corresponding launcher scripts with suffix ".run". To check which
# executables are MPI compiled do a
# "ls $AMBERHOME/bin/*.run"
#
# To run an Amber MPI executable, uncomment and edit the line below to
# suit your needs. This is a generic example:
#pmemd.MPI.run -i mdin -o mdout -p prmtop -c inpcrd -r restrt

# Copy pertinent files to your working directory when the simulation
# finishes if you ran on the node local disk. Uncomment and edit:
#cp ${SNIC_TMP}/* ${WRKDIR}/

exit 0
#END OF SCRIPT

Note that you must edit at least the jobname, account string, walltime and number of requested nodes in the above before submitting!
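Once edited, the script is submitted with sbatch and can be monitored with squeue. The script file name below is just an example:

sbatch amber_job.sh     # submit the batch script to the queue
squeue -u $USER         # check the status of your queued and running jobs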

MMPBSA.py.MPI

The MMPBSA.py.MPI application must be launched differently from the other Amber MPI applications. NSC recommends the following steps (within a run script) for launching MMPBSA.py.MPI:

module purge
module load buildenv-intel/2018a-eb
module load Amber/16-nsc1-intel-2018a-eb
mpiexec.hydra -bootstrap slurm MMPBSA.py.MPI
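As a sketch of how these steps might fit into a complete batch script, the example below combines them with placeholder resource requests and input file names (mmpbsa.in, the topology files and traj.nc are assumptions; replace them with your own files and adjust the MMPBSA.py options to your calculation):

#!/bin/bash
#SBATCH --time=10:00:00
#SBATCH --nodes=2 --exclusive
#SBATCH --account=liu-2012-00060-20

module purge
module load buildenv-intel/2018a-eb
module load Amber/16-nsc1-intel-2018a-eb

# Example MMPBSA.py.MPI invocation. The input, topology and trajectory file
# names are placeholders for your own files.
mpiexec.hydra -bootstrap slurm MMPBSA.py.MPI -O -i mmpbsa.in -o results.dat \
    -cp complex.prmtop -rp receptor.prmtop -lp ligand.prmtop -y traj.nc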

Disclaimer

NSC takes no responsibility for the correctness of results produced with the binaries! Hence, always evaluate the binaries against known results for the systems and properties you are investigating before using the binaries for production jobs.

