The CPMD code is a parallelized plane wave / pseudopotential implementation of Density Functional Theory, particularly designed for ab-initio molecular dynamics.
The CPMD installations at NSC are generally done by Johan Raber (raber@nsc.liu.se).
In order to use CPMD you need a CPMD license, which is (as of April 2013) free for non-profit purposes. Getting such a license typically requires a registration at www.cpmd.org. To get access to the CPMD binaries provided by NSC, e-mail support@nsc.liu.se stating that you have a valid CPMD license, and we will add you to the correct unix group, thereby giving you access.
Provided you have a valid CPMD input file, here’s how you can run it from an interactive session:
@tetralith: $ interactive -N2 --exclusive -t 00:30:00 #This drops you in a shell on an allocated node
@node: $ module load CPMD/4.1-nsc1-intel-2018a-eb
@node: $ export CPMD_FILEPATH=${SNIC_TMP}
@node: $ cpmd.run input > output # Optionally add a path to pseudopotential libraries after the input file
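The example assumes an input file named input in the current directory. For orientation only, a minimal sketch of a CPMD input (modeled on the classic wavefunction optimization of an isolated hydrogen molecule from the CPMD documentation; coordinates in Bohr, and the pseudopotential file name assumes the standard library) could look like this:

&INFO
  Minimal sketch: wavefunction optimization of an isolated H2 molecule
&END
&CPMD
  OPTIMIZE WAVEFUNCTION
&END
&SYSTEM
  SYMMETRY
    1
  CELL
    8.0 1.0 1.0 0.0 0.0 0.0
  CUTOFF
    70.0
&END
&DFT
  FUNCTIONAL LDA
&END
&ATOMS
*H_MT_LDA.psp
  LMAX=S
  2
  4.700 4.000 4.000
  3.300 4.000 4.000
&END

See the CPMD manual for the full input syntax.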
Running it in batch mode is very similar; an example is available below. For optimal performance it is advised that you run your job on the node-local scratch disk, most conveniently by setting at runtime
export CPMD_FILEPATH=${SNIC_TMP}
as indicated in the example above. You can alternatively specify the path in your input file using the FILEPATH directive, but this is much less convenient and is discouraged unless you have an absolute need to set it in the input file (hard to imagine why, though); a sketch is shown below.
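For reference, a minimal sketch of what the FILEPATH directive looks like in the &CPMD section (the path is illustrative; CPMD reads it from the line following the keyword):

&CPMD
  MOLECULAR DYNAMICS
  FILEPATH
    /path/to/node/local/scratch
&END

Since the node-local scratch path is generally only known once the job is running, hard-coding it in the input file is exactly what makes this approach inconvenient compared to setting the environment variable.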
Putting all large output files under ${SNIC_TMP} in a job means that they need to be saved at the end of the job, otherwise they will be removed by the cleanup script that runs between jobs. An example of how to accomplish this is shown in the example batch job script below.
Provided PP_LIBRARY_PATH is unset, the run script will set the path to the standard pseudopotential library of CPMD from the time of the respective CPMD release. CPMD does not accept several search paths in this variable, so if you want to use pseudopotentials from different locations you will need to explicitly copy them to a third directory (for instance your working directory) and modify PP_LIBRARY_PATH accordingly. For instance:
export PP_LIBRARY_PATH=$(pwd) # If you have your PPs in your working directory
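A slightly fuller sketch of this copy-and-point approach (all paths and file names below are illustrative):

mkdir -p ${HOME}/my_pp_lib                                     # collection directory; name is arbitrary
cp /path/to/standard/library/O_MT_BLYP.psp ${HOME}/my_pp_lib/  # pseudopotential from the standard library
cp /path/to/your/own/H_MT_BLYP.psp ${HOME}/my_pp_lib/          # your own pseudopotential
export PP_LIBRARY_PATH=${HOME}/my_pp_lib                       # the single search path CPMD will use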
A batch script for running CPMD at NSC may look like this:
#!/bin/bash
#SBATCH -N 2
#SBATCH -t 4:00:00
#SBATCH -J jobname
#SBATCH --exclusive
#SBATCH -A SNIC-xxx-yy
WD=$(pwd)

module load CPMD/4.1-nsc1-intel-2018a-eb

# Run on the node-local scratch disk (see above)
export CPMD_FILEPATH=${SNIC_TMP}

# Trap SIGTERM and copy the larger files from ${CPMD_FILEPATH} if the job hits the walltime limit
trap 'cp ${CPMD_FILEPATH}/* ${WD}/; echo "SIGTERM was trapped"' SIGTERM

#export OMP_NUM_THREADS=2 # Uncomment if wanted

# Run CPMD in the background and wait for it, so that the SIGTERM trap can fire
cpmd.run input > output &
wait $!
exit_status=$?

# Copy results back from the node-local scratch disk
cp ${CPMD_FILEPATH}/* ${WD}/

if (( exit_status != 0 )); then
    echo "CPMD exited with error status $exit_status"
else
    echo "CPMD exited successfully."
fi
exit $exit_status
(Note that you should at least edit the jobname and the account number before submitting.)
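Assuming the script is saved as, say, cpmd_job.sh (the file name is arbitrary), it is submitted in the usual way:

$ sbatch cpmd_job.sh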
The run script (cpmd.run) is capable of running hybrid MPI/OpenMP jobs if you set the OMP_NUM_THREADS environment variable higher than one, e.g.
export OMP_NUM_THREADS=2 # Advisory: Don't go beyond two unless you really have to
The run script automatically adjusts the parameters needed to launch the parallel job optimally, inferring them from the environment variables set in the job environment, so there should normally be no need to write your own launcher.
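For illustration, a hybrid variant of the batch script above would only differ in the launch step; the two-thread split below is an assumption chosen to illustrate the mechanism, not a tuned recommendation:

export OMP_NUM_THREADS=2 # two OpenMP threads per MPI rank
cpmd.run input > output  # the run script adjusts the MPI launch parameters itself (see above)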
Viewing the output from CPMD can be done with the GUIs VMD, Avogadro or Gabedit, all available via the module system. For Avogadro and Gabedit you may need to post-process some output file(s) to produce so-called cube files; see the CPMD home page. Should you wish to produce ray-traced images and movies from the Avogadro or Gabedit interfaces, you will also need to load a povray module. You will additionally need to run with X forwarding or the VNC solution currently installed; NSC recommends the VNC solution.
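For the cube-file step, CPMD distributions include a cpmd2cube utility; whether it is installed alongside a given NSC module is not guaranteed, so treat the line below as a sketch and consult the CPMD documentation:

cpmd2cube.x DENSITY # converts a CPMD DENSITY file to cube format readable by Avogadro/Gabedit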