These settings worked OK for the provided example EX01-3D_Si_vasp and are specific for use together with VASP. For this case, Python2 was used.
Read the documentation and the README file. Check which version of USPEX you are going to install; other versions might need to be set up in a different way. Here we assume the use of Python2. The README file lists the required packages, including:
numpy scipy spglib pysqlite ase matplotlib
To include these dependencies, there is an already prepared Python virtual environment which you can copy from:
/software/sse/manual/USPEX/10.3/py2_uspex
To use this virtual environment, you can set the following two environment variables, e.g.:
export PATH=/path/to/the/installation/USPEX_v10.3/py2_uspex/bin:$PATH
export PYTHONPATH=/path/to/the/installation/USPEX_v10.3/py2_uspex/lib/python2.7/site-packages:$PYTHONPATH
The paths need to be adjusted according to the installation. Confirm that the correct Python is called, e.g. by
which python
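If the environment is picked up correctly, the output should point into the copied virtual environment; assuming the installation location used above, something like:
/path/to/the/installation/USPEX_v10.3/py2_uspex/bin/python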
Make sure that no other Python modules or environments are active, since they may interfere with the USPEX runs. For instance, if modules are loaded, they can be cleared with the command module purge. Also check .bashrc.
Note that there are parts bundled with USPEX, namely pmpath.py (for which “pylada” needs to be installed), which use Python3. In that case you might need to set up a separate virtual environment; refer to the README. For tests with Python3, include the pysqlite3 package instead of pysqlite; a sketch follows below.
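A minimal sketch of how such a Python3 environment could be created, assuming a suitable Python3 module is available on the cluster (the module name below is hypothetical, adjust to what is installed):
module load Python/3.6.8-env-nsc1-gcc-2018a-eb   # hypothetical module name
virtualenv -p python3 py3_uspex
. py3_uspex/bin/activate
pip install pysqlite3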
The following steps show how the “py2_uspex” virtual environment was created. On Tetralith, a suitable Python2 module was loaded together with a build environment (the latter might not be needed):
module load Python/2.7.15-env-nsc1-gcc-2018a-eb
module load buildenv-gcc/2018a-eb
Thereafter, create a virtual environment called “py2_uspex”
virtualenv --system-site-packages py2_uspex
Here ‘numpy’ etc. are made available from the module. Activate the virtual environment and install the remaining needed packages:
. py2_uspex/bin/activate
pip install spglib==1.15.1
pip install pysqlite
pip install ase==3.17.0
This ASE version is the last one with Python2 support. You can check which packages are installed, e.g. with
pip list
To log out from the environment, type “deactivate”.
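As a quick sanity check, one can also verify that the key packages import from this Python, e.g.:
python -c "import numpy, scipy, spglib, ase, matplotlib; print('OK')"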
First, change permissions on the files:
chmod u+x install.sh USPEX_MATLABruntime.install
Install with
./install.sh
Select the non-graphical installation, accept the questions, and provide the full path to where you want the installation, e.g.
/path/to/the/installation/USPEX_v10.3/inst01
Note that USPEX will put some parameters needed for its run at the end of ~/.bashrc, so if you reinstall it, or install several versions, you might need to clean up .bashrc.
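To locate such leftover lines, one can for instance search for mentions of USPEX or the installation path:
grep -n -i uspex ~/.bashrc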
Assuming the above installation place, go to the Submission folder and create a backup of the two files:
cd inst01/application/archive/src/Submission
cp submitJob_local.py submitJob_local.py.orig
cp checkStatus_local.py checkStatus_local.py.orig
Now copy patched versions of the two files for use with the Slurm job scheduler on Tetralith/Sigma:
cp /software/sse/manual/USPEX/10.3/submitJob_local.py .
cp /software/sse/manual/USPEX/10.3/checkStatus_local.py .
Changes made in submitJob_local.py:
RUN_FILENAME = 'myrun'
JOB_NAME = 'USPEX-{}'.format(index)
# Step 1
myrun_content = '''#!/bin/bash
#SBATCH -A snic20XX-X-XX
#SBATCH -J {}
#SBATCH -n 4
#SBATCH -t 05:00:00
{}
'''.format(JOB_NAME, commnadExecutable)
with open(RUN_FILENAME, 'wb') as fp:
    fp.write(myrun_content)
# Step 2
# sbatch prints a confirmation line like 'Submitted batch job 2350873'
output = unicode(check_output('sbatch {}'.format(RUN_FILENAME), shell=True, universal_newlines=True))
# Step 3
# Here we parse job ID from the output of previous command
jobNumber = int(output.split(' ')[3])
print(str(jobNumber))
return jobNumber
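The parsing in Step 3 relies on the format of this confirmation line, where the job ID is the fourth whitespace-separated field. This can be checked from the shell, e.g. (output illustrative):
sbatch myrun
# Submitted batch job 2350873
sbatch myrun | awk '{print $4}'
# 2350873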
Here, change to your own project id in the line #SBATCH -A snic20XX-X-XX. Note that each job will run on 4 cores (-n 4) for a maximum walltime of 5 h (-t 05:00:00). Substitute with what works best for your study.
Changes made in checkStatus_local.py:
# Step 1
output = check_output('sacct -X -n --format=State -j {}'.format(jobID), shell=True, universal_newlines=True)
# process = subprocess.Popen(['qstat', str(jobID)], stdout=subprocess.PIPE)
# output, err = process.communicate()
# Step 2
doneOr = True
if 'RUNNING' in output or 'PENDING' in output or 'COMPLETING' in output or 'CONFIGURING' in output:
    doneOr = False
if doneOr:
    for file in glob.glob('USPEX*'):
        os.remove(file)  # to remove the log file
print(str(doneOr))
return doneOr
Note that for NSC, sacct is used rather than squeue for accessibility reasons: squeue gives an error when a finished job is no longer in the Slurm controller's memory (this might differ between Slurm setups).
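For example, querying a finished job should return its final state (job ID and output illustrative):
sacct -X -n --format=State -j 2350873
# COMPLETED
Any state other than RUNNING, PENDING, COMPLETING or CONFIGURING is then treated as done by the patched script.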
Here we will copy the EX01-3D_Si_vasp example to a new folder for the test:
cd /path/to/the/installation/USPEX_v10.3/inst01/application/archive/examples
cp -r EX01-3D_Si_vasp EX01-3D_Si_vasp_test1
cd EX01-3D_Si_vasp_test1
Due to our settings in submitJob_local.py, we need to take care of loading an appropriate VASP module and running VASP in the USPEX input file INPUT.TXT, e.g.:
% commandExecutable
ml VASP/5.4.4.16052018-nsc1-intel-2018a-eb; mpprun vasp_std > log
% EndExecutable
Note the NSC specific mpprun instead of mpirun. Alternatively, the module can be loaded in the job template in submitJob_local.py instead, as sketched below.
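With that alternative, the job script produced by the template could look something like this (project id and module version as used above), with the commandExecutable line in INPUT.TXT then reduced to just "mpprun vasp_std > log":
#!/bin/bash
#SBATCH -A snic20XX-X-XX
#SBATCH -J USPEX-1
#SBATCH -n 4
#SBATCH -t 05:00:00
ml VASP/5.4.4.16052018-nsc1-intel-2018a-eb
mpprun vasp_std > log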
To start the USPEX process in the background on the login node:
nohup ./EX01-3D_Si_vasp.sh &
In this way it will not stop when the terminal is closed. Take care to remove the process after USPEX is finished, e.g. look for it with “ps -ef | grep username” and stop it with “kill -9 processid”. Alternatively, you could start the process directly in a terminal within screen or tmux, and close it after it has finished.
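For example, using screen (the session name “uspex” is just an illustration):
screen -S uspex        # start a named session
./EX01-3D_Si_vasp.sh   # run USPEX in the foreground
# detach with Ctrl-a d; reattach later with:
screen -r uspex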
With the present Slurm configuration at NSC clusters, it is not recommended to send several hundred or thousands of short jobs to the queue. In that case, some other solution needs to be found; one possibility is to bunch many short runs into a single Slurm job, as sketched below.
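A minimal sketch of such bunching, simply running several prepared calculation directories one after another within a single allocation (the directory names are hypothetical, and this does not plug directly into the patched submitJob_local.py; it only illustrates the idea):
#!/bin/bash
#SBATCH -A snic20XX-X-XX
#SBATCH -n 4
#SBATCH -t 05:00:00
ml VASP/5.4.4.16052018-nsc1-intel-2018a-eb
for dir in run01 run02 run03; do
    cd "$dir"
    mpprun vasp_std > log
    cd ..
done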