These settings worked OK for the provided example EX01-3D_Si_vasp and are specific for use together with VASP. From 10.4, Python3 is used.
Read the documentation and README file. Check which version of USPEX you are going to install; other versions might need to be set up in a different way.
tar xf USPEX-10.5.tar.gz
cd USPEX_v10.5
“xf” is used instead of “xfz” since the file doesn’t seem to be compressed.
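If in doubt, the file type can be checked with the standard file command, e.g.:
file USPEX-10.5.tar.gz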
Use Python3. There is a list of required packages, see the README file, including:
numpy scipy spglib pysqlite3 ase matplotlib
To include these dependencies, one can create a Python virtual environment. This procedure is demonstrated below.
On Tetralith, load a suitable Python3 module together with the build environment and SQLite (the latter is possibly not necessary):
module load Python/3.6.7-env-nsc1-gcc-2018a-eb
module load buildenv-gcc/2018a-eb
module load SQLite/3.13.0-nsc1
The instructions mention that an installation of sqlite3 is needed, which is why it is also loaded here. To avoid including user-installed Python dependencies from .local (if you have any), set
export PYTHONNOUSERSITE=1
Also make sure that PYTHONPATH doesn’t include some other installation (check e.g. with “env | grep PYTHONPATH”). Create a virtual environment called e.g. “py3_uspex”:
virtualenv --system-site-packages py3_uspex
Here ‘numpy’ etc. are made available from the loaded module (via --system-site-packages). Activate the virtual environment and install the remaining needed packages:
. py3_uspex/bin/activate
pip install spglib
pip install pysqlite3
pip install ase
pip install torch
You can check what packages are installed, e.g. with:
pip list
To log out from the environment, type “deactivate”.
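As an optional sanity check, you can also try importing some of the required packages directly, e.g.:
python -c "import numpy, scipy, spglib, ase, matplotlib; print('imports OK')"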
To use this environment together with USPEX, the easiest way is to activate it when starting USPEX. Alternatively, you can set up the appropriate paths when you’re going to use it:
export PATH=/path/to/the/installation/USPEX_v10.5/py3_uspex/bin:$PATH
export PYTHONPATH=/path/to/the/installation/USPEX_v10.5/py3_uspex/lib/python3.6/site-packages:$PYTHONPATH
The paths need to be adjusted according to your installation. Confirm that the correct Python is called, e.g. by:
which python
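If the environment is active, the output should point into the virtual environment, e.g. (adjusted to your installation):
/path/to/the/installation/USPEX_v10.5/py3_uspex/bin/python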
Make sure that no other Python modules or environments are active in the terminal, since they may interfere with the USPEX runs. For instance, if modules are loaded, they can be cleared with the command “module purge”. Also check .bashrc.
Note: the use of pmpath.py (for which “pylada” needs to be installed) isn’t considered here.
First, change permissions on the files (if needed):
chmod u+x install.sh USPEX_MATLABruntime.install
Install with:
./install.sh
Select the non-graphical installation, option (2) with terminal, accept the questions, and give the full path to where you want the installation, e.g.
/path/to/the/installation/USPEX_v10.5/inst01
Note that USPEX will put some parameters needed for its run at the end of ~/.bashrc, so if you reinstall it or install several versions, you might need to clean up .bashrc. If you’re going to start it from the same terminal later on, you need to run:
source ~/.bashrc
Alternatively, put the commands in a separate file which you source before using USPEX.
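For example, such a file (here hypothetically called uspex_env.sh) could contain something like the following, adjusted to your own paths and module versions:
module purge
module load Python/3.6.7-env-nsc1-gcc-2018a-eb
export PYTHONNOUSERSITE=1
. /path/to/the/installation/USPEX_v10.5/py3_uspex/bin/activate
# followed by the lines that the USPEX installer appended to ~/.bashrc
# (copy them here verbatim from your ~/.bashrc)
Before starting USPEX you would then run “source uspex_env.sh” in the terminal.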
Assuming the above installation path, go to the Submission folder and create backups of the two files:
cd inst01/application/archive/src/Submission
cp submitJob_local.py submitJob_local.py.orig
cp checkStatus_local.py checkStatus_local.py.orig
Now copy patched versions of the two files for use with the Slurm job scheduler on Tetralith/Sigma:
cp /software/sse/manual/USPEX/10.5/submitJob_local.py .
cp /software/sse/manual/USPEX/10.5/checkStatus_local.py .
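If you want to see exactly what was changed compared to the originals, the files can be compared with e.g.:
diff submitJob_local.py.orig submitJob_local.py
diff checkStatus_local.py.orig checkStatus_local.py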
Some small changes made in submitJob_local.py:
# Step 1
myrun_content = '''#!/bin/bash
#SBATCH -A snic20XX-X-XX
#SBATCH -J {}
#SBATCH -n 4
#SBATCH -t 05:00:00
{}
'''.format(JOB_NAME, commnadExecutable)
Here, change to your own project id in the line “#SBATCH -A snic20XX-X-XX”. Note that each job will run on 4 cores (-n 4) for a maximum walltime of 5h (-t 05:00:00). Substitute with what works best for your study.
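For reference, with this template each job script that USPEX submits will look roughly like the following, where the placeholders in angle brackets are filled in automatically via the .format() call (the job name from JOB_NAME and the last line from the executable command):
#!/bin/bash
#SBATCH -A snic20XX-X-XX
#SBATCH -J <job name set by USPEX>
#SBATCH -n 4
#SBATCH -t 05:00:00
<executable command set by USPEX>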
Some small changes made in checkStatus_local.py (“check_output” in step 1 and “if RUNNING …” in step 2):
# Step 1
try:
    output = check_output('sacct -X -n --format=State -j {}'.format(jobID), shell=True, universal_newlines=True)
except:
    output = "JOBDONE"
# process = subprocess.Popen(['qstat', str(jobID)], stdout=subprocess.PIPE)
# output, err = process.communicate()
# Step 2
doneOr = True
if 'RUNNING' in output or 'PENDING' in output or 'COMPLETING' in output or 'CONFIGURING' in output:
    doneOr = False
if doneOr:
    for file in glob.glob('USPEX*'):
        os.remove(file)  # to remove the log file
print(str(doneOr))
return doneOr
Note that for NSC, sacct is used rather than squeue, due to accessibility reasons (squeue gives an error when a finished job is no longer in memory; this might differ for different setups of Slurm).
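For reference, the same kind of check can be done manually for a given job id (here 1234567 is just a made-up example); the command prints the job state, e.g. RUNNING, PENDING or COMPLETED:
sacct -X -n --format=State -j 1234567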
Before using USPEX, make sure to load the associated Python environment and source the USPEX parameters (if not already activated via .bashrc).
Here we will copy the EX01-3D_Si_vasp example to a new folder for the test:
cd /path/to/the/installation/USPEX_v10.5/inst01/application/archive/examples
tar xfz EX01-3D_Si_vasp.tgz
cp -r EX01-3D_Si_vasp EX01-3D_Si_vasp_test1
cd EX01-3D_Si_vasp_test1
Due to our settings in submitJob_local.py, we need to take care of loading an appropriate VASP module and running VASP in the USPEX input file INPUT.txt, e.g.
% commandExecutable
ml VASP/5.4.4.16052018-nsc1-intel-2018a-eb; mpprun vasp_std > log
% EndExecutable
Note the NSC-specific mpprun instead of mpirun. Alternatively, the module can be set in the job script template in submitJob_local.py instead.
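In that case the job script template in submitJob_local.py would contain an extra ml line, e.g. (a sketch; adjust project id, resources and module version to your needs):
#!/bin/bash
#SBATCH -A snic20XX-X-XX
#SBATCH -J {}
#SBATCH -n 4
#SBATCH -t 05:00:00
ml VASP/5.4.4.16052018-nsc1-intel-2018a-eb
{}
The commandExecutable block in INPUT.txt would then only need to contain “mpprun vasp_std > log”.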
To start the USPEX process in the background on the login node:
nohup ./EX01-3D_Si_vasp.sh &
In this way it will not stop when the terminal is closed. Take care to remove the process after USPEX is finished, e.g. look for it with “ps -ef | grep username” and close it with “kill -9 processid”. Alternatively, you could start the process directly in a terminal within screen or tmux, and close it after it has finished.
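As an example, a minimal screen-based alternative could look like this (the session name “uspex” is arbitrary; detach with Ctrl-a d and reattach later with “screen -r uspex”):
screen -S uspex
./EX01-3D_Si_vasp.sh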
With the present Slurm configuration at NSC clusters, it’s not recommended to send several hundred or thousands of short jobs to the queue. In such a case, some other solution needs to be found, e.g. by looking into the possibility of bunching many short runs into a single Slurm job.