Dalton/LSDalton Installations on Tetralith & Sigma

The Dalton suite consists of two separate executables, Dalton and LSDalton. The Dalton code is a powerful tool for computing a wide range of molecular properties at different levels of theory, whereas LSDalton is a linear-scaling HF and DFT code suited to large molecular systems, now also with some CCSD capabilities.

Official homepage: www.daltonprogram.org

How to run

Dalton:

ml Dalton/2016.2-nsc1-intel-2018a-eb 

LSDalton:

ml LSDalton/1.0-nsc1-intel-2018a-eb

On Tetralith, most compute nodes have 96 GB of memory, so if you use all 32 cores of a node, you can allocate up to roughly 3000 MB per MPI rank.
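The ~3000 MB figure is just the node memory divided over the cores; a quick sanity check (treating 96 GB as 96000 MB, as in the text above):

```shell
# Per-rank memory when one MPI rank runs on each of the 32 cores
node_mem_mb=96000      # ~96 GB per Tetralith compute node
cores_per_node=32
echo $((node_mem_mb / cores_per_node))   # 3000 MB per rank
```

If you need more memory per rank, place fewer ranks per node (as in example script 2 below) and the same division gives a correspondingly larger share.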

Example script 1: two nodes, 64 MPI ranks, ~3 GB per rank

#!/bin/bash
#SBATCH -J jobname
#SBATCH -A SNIC-xxx-yyy
#SBATCH -n 64    
#SBATCH -N 2
#SBATCH -t 00:30:00
  
export DALTON_TMPDIR=/scratch/local

dalton  -noappend -D -N  $SLURM_NTASKS dalinp{.dal} [molinp{.mol} [potinp{.pot}] [pcmsolver{.pcm}]]  

Example script 2: two nodes, 16 MPI ranks, ~12 GB per rank

#!/bin/bash
#SBATCH -J jobname
#SBATCH -A SNIC-xxx-yyy
#SBATCH -N 2
#SBATCH --ntasks-per-node=8
#SBATCH -t 00:30:00
  
export DALTON_TMPDIR=/scratch/local

dalton -noappend -D -N $((SLURM_JOB_NUM_NODES*SLURM_NTASKS_PER_NODE)) dalinp{.dal} [molinp{.mol} [potinp{.pot}] [pcmsolver{.pcm}]]  
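The rank count passed to "-N" is computed from the SLURM environment; for the allocation above (2 nodes, 8 tasks per node) the arithmetic works out as:

```shell
# Mirror of the arithmetic in the dalton line above, with the values
# set by this job's #SBATCH directives:
SLURM_JOB_NUM_NODES=2        # from "#SBATCH -N 2"
SLURM_NTASKS_PER_NODE=8      # from "#SBATCH --ntasks-per-node=8"
echo $((SLURM_JOB_NUM_NODES * SLURM_NTASKS_PER_NODE))   # 16 MPI ranks
```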

Note:

  1. Change SNIC-xxx-yyy to your account.
  2. Remember to add the "-noappend" option. Without it, Dalton appends a job-specific subdirectory name to DALTON_TMPDIR, and the resulting folder(s) do not exist on the node-local scratch disk.
