BioMAX scripts at online and offline cluster

BioMAX offers two login options via ThinLinc:

  • online cluster clu0-fe-1.maxiv.lu.se with 24 cores per compute node, hyperthreaded to handle 48 tasks per node
  • offline cluster offline-fe1.maxiv.lu.se with 20 cores per compute node, hyperthreaded to handle 40 tasks per node

Please find Data handling and Processing at BioMAX and some local PReSTO documentation under Manual data reduction. BioMAX is equipped with an Eiger 16M detector, and GlobalPhasing maintains a useful list of beamline-specific settings around the world; here, however, we discuss settings and scripts suitable for the BioMAX online and offline clusters.

Evaluating diffraction on Eiger detectors

ALBULA and ADXV are programs used to monitor crystal X-ray diffraction during screening and first characterization. ALBULA, developed by the Eiger detector manufacturer, has no issues reading the HDF5 format. For ADXV, we developed an adxv_eiger.sh script that reads all metadata from the master file when using ADXV:

module load adxv
adxv_eiger.sh protein_x001_1_master.h5

The adxv_eiger.sh script must be executable, e.g. via chmod 755 adxv_eiger.sh.

XDSAPP3, autoPROC and DIALS at MAX IV offline cluster

The native.script and anomalous.script below are intended for native and anomalous data processing with XDSAPP3, autoPROC and xia2/DIALS at the MAX IV offline cluster. XDSAPP3 runs on two compute nodes, while xia2/DIALS and autoPROC each use a single compute node. In contrast to XDSAPP3, which benefits from several nodes, xia2/DIALS and autoPROC contain many serial subroutines, so using more than a single node does not save much wall-clock time. To run native.script, copy-paste it and make it executable with chmod 755 native.script, create the output directory, and execute ./native.script /path/file_master.h5 /path/output-directory 1 3600. Please note that the output directory must exist, otherwise the script does not run.
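The requirement that the output directory exist comes from realpath -es, which native.script uses to resolve its arguments and which fails for non-existent paths. A minimal sketch (demo_out is a hypothetical stand-in for /path/output-directory):

```shell
# realpath -es fails for paths that do not exist, which is why the
# output directory must be created before native.script is run
mkdir -p demo_out                                  # stand-in output directory
realpath -es demo_out > /dev/null && echo "output directory OK"
rmdir demo_out
```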

A few parameters in native.script/anomalous.script are BioMAX-specific, such as goniometer.axes=0,1,0. Since every compute node at the MAX IV offline cluster has 20 cores, hyperthreaded to handle 40 tasks (check with sinfo -N -o "%N %c"), JOBS x PROCESSORS should equal 40 for autoPROC using a single compute node and 80 for XDSAPP3 using two compute nodes.
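The arithmetic can be checked directly in the shell: autoPROC's 4 JOBS x 10 PROCESSORS fill one 40-task node, and XDSAPP3's -j 8 -c 10 fill 80 tasks across its two nodes.

```shell
# JOBS x PROCESSORS sanity check for the offline cluster
echo "autoPROC: $((4 * 10)) tasks"   # one node, 40 hyperthreaded tasks
echo "XDSAPP3: $((8 * 10)) tasks"    # two nodes, 40 tasks each
```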

The native.script and anomalous.script below are very similar; in fact, there are only three changes:

  • exchange --fried=true for --fried=false in xdsapp
  • exchange -noANO for -ANO in autoproc
  • add a new row atom=X for xia2/dials

in the native script below, and voilà, you have a script for anomalous data processing.
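The three changes can also be applied mechanically with GNU sed. The sketch below demonstrates them on a three-line stand-in for native.script (the file names native_demo.txt and anomalous_demo.txt are hypothetical):

```shell
# three lines standing in for the relevant parts of native.script
cat > native_demo.txt <<'EOF'
xdsapp3 --cmd --fried=true
process -h5 indir -noANO
pipeline=dials failover=true
EOF

# apply the three changes listed above (GNU sed syntax)
sed -e 's/--fried=true/--fried=false/' \
    -e 's/-noANO/-ANO/' \
    -e '/^pipeline=dials/i atom=X' \
    native_demo.txt > anomalous_demo.txt
cat anomalous_demo.txt
```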

native.script for offline-fe1 cluster handling 40 tasks per compute node

#!/bin/sh -eu

#
# Arguments:
# $1: input master file
# $2: output directory (must exist)
# $3: first image of the range
# $4: last image of the range
#

indir=`realpath -es "$1"`
outdir=`realpath -es "$2"`

xdsapp="\
module load XDSAPP3
xdsapp3 --cmd --dir=$outdir/xdsapp -i $indir --fried=true -j 8 -c 10 --range=$3\ $4 
"

autoproc="\
module load autoPROC
process -h5 $indir \
    -noANO \
    autoPROC_XdsKeyword_LIB=\$EBROOTDURIN/lib/durin-plugin.so \
    autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_JOBS=4 \
    autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_PROCESSORS=10 \
    autoPROC_XdsKeyword_DATA_RANGE=$3\ $4 \
    autoPROC_XdsKeyword_SPOT_RANGE=$3\ $4 \
    -d $outdir/autoproc 
"

dials="\
module load DIALS
cd $outdir/dials
xia2 \
    pipeline=dials failover=true \
    image=$indir:$3:$4 \
    multiprocessing.mode=serial \
    multiprocessing.njob=1 \
    multiprocessing.nproc=auto
"

# autoproc bails out if its outdir basename exists; don't make it
mkdir "$outdir/xdsapp" "$outdir/dials"

#echo "$xdsapp"
#echo "$autoproc"
#echo "$dials"

sbatch -N2 --exclusive --ntasks-per-node=40 -J XDSAPP -o "$outdir/xdsapp.out" --wrap="$xdsapp"
sbatch -N1 --exclusive --ntasks-per-node=40 -J autoPROC -o "$outdir/autoproc.out" --wrap="$autoproc"
sbatch -N1 --exclusive --ntasks-per-node=40 -J DIALS -o "$outdir/dials.out" --wrap="$dials"

anomalous.script for offline-fe1 cluster handling 40 tasks per compute node

#!/bin/sh -eu

#
# Arguments:
# $1: input master file
# $2: output directory (must exist)
# $3: first image of the range
# $4: last image of the range
#

indir=`realpath -es "$1"`
outdir=`realpath -es "$2"`

xdsapp="\
module load XDSAPP3
xdsapp3 --cmd --dir=$outdir/xdsapp -i $indir --fried=false -j 8 -c 10 --range=$3\ $4 
"

autoproc="\
module load autoPROC
process -h5 $indir \
    -ANO \
    autoPROC_XdsKeyword_LIB=\$EBROOTDURIN/lib/durin-plugin.so \
    autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_JOBS=4 \
    autoPROC_XdsKeyword_MAXIMUM_NUMBER_OF_PROCESSORS=10 \
    autoPROC_XdsKeyword_DATA_RANGE=$3\ $4 \
    autoPROC_XdsKeyword_SPOT_RANGE=$3\ $4 \
    -d $outdir/autoproc 
"

dials="\
module load DIALS
cd $outdir/dials
xia2 \
    atom=X \
    pipeline=dials failover=true \
    image=$indir:$3:$4 \
    multiprocessing.mode=serial \
    multiprocessing.njob=1 \
    multiprocessing.nproc=auto
"

# autoproc bails out if its outdir basename exists; don't make it
mkdir "$outdir/xdsapp" "$outdir/dials"

#echo "$xdsapp"
#echo "$autoproc"
#echo "$dials"

sbatch -N2 --exclusive --ntasks-per-node=40 -J XDSAPP -o "$outdir/xdsapp.out" --wrap="$xdsapp"
sbatch -N1 --exclusive --ntasks-per-node=40 -J autoPROC -o "$outdir/autoproc.out" --wrap="$autoproc"
sbatch -N1 --exclusive --ntasks-per-node=40 -J DIALS -o "$outdir/dials.out" --wrap="$dials"

XDS and its derivatives

ROTATION_AXIS parameter

The crystal rotation axis is a key parameter for XDS and the programs built on it: XDSAPP, XDSGUI and autoPROC.

Software   Keyword                                     Input file or script
XDS        ROTATION_AXIS= 0 1 0                        XDS.INP
XDSGUI     ROTATION_AXIS= 0 1 0                        XDS.INP
XDSAPP     --cmd --raxis="0 1 0"                       xdsapp.script
autoPROC   autoPROC_XdsKeyword_ROTATION_AXIS="0 1 0"   process.script

Table 1. Rotation axis keywords for various MX data processing programs.

Lines for XDS.INP in XDSGUI

To run with HDF5 containers, XDSGUI requires the following steps:

  1. Launch XDSGUI from the PReSTO Menu
  2. Use the slider to select 24 cores at the online cluster or 20 cores at the offline cluster
  3. Select "Folder with XDS configuration and output file" in XDSGUI
  4. Load the master container and press the "generate_XDS.INP" button
  5. Add these three lines to XDS.INP

     **online cluster clu0-fe-1.maxiv.lu.se**
     MAXIMUM_NUMBER_OF_JOBS= 4
     MAXIMUM_NUMBER_OF_PROCESSORS= 6
     LIB=/sw/pkg/presto/e/9.0/software/Durin/2019v1-foss-2021a/lib/durin-plugin.so
        
     **offline cluster offline-fe1.maxiv.lu.se**
     MAXIMUM_NUMBER_OF_JOBS= 4
     MAXIMUM_NUMBER_OF_PROCESSORS= 5
     LIB=/sw/pkg/presto/e/9.0/software/Durin/2019v1-foss-2021a/lib/durin-plugin.so
    
  6. Now press "Run XDS" and off you go
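Step 5 can also be done from a terminal instead of the XDSGUI editor; the sketch below appends the offline-cluster lines to an XDS.INP in the current directory (the Durin plugin path is the one quoted above).

```shell
# append the three offline-cluster lines from step 5 to XDS.INP
cat >> XDS.INP <<'EOF'
MAXIMUM_NUMBER_OF_JOBS= 4
MAXIMUM_NUMBER_OF_PROCESSORS= 5
LIB=/sw/pkg/presto/e/9.0/software/Durin/2019v1-foss-2021a/lib/durin-plugin.so
EOF
grep -c '^MAXIMUM' XDS.INP   # both MAXIMUM_* keywords are now present
```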

More on DIALS at BioMAX

DIALS is software under rapid development for integration of MX diffraction data. DIALS is most often used via the xia2 -dials option, but it may also be run in a step-wise manner following this tutorial. When using dials.image_viewer or dials.reciprocal_lattice_viewer, vglrun should be used for the best experience at the BioMAX cluster:

  • vglrun dials.image_viewer datablock.json strong.pickle
  • vglrun dials.reciprocal_lattice_viewer experiments.json indexed.pickle

Simplified script for xia2 -dials only with BioMAX data

#!/bin/sh -eu
#SBATCH -t 1:00:00
#SBATCH -N 1 --exclusive
module load DIALS
xia2 \
pipeline=dials failover=true \
image=/home/marmoc2/thau/thau1-natA10_2_master.h5 \
multiprocessing.mode=serial \
multiprocessing.njob=1 \
multiprocessing.nproc=auto

Simplified script for xia2 -3dii (i.e. XDS) only with BioMAX data

#!/bin/sh -eu
#SBATCH -t 1:00:00
#SBATCH -N 1 --exclusive
module load DIALS
xia2 \
pipeline=3dii failover=true \
image=/home/marmoc2/thau/thau1-natA10_2_master.h5 \
multiprocessing.mode=serial \
multiprocessing.njob=1 \
multiprocessing.nproc=auto

LUNARC Cosmos data processing

Data from Swedish academics will be automatically transferred to LUNARC Cosmos and made available in /projects/maxiv/visitors/biomax/proposalNr

If you cannot access this folder at Cosmos, please use the NAISS support form or email support@lunarc.lu.se

PReSTO access at LUNARC Cosmos

Use the "id" command at LUNARC to check whether you belong to the presto group.

For instance:

$ id
uid=16804(mochma) gid=16800(liu-tora) groups=16800(liu-tora),24500(presto),30400(liu-mamo),350003(x20180077),350212(max4xp_350212),351408(max4xp_20220149),20193561(sto20193561),796200020(prestoadm)

If you do not have access to presto, review the MX licenses as described under access presto and email support@lunarc.lu.se
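The group check can also be scripted rather than read by eye from the id output; a small sketch:

```shell
# print a one-line verdict on presto group membership
if id -nG | grep -qw presto; then
    echo "presto access OK"
else
    echo "no presto access"
fi
```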

