This page lists issues and features of the MX software in the PReSTO installations at NSC Tetralith and LUNARC Aurora that users of these systems should be aware of.
Sometimes DIALS terminates with an out-of-memory error such as:
Processing sweep SWEEP1 failed: dials.integrate subprocess failed with exitcode 1:
see /native/1600/dials/DEFAULT/NATIVE/SWEEP1/integrate/12_dials.integrate_INTEGRATE.log for more details
Error: no Integrater implementations assigned for scaling
Please send the contents of xia2.txt, xia2-error.txt and xia2-debug.txt to:
firstname.lastname@example.org
slurmstepd: error: Detected 2 oom-kill event(s) in step 37678.batch cgroup.
Some of your processes may have been killed by the cgroup out-of-memory handler.
The DIALS authors explain this issue/bug in detail, and the current workaround is to use only half of the available cores on the compute nodes, i.e.
For instance, when using the BioMAX online cluster with native.script, change two lines of code into:
multiprocessing.nproc=24
sbatch -N1 --exclusive --ntasks-per-node=24 -J DIALS -o "$outdir/dials.out" --wrap="$dials"
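As an illustration only (not taken from any PReSTO script), the same halving applied to a Tetralith node with 32 cores could look like the sketch below; the output path and job name are placeholders:

```shell
# Hypothetical Tetralith variant of the half-the-cores workaround:
# use 16 of the 32 cores per node so dials.integrate stays within
# the cgroup memory limit. First line is the xia2 parameter, second
# line the SLURM submission.
multiprocessing.nproc=16
sbatch -N1 --exclusive --ntasks-per-node=16 -J DIALS -o "$outdir/dials.out" --wrap="$dials"
```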
Queued phenix jobs can use only a single compute node, due to limitations in the code; phenix would need to be redesigned to run on several nodes. A user can allocate more than one compute node, but only a single node will actually be used for computing and the other nodes will simply waste compute time. Eventually such an improper multi-node phenix job will crash; sometimes, however, the job finishes and wastes significant amounts of compute time, which may go unnoticed by newcomers running phenix. Jobs can be submitted to the compute nodes directly from the phenix GUI running on the login node, but keep in mind to use only a single node, with 19 cores at Aurora or 31 cores at Tetralith, and adapt your allocation times accordingly.
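A minimal sketch of a correct single-node allocation for a phenix job on Tetralith might look as follows; the account, time limit, and script name are placeholders you must replace with your own values:

```shell
# Request exactly ONE node (phenix cannot use more) with 31 cores,
# as described above. -A (account), -t (time) and job.sh are placeholders.
sbatch -N 1 --ntasks-per-node=31 -A snic-xxxx-y-zz -t 12:00:00 -J phenix job.sh
```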
When logged in to the BioMAX offline cluster, the Log output window is blank during Place model; however, the job is in fact running, as is more easily seen once it reaches the rosetta rebuild stage.
During Place model a blank Log output window is shown because the logfile has not yet been created when the GUI reads it. This happens on the BioMAX offline cluster, which runs an NFS filesystem in /home/DUOusername.
By using squeue -u DUOusername one can see that the job exists, and by ssh offline-cn1 followed by top -u DUOusername that a process is running.
Here the job has reached the rosetta rebuild stage, and many parallel processes are running in the top terminal window.
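The checks above can be sketched as a short command sequence; DUOusername is a placeholder for your own DUO user name:

```shell
# Is the job queued or running on the offline cluster?
squeue -u DUOusername
# Log in to the compute node and confirm that processes are alive
ssh offline-cn1
top -u DUOusername
```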
The Eiger-compatible pre-release of XDSAPP, which we call version 2.99.1, is missing a few parameters compared to the latest official XDSAPP release, version 2.0. For instance, the --ccstar (CC*) parameter, used for data truncation according to Karplus and Diederichs 2012, is not active in the command-line version of XDSAPP used in PReSTO today (November 2017). CC* will almost certainly be available in the next official XDSAPP release; for now, however, --ccstar generates an error message when present in XDSAPP sbatch scripts.
When saving movies in PyMOL 2.1.0 there are three encoders available, named mpeg_encode, convert and ffmpeg.
mpeg_encode and convert work in the PReSTO installation:
- With mpeg_encode one can save MPEG-1 movies for PowerPoint
- With convert one can save animated GIFs for web browsers like Firefox

The ffmpeg option does NOT work in the PReSTO installation (yet). We are missing the ffmpeg component required for the "MPEG 4" and QuickTime alternatives. Right now the names used in PyMOL are misleading: both alternatives try to create a video in the "H.264/MPEG-4 AVC" format;
meanwhile there are online movie converters, for instance to QuickTime format. A good thing about PyMOL 2.1.0 is that ray tracing is not required for simple movies, which look fine without it!
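As a sketch of what the working convert path does, ImageMagick can assemble exported PNG frames into an animated GIF; the frame names, delay, and output name below are illustrative, not taken from PyMOL itself:

```shell
# Assemble PNG frames exported from PyMOL into an animated GIF with
# ImageMagick; frame*.png and the 4/100 s inter-frame delay are
# illustrative values. -loop 0 makes the GIF repeat forever.
convert -delay 4 -loop 0 frame*.png movie.gif
```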