Here we list issues and features of the MX software available in the PReSTO installations at NSC Tetralith and LUNARC Aurora that users of these installations should be aware of.
To evaluate the outcome of a BUSTER refinement run, a user might run buster-report as _buster-report -d run1 -dreport run1_report_, where run1 is the directory of the first BUSTER run and run1_report is the directory of the report generated by buster-report. On Tetralith the index.html output from buster-report looks ugly in the available Konqueror browser, while at Aurora index.html looks fine in the available Firefox browser. To compensate for the ugly appearance of index.html in the Konqueror browser at Tetralith, we install LaTeX so that buster-report also generates a report.pdf file that looks fine on Tetralith. At Aurora we did not install LaTeX since index.html looks fine in Firefox. For more buster-report options, read the buster-report help text in the terminal window:
module load BUSTER
buster-report -h (prints the help text in the terminal window)
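A complete refinement-plus-report cycle might be wrapped in a batch script along these lines. This is only a sketch: the project name, time limit, and the file names model.pdb and data.mtz are placeholders to adapt to your own job.

```shell
#!/bin/bash
#SBATCH -N 1                  # BUSTER refinement runs on a single node
#SBATCH -t 02:00:00           # placeholder time limit
#SBATCH -A your_project       # placeholder project/allocation name

module load BUSTER
# refine writes its output into the directory given with -d
refine -p model.pdb -m data.mtz -d run1
# buster-report turns the run1 output into an HTML (and, with LaTeX, PDF) report
buster-report -d run1 -dreport run1_report
```

After the job finishes, open run1_report/index.html (Firefox at Aurora) or run1_report/report.pdf (Tetralith).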
Queued Phenix jobs can use a single compute node only, due to limitations in the code that would need to be redesigned for Phenix to run on several nodes. A user can allocate more than one compute node, but only a single node will be used for computing and the other nodes will simply waste compute time. Eventually such an improper multi-node Phenix job will crash; sometimes, however, the job finishes and wastes significant amounts of compute time that may go unnoticed by newcomers running Phenix. Jobs can be submitted to the compute nodes directly from the Phenix GUI running on the login node, but keep in mind to use only a single node, with 19 cores at Aurora or 31 cores at Tetralith, and adapt your allocation times accordingly.
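A small guard in a job script can catch the multi-node mistake early. This is a sketch; it only relies on the standard SLURM_JOB_NUM_NODES variable that SLURM sets inside an allocation (the default of 1 is just for running the check outside a job):

```shell
# Warn if more than one node was allocated: Phenix will use only one of them.
nodes=${SLURM_JOB_NUM_NODES:-1}
if [ "$nodes" -gt 1 ]; then
    echo "WARNING: Phenix will use only 1 of $nodes allocated nodes"
fi
echo "running Phenix on a single node"
```

Placing such a check at the top of a Phenix sbatch script makes the wasted-node situation visible in the job log instead of silently burning core hours.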
When logged in to the BioMAX offline cluster, the Log output window is blank during the Place model stage, although the job is in fact running (this becomes easier to see once the rosetta rebuild stage is reached). The blank Log output window appears because the logfile has not yet been created when the GUI tries to read it. This happens on the BioMAX offline cluster, which runs an NFS filesystem in /home/DUOusername.
By running squeue -u DUOusername one can see that the job exists, and after ssh offline-cn1, top -u DUOusername shows that a process is running.
Here the job has reached the rosetta rebuild stage, and many parallel processes are visible in the top terminal window.
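The "is my job actually running?" check above can be scripted. A minimal sketch, assuming only the standard SLURM squeue command (replace the username with your own DUO account):

```shell
# Report whether any SLURM jobs exist for a given user, even when the
# GUI Log output window is still blank.
check_job() {
    local user="$1"
    local njobs
    # --noheader prints one line per job and nothing else
    njobs=$(squeue -u "$user" --noheader 2>/dev/null | wc -l | tr -d ' ')
    if [ "$njobs" -gt 0 ]; then
        echo "$njobs job(s) found for $user"
    else
        echo "no jobs found for $user"
    fi
}
```

Once check_job confirms the job exists, ssh offline-cn1 followed by top -u DUOusername shows the running processes.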
The Eiger-compatible pre-release of XDSAPP, which we call version 2.99.1, is missing a few parameters compared to the latest official XDSAPP release, version 2.0. For instance, the _--ccstar (CC*)_ parameter, used for data truncation according to Karplus and Diederichs 2012, is not active in the command-line version of XDSAPP used in PReSTO today (November 2017). _CC*_ will almost certainly be available in the next official XDSAPP release, but for now --ccstar generates an error message when present in XDSAPP sbatch scripts.
When saving movies in PyMOL 2.1.0 there are three encoders available, named mpeg_encode, convert and ffmpeg.
- mpeg_encode and convert work in the PReSTO installation: with mpeg_encode one can save MPEG1 movies for PowerPoint, and with convert one can save animated GIFs for web browsers like Firefox.
- The ffmpeg option does NOT work in the PReSTO installation (yet): we are missing the ffmpeg component required for the "MPEG 4" and Quicktime alternatives. Right now the names used in PyMOL are misleading, since both alternatives try to create a video in the "H.264/MPEG-4 AVC" format. Meanwhile there are movie converters available online, for instance for the Quicktime format.
A good thing with PyMOL 2.1.0 is that ray tracing is not required: simple movies look fine without it!
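A minimal camera-rotation movie can be set up in PyMOL's own command language before choosing an encoder from the movie-export dialog. This is a sketch using standard PyMOL commands; the structure name and frame count are examples only:

```
load mymodel.pdb          # example structure; use your own file
mset 1 x60                # a 60-frame movie, every frame showing state 1
mview store, 1            # remember the camera at frame 1
turn y, 120               # rotate the camera 120 degrees about y
mview store, 60           # remember the camera at frame 60
mview reinterpolate       # smooth camera motion between the stored frames
```

Exporting this movie with the mpeg_encode encoder gives an MPEG1 file for PowerPoint, while the convert encoder gives an animated GIF, as described above.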