Run cryoSPARC on Berzelius

A CryoSPARC workshop arranged in April 2024 resulted in a Getting started with CryoSPARC on Berzelius tutorial.

We need to set up a file called .cryosparc-license with the license information:

[torbenr@berzelius001 ~]$ cat << EOF > .cryosparc-license
> cryoSPARC-license-key
> your-email-address
> EOF
**NOTE1:** A cryosparc user will be created for you automatically when you start cryoSPARC. This user is personal, and you should not create any other users. Doing so could break the Terms of Service for the cluster.
**NOTE2:** Make sure to replace "cryoSPARC-license-key" with your actual cryoSPARC license key, and "your-email-address" with your actual email address!
**NOTE3:** You only need to set up this file the first time you run cryoSPARC!
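Since the file contains your personal license key, it may be worth tightening its permissions. This is an optional hardening sketch, not something the cluster requires:

```shell
# Optional hardening (an assumption, not required by this guide): make the
# license file readable and writable only by you.
touch ~/.cryosparc-license          # no-op if the file already exists
chmod 600 ~/.cryosparc-license
ls -l ~/.cryosparc-license          # permissions should now show -rw-------
```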

As of cryoSPARC 4.2.1, cryoSPARC is run on a login node (berzelius1 or berzelius2), from which jobs are scheduled to run on compute nodes. A few adaptations have been made to get cryoSPARC to run well on Berzelius; these are described below. For general information about cryoSPARC, see the official documentation at https://guide.cryosparc.com/

Go to the directory where you have (or want to have) your cryoSPARC jobs:

[karho@berzelius002 ~]$ cd /proj/nsc/users/karho/cryosparc_datadir

You can check for available cryoSPARC installations with module avail:

[karho@berzelius002 cryosparc_datadir]$ module avail cryosparc

Or you can load the current default cryoSPARC module:

[karho@berzelius002 cryosparc_datadir]$ module load cryosparc

From here we will then start cryosparc in our folder:

[karho@berzelius002 cryosparc_datadir]$ cryosparc
...
app: started
app_api: started
-----------------------------------------------------
CryoSPARC master started.
From this machine, access cryoSPARC and cryoSPARC Live at
http://localhost:39049

Then you can connect to the web interface with a web browser using the URL printed above. Login using the email and license string from the .cryosparc-license file.

**NOTE:** The port number will *not* always be 39000 (the default) or 39049. A port can already be in use by another application, such as another instance of cryoSPARC, in which case your instance will get a different port number!
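If you want just the port number from the startup message (for example to reuse it in an SSH tunnel later), a small shell sketch; the URL below stands in for whatever your own instance printed:

```shell
# Sketch: extract the port from the URL cryoSPARC printed at startup.
# The URL value here is an example — substitute the one from your own output.
URL="http://localhost:39049"
PORT="${URL##*:}"    # strip everything up to the last ':'
echo "$PORT"         # prints 39049 for this example URL
```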

(Screenshots: the cryoSPARC login page, the cryoSPARC web GUI, and the submit variables dialog.)

cryoSPARC variables

As of cryoSPARC 4.6.2 we’ve added variables to our cryoSPARC lanes. You can use these variables when queueing a job to change the default values. The variables will not be visible unless you have the updated files from the cryoSPARC 4.6.2 installation. If they are missing, make sure you are running cryoSPARC 4.6.2 and follow the procedure described below on how to update lanes.

lanes

CryoSPARC on Berzelius has four preconfigured lanes: mig, thin, safe, and fat. Which lane is best to use depends on your job requirements. Our recommendation is to start with mig, unless you know that the job requirements are higher than what is available in the mig reservation.

If cryoSPARC is started from a completely new folder, these lanes are added automatically and you should be able to queue jobs to them. If cryoSPARC is started from an older directory, this does not happen automatically; in that case, update the lanes manually:

module load cryosparc
cd /path/to/your/cryosparc/folder
cryosparc start
cryosparc copylanes

installed plugins

Topaz and 3DFlex are installed.

particle caching

CryoSPARC can cache particles to a given folder on the compute nodes, which improves performance. By default 2 threads are used for caching, which can be insufficient for larger datasets. The number of threads can be increased by setting a sensible value in worker_config.sh:

export CRYOSPARC_CACHE_NUM_THREADS=16
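If you prefer to script this, a sketch that appends the setting to worker_config.sh only once; the CONFIG path here is an assumption, so point it at the worker_config.sh of your own cryoSPARC folder:

```shell
# Sketch: add the cache-thread setting to worker_config.sh idempotently.
# CONFIG defaults to the current directory — an assumption; adjust the path
# to the worker_config.sh of your own cryoSPARC installation.
CONFIG="${CONFIG:-./worker_config.sh}"
touch "$CONFIG"
if ! grep -q '^export CRYOSPARC_CACHE_NUM_THREADS=' "$CONFIG"; then
    echo 'export CRYOSPARC_CACHE_NUM_THREADS=16' >> "$CONFIG"
fi
grep CRYOSPARC_CACHE_NUM_THREADS "$CONFIG"
```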

access the web-GUI from a local computer

karho@laptop:~$ ssh -N -L 39049:localhost:39049 karho@berzelius1.nsc.liu.se
Then point your local browser to localhost:39049 and log in to cryoSPARC there.
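If you tunnel often, an entry in ~/.ssh/config on your local machine saves typing. This is a sketch with assumed values: the berzelius-cryosparc alias is hypothetical, and you should replace the username and the port with your own:

```
# Hypothetical alias — adjust User and the ports to match your instance.
Host berzelius-cryosparc
    HostName berzelius1.nsc.liu.se
    User karho
    LocalForward 39049 localhost:39049
```

After that, ssh -N berzelius-cryosparc starts the tunnel.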

Known problems

The web GUI might report that the GPU is not being used even when it is. This can be safely ignored unless you have other indications that it is true. It is not possible to run cryoSPARC interactively on a compute node.

