A CryoSPARC workshop arranged in April 2024 resulted in a getting started with CryoSPARC on Berzelius tutorial.
We need to set up a file called .cryosparc-license with the license information:
[torbenr@berzelius001 ~]$ cat << EOF > .cryosparc-license
> cryoSPARC-license-key
> your-email-address
> EOF
As of CryoSPARC 4.2.1, CryoSPARC is run on a login node (berzelius1 or berzelius2), from where jobs are scheduled to run on compute nodes. A few adaptations have been made to get CryoSPARC to run well on Berzelius, described below. For general information about CryoSPARC, see the official documentation at https://guide.cryosparc.com/
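If you are not already on a login node, connect with ssh first (the hostname below is the same one used in the tunnelling example further down; replace the username with your own):
ssh karho@berzelius1.nsc.liu.se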
Go to the directory where you have (or want to have) your cryoSPARC jobs:
[karho@berzelius002 ~]$ cd /proj/nsc/users/karho/cryosparc_datadir
You can check for available CryoSPARC installations with module avail:
[karho@berzelius002 cryosparc_datadir]$ module avail cryosparc
Or you can load the current default CryoSPARC module with:
[karho@berzelius002 cryosparc_datadir]$ module load cryosparc
From here we start CryoSPARC in our folder:
[karho@berzelius002 cryosparc_datadir]$ cryosparc
...
app: started
app_api: started
-----------------------------------------------------
CryoSPARC master started.
From this machine, access cryoSPARC and cryoSPARC Live at
http://localhost:39049
Then you can connect to the web interface with a web browser using the URL printed above.
Log in using the email address and license key from the .cryosparc-license file.
CryoSPARC on Berzelius has three preconfigured lanes as of CryoSPARC 4.4.1: fat, thin, and mig. Which lane you should use depends on the job; our recommendation is to use the mig lane unless you are certain your job needs more resources and would go over the GPU power limit.
If CryoSPARC is started from a completely new folder, these lanes are added automatically and you should be able to queue jobs to them. If CryoSPARC is started from an older directory, this does not happen automatically. You can run the commands below for each lane from your CryoSPARC folder to add them:
module load cryosparc
cd /path/to/your/cryosparc/folder
cryosparc copylanes
cryosparc start
cd lane_mig
cryosparc cluster connect
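As a minimal sketch of how the per-lane steps can be combined, assuming the preconfigured lane folders are named lane_fat, lane_thin and lane_mig (only lane_mig is shown above; check the actual folder names that copylanes creates):
module load cryosparc
cd /path/to/your/cryosparc/folder
cryosparc copylanes
cryosparc start
for lane in lane_fat lane_thin lane_mig; do
    (cd "$lane" && cryosparc cluster connect)
done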
Advanced users might want to create their own lane with a different configuration: create a new folder lane_something, ideally copy the files from another lane, edit them as you please, and then add the lane as instructed above.
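A hedged sketch of what this could look like; the file names cluster_info.json and cluster_script.sh are the standard CryoSPARC cluster-integration files and may differ in the Berzelius lane folders, so check what the copied lane actually contains:
cd /path/to/your/cryosparc/folder
cp -r lane_mig lane_something
cd lane_something
# Edit cluster_info.json (give the lane a new name) and cluster_script.sh
# (adjust the requested Slurm resources), then register the lane:
cryosparc cluster connect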
Topaz and 3DFlex are installed.
CryoSPARC can cache particles to a given folder on the compute nodes, which improves performance. By default, 2 threads are used for caching, which can be insufficient for larger datasets. The number of threads can be increased by setting a sensible value in worker_config.sh:
export CRYOSPARC_CACHE_NUM_THREADS=16
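For example, assuming worker_config.sh sits in your CryoSPARC folder (adjust the path if your setup keeps it elsewhere), the line can be appended like this:
echo 'export CRYOSPARC_CACHE_NUM_THREADS=16' >> /path/to/your/cryosparc/folder/worker_config.sh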
To reach the web interface from your own computer, set up an SSH tunnel to the login node where CryoSPARC is running, for example:
karho@laptop:~$ ssh -N -L 39049:localhost:39049 karho@berzelius1.nsc.liu.se
Then point your local browser to localhost:39049 and log in to CryoSPARC there.
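If you connect often, an entry in your local ~/.ssh/config keeps the tunnel command short; note that the port (39049 in the example above) may differ between CryoSPARC starts, so use whatever port is printed when CryoSPARC starts:
Host berzelius-cryosparc
    HostName berzelius1.nsc.liu.se
    User karho
    LocalForward 39049 localhost:39049
After that, ssh -N berzelius-cryosparc sets up the same tunnel.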
There are currently some startup problems with CryoSPARC. If you encounter problems and need help, please contact support. The web GUI might report that the GPU is not being used even when it is; this can safely be ignored unless you have other indications that it is true.