There are nodes reserved for short test and development jobs with Rocky Linux 9.
If you need to test jobs on CentOS 7, use the regular compute nodes.
The limits are the same as before: jobs of at most 1 hour, and at most 64 cores per user.
To use the Rocky Linux 9 test nodes, use --reservation=now. The reservation name will remain “now” even after January 8th.
(“now” as in “I need nodes now, with short or no queue time”)
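As a minimal sketch, a batch script targeting the Rocky Linux 9 test nodes might look like the following (the job name, time limit, and core count are placeholders, not NSC-mandated values):

```shell
#!/bin/bash
#SBATCH --job-name=devtest        # placeholder job name
#SBATCH --time=00:30:00           # must stay within the 1 h limit on the test nodes
#SBATCH --ntasks=4                # well within the 64-core-per-user limit
#SBATCH --reservation=now         # routes the job to the Rocky Linux 9 test nodes

# Your short test or development commands go here, e.g.:
srun hostname
```

The reservation can also be given on the command line instead of in the script, e.g. sbatch --reservation=now jobscript.sh.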
Please note that you will need the latest ThinLinc client (v4.15 or later) if you want to use ThinLinc on Tetralith. You can download the most recent client from https://www.cendio.com/.
The version number of your ThinLinc client is displayed in the upper right corner of the “ThinLinc Client” window (where you enter your password, etc.).
If you use a client that is too old, you will get an error message such as “Server unexpectedly closed connection”, “couldn’t set up a secure tunnel to ThinLinc agent” or something similar.
Jobs submitted from the CentOS 7 login node will (unless you override it by specifying a reservation) run on CentOS 7 compute nodes.
Jobs submitted from the Rocky 9 login node will (unless you override it by specifying a reservation) run on Rocky 9 compute nodes.
Initially, Rocky 9 jobs will be submitted to a reservation named “el9” and CentOS 7 jobs will not use a reservation. Mid-December we will swap this around (meaning CentOS 7 jobs will be submitted to an “el7” reservation and Rocky 9 jobs will not use a reservation).
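A sketch of how to inspect this from a login node (assuming the initial setup, before the Mid-December swap; these are standard Slurm commands, and the reservation name “el9” is taken from the text above):

```shell
# List the active reservations; the "el9" reservation holds the Rocky 9 nodes.
scontrol show reservation

# Submitted from the Rocky 9 login node, this is equivalent to the default:
sbatch --reservation=el9 jobscript.sh

# Check which reservation (if any) your queued jobs are using:
squeue -u $USER --Format=jobid,reservation
```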
The above means that for most users, things will “just work”.
However, if you for some reason submit a job to the “el9” reservation from a login or compute node running EL7, the job will likely fail. The same applies in the opposite direction. This is due to how Slurm works: it copies all environment variables at submission and restores them when the job starts, and some environment variables (e.g. PATH) are not compatible between the two parts of the system. If you need to submit jobs to both parts of the system from a single location, contact NSC Support for advice.
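If you do need to submit across the EL7/EL9 boundary, one general Slurm mechanism worth asking NSC Support about is --export=NONE, which stops Slurm from copying the submission shell's environment into the job. This is a sketch of a standard Slurm feature, not an NSC-verified recipe for Tetralith:

```shell
# --export=NONE: do not propagate the submission environment (PATH etc.) into
# the job; for sbatch this implies --get-user-env, so the job instead starts
# with a fresh login environment built on the compute node itself.
sbatch --export=NONE --reservation=el9 jobscript.sh
```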
nsc-boost-timelimit and nsc-boost-reservation will not work in the EL7 part of Tetralith. nsc-boost-priority should work normally.
This is due to how we separate EL7 from EL9 compute nodes using a reservation. The nsc-boost-tools were not written to handle a situation where “normal” jobs run in a reservation, and unfortunately we did not have time to rewrite those parts of nsc-boost-tools.
If you need a job’s time limit changed or a reservation created, please use the old method: contact NSC Support.