Each user has a home directory. If you are sm_abcde, your home directory will be /home/sm_abcde. If you need to refer to your home directory in a script, use $HOME instead of hardcoding the name.
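For example, a minimal script sketch using $HOME (the "results" directory name is just an illustration):

```shell
#!/bin/bash
# Write output under the user's home directory without hardcoding
# /home/sm_abcde -- $HOME expands to the right path for any user.
OUTDIR="$HOME/results"   # "results" is an illustrative name
mkdir -p "$OUTDIR"
echo "writing output to $OUTDIR"
```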
Your home directory is backed up to tape once per day. Use it for reasonable amounts of important data that cannot be recreated easily. If you need something restored from a backup, please contact smhi-support@nsc.liu.se.
User quotas are used on the home file system. The default is 100 GiB but can be increased if you need it (contact smhi-support@nsc.liu.se).
You can check /home/diskinfo (regenerated every half-hour) for usage and quota information.
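The exact layout of /home/diskinfo is not documented here, but grepping for your own username is a reasonable way to find your line. A sketch, guarded so it is harmless to run on machines where the file does not exist:

```shell
# Look up your own usage/quota line in the half-hourly report.
ME="${USER:-$(id -un)}"
if [ -r /home/diskinfo ]; then
    grep "$ME" /home/diskinfo
fi
```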
Note: Do not use the home directory for job run/work directories or similar. Use Accumulus storage for that, or even the node-local scratch storage (for things local to a node).
Freja and Bi shared home directories (Bi has since been decommissioned). If multiple SMHI FoU resources are available at the same time in the future, you can expect them to share home directories too.
Most of the time, yes. The system uses “snapshots” (read-only point-in-time views of the file system from which files can be restored).
Snapshots are taken at certain intervals and kept for a time. The exact intervals and retention times depend mainly on the amount of storage available for snapshots. At the time of writing, snapshots are taken every hour (kept for slightly over 24 hours), every day (kept for 8 days) and every month (kept for 95 days).
Snapshots are available only on /home.
To recover deleted files from a snapshot (or check the contents of a file as it was at an earlier time), look in $HOME/.zfs/snapshot.
There you will find one directory per available snapshot, containing all files in your home directory as they were when the snapshot was taken.
To “undelete” a file, simply copy it to a location outside the .zfs directory (e.g. cp .zfs/snapshot/20211208T1500/.bashrc my_old_bashrc).
Files created and deleted in the time between when two snapshots were taken cannot be restored.
Files that were deleted too long ago (before the currently oldest snapshot was taken) cannot be restored from snapshots.
Note that snapshots are not guaranteed to exist and are not a backup: if the /home storage runs out of space, snapshots will be deleted, and a catastrophic failure of the server would affect the snapshots as well. Actual backups are taken to tape once per day.
Oops, I have deleted the file “/home/sm_abcde/test.c”, I would like to get it back.
Show which daily snapshots have the file:
[sm_abcde@bi ~]$ ls -1 .zfs/snapshot/*/test.c
.zfs/snapshot/20211029T0000/test.c
.zfs/snapshot/20211030T0000/test.c
.zfs/snapshot/20211031T0000/test.c
.zfs/snapshot/20211101T0000/test.c
Let's restore the latest version of the file:
[sm_abcde@bi ~]$ cp .zfs/snapshot/20211101T0000/test.c .
Oops, I have very recently mangled the code in “test.c” so that it no longer works, I would like to get the working version back.
List the available recent snapshots of that file and what the respective checksums are:
[sm_abcde@bi ~]$ md5sum .zfs/snapshot/*/test.c
f119c865306c35e64eb00f65d7279664 .zfs/snapshot/20211101T0600/test.c
f119c865306c35e64eb00f65d7279664 .zfs/snapshot/20211101T0700/test.c
f119c865306c35e64eb00f65d7279664 .zfs/snapshot/20211101T0800/test.c
f119c865306c35e64eb00f65d7279664 .zfs/snapshot/20211101T0900/test.c
0086eab58e556408fcb6858e6a0cf52a .zfs/snapshot/20211101T1000/test.c
0086eab58e556408fcb6858e6a0cf52a .zfs/snapshot/20211101T1100/test.c
Looks like there are only two versions of that file, and that the change was introduced after the 09:00 snapshot. Let’s restore that:
[sm_abcde@bi ~]$ cp .zfs/snapshot/20211101T0900/test.c .
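If you want to be certain which snapshot holds the good version, you can also diff the current file against a snapshot copy before restoring. A sketch (the snapshot name is taken from the example above; adjust it to one that actually exists under your .zfs/snapshot directory, and the guard makes the snippet safe to run elsewhere):

```shell
# Show what changed between a snapshot copy and the current file.
SNAP="$HOME/.zfs/snapshot/20211101T0900/test.c"
if [ -e "$SNAP" ]; then
    diff "$SNAP" test.c
fi
```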
The bulk of the storage available on SMHI FoU resources belongs to the Accumulus storage system, with several generations of servers running the Lustre distributed file system software.
Depending on the group you belong to, you may have access to subdirectories on different Accumulus file systems. If you are user sm_abcde, use a command like ls -ld /nobackup/*/sm_abcde to show the directories available to you.
As indicated by the “nobackup” part of the name, there are no backups of these file systems.
There are no user quotas on these file systems, but group quotas are used on some of them. Have a look at the /nobackup/*/diskinfo files to see current information.
The Accumulus file systems are available on Freja (all file systems) and Tetralith (selected file systems; contact NSC if you cannot find your Accumulus data on Tetralith).
They will be available on future SMHI FoU resources too.
Each node has a scratch file system that can be used for temporary storage on that node while your job is running. The data will be deleted when the job finishes.
On Freja, approximately 860 GiB is available per node. You need to use the subdirectory created for you by the system and pointed to by the $SNIC_TMP environment variable.
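A sketch of a batch script that stages work through $SNIC_TMP (the job name, time limit and file names are illustrative; the mktemp fallback only exists so the script can be dry-run outside a job, where $SNIC_TMP is unset):

```shell
#!/bin/bash
#SBATCH -J example-job
#SBATCH -t 01:00:00
# Use the per-job node-local scratch directory; fall back to a temp
# directory when dry-running outside a job.
WORKDIR="${SNIC_TMP:-$(mktemp -d)}"
cd "$WORKDIR" || exit 1
# ... stage input data here, then run your application ...
echo "working in $WORKDIR"
# Scratch is wiped when the job ends, so copy results to permanent
# storage (e.g. your Accumulus directory) before the job finishes.
```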
The /software file system contains software installed by NSC. Users cannot write to that file system.
Most of the software is made available through the “module” system and can be listed using module avail. Some libraries may not have modules associated with them, so you might find it useful to browse the /software/apps directory for them.
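For example (the package name below is illustrative; module avail shows what actually exists, and the commands are guarded so the snippet does nothing on machines without the module system):

```shell
# List modules, search for a package, and browse the install tree.
if command -v module >/dev/null 2>&1; then
    module avail            # list everything
    module avail netcdf     # search for one package (example name)
fi
APPS=$(ls /software/apps 2>/dev/null || echo "/software/apps not mounted here")
echo "$APPS"
```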
You can find documentation for installed software here.