We consider the storage system to be very reliable. It is based on the same proven technology, IBM Spectrum Scale (formerly GPFS), as the previous system, which we also consider to have been very reliable and which ran from 2007 to 2014 without major problems. It has also been improved in several ways, e.g.:
Data on the system is protected against multiple disk failures using “8+2 Reed-Solomon” encoding or better (i.e. two disks out of a group of 10 can fail without affecting access to data). Combined with the short rebuild times after a disk failure, this makes the risk of losing data due to disk failures very low.
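To give a feel for why “two out of 10 can fail” makes the risk so low, here is a minimal sketch that estimates the probability of losing data in one disk group. It assumes a simple binomial model (independent disk failures) and an illustrative per-disk failure probability of 0.1% during a rebuild window; the function name and these numbers are our own illustration, not NSC-published figures.

```python
from math import comb

def loss_probability(p: float, n: int = 10, tolerated: int = 2) -> float:
    """Probability that more than `tolerated` of `n` disks fail,
    assuming independent failures with per-disk probability `p`.

    With 8+2 Reed-Solomon, data is lost only if 3 or more disks
    in a group of 10 fail before the rebuild completes.
    """
    return sum(
        comb(n, k) * p**k * (1 - p) ** (n - k)
        for k in range(tolerated + 1, n + 1)
    )

# Illustrative only: 0.1% chance of any one disk failing during a
# rebuild window gives a data-loss probability on the order of 1e-7.
print(f"{loss_probability(0.001):.2e}")
```

Real disk failures are not fully independent (shared age, batch, and enclosure effects), so this understates the true risk somewhat, but it shows why short rebuild times matter: they shrink the window in which three overlapping failures can occur.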
We also use “snapshots” to protect against you (or NSC) accidentally deleting files.
However, we do not protect you against all types of failures. Some events can still lead to loss of data, e.g.:
After weighing the value of our users’ data (which can often be recreated by re-running compute jobs), the cost of off-site backups (which would protect against most disasters and some software bugs, mistakes and intrusions), and the low risk of data loss from the failures described above, we have decided to perform only limited weekly tape backups of home directories and no tape backups of project storage.
Put differently: for a fixed amount of money available for storage, we bought hard drives, not backup tapes.
If your data is very valuable or irreplaceable, we recommend that you keep copies outside NSC. If you cannot store that data at your home university, we can recommend Swestore.