This system is in production. The SMHI iRODS system is used for archiving and sharing internal and external data.
For support, please use smhi-support@nsc.liu.se.
You need an account and a Yubikey before you can store data in the system. Please send a mail to smhi-support@nsc.liu.se to get an account and a Yubikey authentication device.
The SMHI iRODS system consists of a collection of virtual machines hosted on a Dell PowerEdge R730xd with 32 GB of physical memory and an 8-core Xeon Haswell E5-2600 processor. The system has 12 x 4 TB disks accessed via a PERC H710 RAID card. We use Xen as the virtualization software.
The virtual machines are defined as below:
Name             Memory (GB)  Cores  OS (GB)  Data (TB)  Role
smhi2-irods      16           4      1        2          iRODS iCAT server
smhi2-irods-www  4            1      1        1          iRODS web access server
smhi2-sr-001     4            1      1        32         iRODS storage server
The disks are attached to the storage server virtual machine; all data has been copied over from the old system.
We are running iRODS 4.0.3 with our custom changes and plan to upgrade to the recently announced version 4.1.5 soon.
Metadata is stored in a Postgres (V9.3) database on the iCAT server.
The user IDs should work the same way as on the old system. Only Yubikey access is supported on the iDrop web server.
The default resource is 'sr001p1', backed by 16 TB of disk space.
One way to access iRODS is via the command line interface, which should be familiar to Unix users. There are pre-built packages available from the iRODS web site for most Unix/Linux distributions. These are called iCommands and include ils, imv, ichmod, iget and iput for manipulating files.
To use them you need a configuration file, which is described below, and then the 'iinit' command to authenticate with the iRODS server. After that you can use the rest of the iRODS commands. They are documented at the iRODS web site https://docs.irods.org/4.1.1/icommands/user/. Using '-h' as the argument to a command prints a short help summary.
You can transfer files to and from the iRODS system using the commands 'iput'/'iget' or 'irsync'.
In the examples below, the triolith.nsc.liu.se login node in the Triolith cluster is used. If you are running on another cluster, substitute the appropriate login node name for that cluster in the commands.
There should be an environment file ‘.irodsEnv’ in the ‘.irods’ subdirectory of the home directory ($HOME/.irods/.irodsEnv) which contains information where and how to access the iRODS metadata (iCAT) server.
It looks like (placeholders are in <>):
# iRODS personal configuration file.
# iRODS server host name:
irodsHost 'smhi2-irods.nsc.liu.se'
# iRODS server port number:
irodsPort 1247
# Default storage resource name:
irodsDefResource 'sr001p1'
# Home directory in iRODS:
irodsHome '/smhi2/home/<username>'
# Current directory in iRODS:
irodsCwd '/smhi2/home/<username>'
# Account name:
irodsUserName '<username>'
# Zone:
irodsZone 'smhi2'
irodsAuthScheme 'PAM'
Create the file with your favourite editor.
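The file can also be created directly from the shell; a minimal sketch using a here-document, with the same values as the example above (replace the <username> placeholder with your account name):

```shell
# Create the iRODS environment file used by the iCommands.
# Replace <username> with your own account name.
mkdir -p "$HOME/.irods"
cat > "$HOME/.irods/.irodsEnv" <<'EOF'
irodsHost 'smhi2-irods.nsc.liu.se'
irodsPort 1247
irodsDefResource 'sr001p1'
irodsHome '/smhi2/home/<username>'
irodsCwd '/smhi2/home/<username>'
irodsUserName '<username>'
irodsZone 'smhi2'
irodsAuthScheme 'PAM'
EOF
```

After saving the file, run iinit to authenticate.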
For the new version 4.1.8 the configuration file format has been changed to JSON ($HOME/.irods/irods_environment.json). An example is given below.
{
"irods_host": "smhi2-irods.nsc.liu.se",
"irods_port": 1247,
"irods_default_resource": "sr001p1",
"irods_home": "/smhi2/home/username",
"irods_cwd": "/smhi2/home/username",
"irods_user_name": "username",
"irods_zone_name": "smhi2",
"irods_client_server_negotiation": "request_server_negotiation",
"irods_client_server_policy": "CS_NEG_REFUSE",
"irods_encryption_key_size": 32,
"irods_encryption_salt_size": 8,
"irods_encryption_num_hash_rounds": 16,
"irods_encryption_algorithm": "AES-256-CBC",
"irods_default_hash_scheme": "SHA256",
"irods_match_hash_policy": "compatible",
"irods_server_control_plane_port": 1248,
"irods_server_control_plane_key": "TEMPORARY__32byte_ctrl_plane_key",
"irods_server_control_plane_encryption_num_hash_rounds": 16,
"irods_server_control_plane_encryption_algorithm": "AES-256-CBC",
"irods_maximum_size_for_single_buffer_in_megabytes": 32,
"irods_default_number_of_transfer_threads": 4,
"irods_transfer_buffer_size_for_parallel_transfer_in_megabytes": 4,
"irods_authentication_scheme": "PAM"
}
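A quick way to catch typos in the JSON file is to run it through a JSON parser. A sketch that writes an abbreviated version of the example above (the full example should of course be used in practice) and checks that it parses:

```shell
# Write an abbreviated irods_environment.json and verify it is valid JSON.
# "username" is a placeholder for your account name.
mkdir -p "$HOME/.irods"
cat > "$HOME/.irods/irods_environment.json" <<'EOF'
{
    "irods_host": "smhi2-irods.nsc.liu.se",
    "irods_port": 1247,
    "irods_zone_name": "smhi2",
    "irods_user_name": "username",
    "irods_authentication_scheme": "PAM"
}
EOF
python3 -m json.tool "$HOME/.irods/irods_environment.json" > /dev/null && echo "valid JSON"
```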
The iRODS iCommands client is loaded through the module system.
Log in to a SNIC cluster and execute:
module load irods
iinit
To use the Yubikeys you need a correct iCommands environment file as described above, with the authentication scheme set to PAM.
Insert the Yubikey into an available USB slot on your computer.
Type iinit.
Touch the conductive surface on the Yubikey to send a one-time password to the system.
$ iinit
Enter your current PAM (system) password:
$ ils
/smhi2/home/<username>
After that the usual iCommands can be used for 8 hours before you need to authenticate again.
The easiest way to use the iRODS system is via the iCommands command line interface. There are pre-built binary packages available for download from the iRODS web site for the major Linux distributions.
It is also possible to build them from the source distribution. Download the source tarball and, from the packaging subdirectory, run:
./build.sh -r --run-in-place icommands
The iCommands executables are built under the bin subdirectory; adding that directory to PATH lets you use the locally built copy of the iRODS commands when installing the binary package is not feasible.
For uploading files to iRODS we can use:
iput -k -K -N 0 -T -X rest --retries 3 --lfrestart lfrest -r localdir irodsdir
or
irsync -s -r localdir i:remotedir
To check the progress and see the space used, run:
iquest "select sum(DATA_SIZE) where COLL_NAME like '/zone/directory%'"
The iquest command is very useful; iquest -h gives a few example queries.
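Note that iquest reports DATA_SIZE sums in bytes. Assuming GNU coreutils is available on the login node, numfmt can convert the number to a human-readable unit:

```shell
# Convert a byte count (as returned by iquest) to a human-readable size.
# 17592186044416 bytes = 16 TiB.
numfmt --to=iec 17592186044416
# prints 16T
```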
A short summary of the iCommands is at http://docs.snic.se/wiki/SweStore/iRODS_icommand. There is a web interface available at smhi2-irods-www.nsc.liu.se, where you can log in with the Yubikey and browse your files.
There is also the iDrop java client which is available from https://github.com/DICE-UNC/idrop.
There is a summary of the iRODS clients at http://irods.org/post/irods-user-interfaces.