Tetralith

Tetralith is NSC's largest HPC cluster. It replaced NSC's previous HPC cluster Triolith in 2018. Tetralith is funded by SNIC and is used for research by Swedish research groups. Access to Tetralith is granted by SNIC.

Tetralith consists of 1908 compute nodes, each with two Intel Xeon Gold 6130 CPUs with 16 cores each, giving a total of 61056 CPU cores. The performance of the complete system is around 3 Pflop/s (LINPACK Rmax).
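
The quoted total is simple arithmetic from the node and socket counts. A short sanity check in Python, using only the numbers already given above:

    # 1908 nodes x 2 CPUs per node x 16 cores per CPU
    nodes = 1908
    cores_per_node = 2 * 16
    print(nodes * cores_per_node)  # 61056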

In June 2019, Tetralith was placed [74th on the TOP500 List](https://www.top500.org/list/2019/06/).

Hardware and software environment

There are 1832 "thin" nodes with 96 GiB of primary memory (RAM) and 60 "fat" nodes with 384 GiB. Each compute node has a local SSD disk where applications can store temporary files (approximately 200 GiB per thin node, 900 GiB per fat node).
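
As an illustration of how a job might use the node-local SSD, here is a minimal Python sketch. It assumes the scratch area is exposed through an environment variable named SNIC_TMP; that name is an assumption here, so check the Tetralith getting started guide for the actual convention:

    import os
    import tempfile

    # SNIC_TMP is assumed to point at the per-job node-local scratch area;
    # fall back to the system temp dir if it is not set.
    scratch = os.environ.get("SNIC_TMP", tempfile.gettempdir())

    # Write intermediate files to the local SSD rather than shared storage.
    path = os.path.join(scratch, "intermediate.dat")
    with open(path, "wb") as f:
        f.write(b"temporary data")
    print("wrote", path)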

All Tetralith nodes are interconnected with a 100 Gbps Intel Omni-Path network, which is also used to connect to the existing disk storage. The Omni-Path network is similar to the FDR InfiniBand network in Triolith (e.g. still a fat-tree topology).
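
Applications normally use the Omni-Path fabric transparently through MPI. The following is a minimal point-to-point sketch using mpi4py; it assumes an MPI stack and mpi4py are available (on Tetralith these would typically be provided via the module system):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Ping a 1 MiB message from rank 0 to rank 1; with the ranks placed on
    # different nodes, the transfer goes over the Omni-Path network.
    if rank == 0:
        comm.send(b"x" * (1 << 20), dest=1, tag=0)
        print("rank 0: sent 1 MiB")
    elif rank == 1:
        data = comm.recv(source=0, tag=0)
        print("rank 1: received", len(data), "bytes")

Run with, for example, mpirun -n 2 python pingpong.py.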

The hardware was delivered by ClusterVision B.V.

The servers used are Intel HNS2600BPB compute nodes, hosted in the 2U Intel H2204XXLRE chassis and equipped with two Intel Xeon Gold 6130 CPUs, for a total of 32 CPU cores per compute node.

(No) GPU nodes on Tetralith

NSC will soon add GPUs to a number of Tetralith compute nodes. We estimate that these will become available to users in the first half of 2020.

GPU nodes are available on other SNIC systems, e.g. Kebnekaise and Tegner.

Additional information

Tetralith migration guide

Tetralith getting started guide

Tetralith applications

Module system

Disk storage

Tetralith press release

