OpenFOAM is free, open-source software for computational fluid dynamics (CFD).
Official homepage of the OpenFOAM trademark holder: www.openfoam.com (versions named YYMM, e.g. 1806).
Homepage of the OpenFOAM Foundation: www.openfoam.org (versions numbered N.n, e.g. 5.0).
OpenFOAM is available via the module system on Tetralith and Sigma. For more information about available versions, please see the Tetralith and Sigma Software list.
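To get an overview of the installed OpenFOAM modules, you can also list them directly on the login node, assuming the standard module command:

module avail OpenFOAM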
OpenFOAM.org

Version | NSC Module | Compilation | Integer Size
---|---|---|---
2.3.1 | OpenFOAM/2.3.1-opt-int32-hpc1-intel-2023a-eb | Optimized | 32-Bit
3.0.1 | OpenFOAM/3.0.1-opt-int32-hpc2-intel-2023a-eb | Optimized | 32-Bit
6 | OpenFOAM/6-opt-int32-hpc2-intel-2023a-eb | Optimized | 32-Bit
6 | OpenFOAM/6-opt-int64-hpc2-intel-2023a-eb | Optimized | 64-Bit
7 | OpenFOAM/7-opt-int32-hpc2-intel-2023a-eb | Optimized | 32-Bit
7 | OpenFOAM/7-opt-int64-hpc2-intel-2023a-eb | Optimized | 64-Bit
8 | OpenFOAM/8-opt-int32-hpc1-intel-2023a-eb | Optimized | 32-Bit
9 | OpenFOAM/9-opt-int32-hpc1-intel-2023a-eb | Optimized | 32-Bit
10 | OpenFOAM/10-opt-int32-hpc1-intel-2023a-eb | Optimized | 32-Bit
OpenFOAM.org

Version | NSC Installation Path
---|---
2.3.1 | /software/sse2/tetralith_el9/manual/OpenFOAM/OpenFOAM.org/2.3.1/intel-2023a-eb/hpc1/Opt/Int32
3.0.1 | /software/sse2/tetralith_el9/manual/OpenFOAM/OpenFOAM.org/3.0.1/intel-2023a-eb/hpc2/Opt/Int32
6 | /software/sse2/tetralith_el9/manual/OpenFOAM/OpenFOAM.org/6/intel-2023a-eb/hpc2/Opt/Int32
6 | /software/sse2/tetralith_el9/manual/OpenFOAM/OpenFOAM.org/6/intel-2023a-eb/hpc2/Opt/Int64
7 | /software/sse2/tetralith_el9/manual/OpenFOAM/OpenFOAM.org/7/intel-2023a-eb/hpc2/Opt/Int32
7 | /software/sse2/tetralith_el9/manual/OpenFOAM/OpenFOAM.org/7/intel-2023a-eb/hpc2/Opt/Int64
8 | /software/sse2/tetralith_el9/manual/OpenFOAM/OpenFOAM.org/8/intel-2023a-eb/nsc1/Opt/Int32
9 | /software/sse2/tetralith_el9/manual/OpenFOAM/OpenFOAM.org/9/intel-2023a-eb/nsc1/Opt/Int32
10 | /software/sse2/tetralith_el9/manual/OpenFOAM/OpenFOAM.org/10/intel-2023a-eb/nsc1/Opt/Int32
OpenFOAM.com

Version | NSC Module | Compilation | Integer Size
---|---|---|---
2106 | OpenFOAM/2106-220610-opt-int32-hpc1-intel-2023a-eb | Optimized | 32-Bit
2112 | OpenFOAM/2112-220610-opt-int32-hpc1-intel-2023a-eb | Optimized | 32-Bit
2306 | OpenFOAM/2306-opt-int32-hpc1-intel-2023a-eb | Optimized | 32-Bit
OpenFOAM.com

Version | NSC Installation Path
---|---
2106 | /software/sse2/tetralith_el9/manual/OpenFOAM/OpenFOAM.com/2106-220610/intel-2023a-eb/hpc1/Opt/Int32
2112 | /software/sse2/tetralith_el9/manual/OpenFOAM/OpenFOAM.com/2112-220610/intel-2023a-eb/hpc1/Opt/Int32
2306 | /software/sse2/tetralith_el9/manual/OpenFOAM/OpenFOAM.com/2306/intel-2023a-eb/nsc1/Opt/Int32
Resource | Description |
---|---|
Håkan Nilsson, Chalmers | CFD with OpenSource Software |
Håkan Nilsson, Chalmers | Tips and Tricks to install OpenFOAM
openfoamwiki.net | Unofficial OpenFOAM Wiki |
CFD Online | [CFD Online OpenFOAM Forum](https://www.cfd-online.com/Forums/openfoam/)
HPC2N | Example of how to compile your own OpenFOAM application |
T. Holzmann | Collection of OpenFOAM Tutorials |
Free OpenFOAM Book | The OpenFOAM Technology Primer |
Load the OpenFOAM module corresponding to the version that you want to use, e.g.:
module load OpenFOAM/10-opt-int32-hpc1-intel-2023a-eb
Next, you have to source the OpenFOAM bashrc file to set the OpenFOAM environment variables:
source $FOAM_BASHRC
Now you can start an OpenFOAM program, e.g.:
interFoam <options>
To execute parallel programs, use the command mpprun, e.g.
mpprun interFoam <options>
At NSC, mpprun is used instead of the standard mpirun. mpprun automatically picks up the number of tasks from the slurm environment.
Typically, you will submit your job using a slurm batch script. Example using 4 CPU cores and the damBreak test case ($FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/):
#!/bin/bash
#
#SBATCH -n 4
#SBATCH -t 00:20:00
#SBATCH -J jobname
#SBATCH -A SNIC-xxx-yyy
module load OpenFOAM/10-opt-int32-hpc1-intel-2023a-eb
source $FOAM_BASHRC
blockMesh -case damBreak
setFields -case damBreak
decomposePar -case damBreak
mpprun interFoam -parallel -case damBreak &> result.out
OpenFOAM allows you to add user-defined applications and libraries. In order to compile these, you have to use the same compiler that was used to compile OpenFOAM itself. The correct build environment can be determined from the name of the OpenFOAM module. For example, the module OpenFOAM/10-opt-int32-hpc1-intel-2023a-eb corresponds to the build environment buildenv-intel/2023a-eb. The basic steps to prepare your compilation are as follows:
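A minimal sketch of these preparation steps, assuming the module names above (the application path is a placeholder, and the final wmake step may differ for your code):

# load the build environment that matches the OpenFOAM module
module load buildenv-intel/2023a-eb
# load the OpenFOAM module and set the OpenFOAM environment
module load OpenFOAM/10-opt-int32-hpc1-intel-2023a-eb
source $FOAM_BASHRC
# compile your application from its source directory using wmake
cd <path_to_your_application>
wmake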
These are just the very basic steps; depending on the application that you want to compile, it may be necessary to set additional variables.
User-defined applications are typically installed in the directory $WM_PROJECT_USER_DIR, which refers to an OpenFOAM directory in your $HOME directory. For larger applications, the $HOME directory may not be the ideal place to store your data; consider using a directory under /proj instead. If you want to change an OpenFOAM directory path, you can simply copy the $FOAM_BASHRC file (bashrc in the OpenFOAM directory etc) and make the changes that suit your requirements in your local copy of bashrc. In this case, you also have to source your local copy of bashrc.
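A hedged sketch of this approach (the paths are placeholders; which variable to adjust depends on the directory you want to relocate):

# copy the OpenFOAM bashrc file to a location of your choice
cp $FOAM_BASHRC /proj/<your_project>/openfoam/bashrc_local
# edit bashrc_local, e.g. point WM_PROJECT_USER_DIR to a directory under /proj,
# then source your local copy instead of $FOAM_BASHRC
source /proj/<your_project>/openfoam/bashrc_local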
More information on how to compile your own OpenFOAM application can be found, e.g., at HPC2N (Example of how to compile your own OpenFOAM application) and in the official documentation (OpenFOAM User Guide: Compiling applications & libraries).
OpenFOAM.com includes several community packages. Since OpenFOAM-v1712, community contributions are included using the git submodule system. Amongst others, this includes the grid generation software cfmesh. Since version v2006, the community packages no longer seem to be installed automatically. If you install a version from OpenFOAM.com yourself and cfmesh is missing, you can install it in the following way.
Use the following commands to obtain the source files for cfmesh:
git submodule init
git submodule update
When building with the option -prefix=openfoam, cfmesh will be installed in an OpenFOAM directory in your home directory. In the directory "modules", you also find a file README.md that provides more details about the build locations and the -prefix options.
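A hedged sketch of the subsequent build step, assuming cfmesh ships the usual Allwmake build script (check modules/README.md for the exact build commands and -prefix options):

# build cfmesh from the modules directory of your OpenFOAM.com source tree
cd $WM_PROJECT_DIR/modules/cfmesh
./Allwmake -prefix=openfoam   # with -prefix=openfoam, cfmesh is installed into an OpenFOAM directory in $HOME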
OpenFOAM creates a large number of output files, particularly when running in parallel. To reduce the number of files, a new file format, the collated file format, was introduced in OpenFOAM 7 and OpenFOAM v1712. We refer to the official documentation for more details.
One should be aware that the collated file format does not seem to be supported by ParaView. We describe further details on this subject in the next section.
There are several ways to read your OpenFOAM output data with ParaView:
paraFoam
In the OpenFOAM User Guide, the script paraFoam is mentioned as the way to read your OpenFOAM data with ParaView. To use paraFoam, you first have to load the ParaView module as well as the OpenFOAM module.
Important: The *.foam file must be placed in the correct directory, namely the main (top-level) directory of your case, i.e. the directory level where you typically also find directories such as constant and system. The *.foam file should not be placed in the same directory as the individual output files (e.g. the data for U, alpha.water, p); otherwise it will not work.
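A minimal sketch of this workflow (the ParaView module name is a placeholder; check the software list for the installed versions):

module load ParaView/<version>   # placeholder module name
module load OpenFOAM/10-opt-int32-hpc1-intel-2023a-eb
source $FOAM_BASHRC
cd <path_to_your_case>           # top-level case directory, containing constant, system, ...
touch case.foam                  # empty marker file read by the ParaView OpenFOAM reader
paraview &                       # then open case.foam via File > Open, or start ParaView via paraFoam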
foamToVTK
Another alternative is to convert the OpenFOAM data into the VTK data format, which can be read by ParaView. This is done via the command foamToVTK. It creates a directory VTK, where all the converted data is stored; in its subdirectories you will find output files in the VTK format (*.vtk files). The disadvantage of this approach is that the data is stored in two different formats, so the amount of data is roughly doubled.
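A short usage sketch (the -latestTime option restricts the conversion to the last written time step):

source $FOAM_BASHRC
cd <path_to_your_case>
foamToVTK              # convert all time steps into the VTK directory
foamToVTK -latestTime  # convert only the latest time step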
Decomposed Data
If you run OpenFOAM in parallel, you typically have a directory for each processor, containing the data for each time step. The directories are named processor0, processor1, etc. For a parallel case using 4 partitions, the directory structure of your case looks as follows:
0 Allrun case.foam constant processor0 processor1 processor2 processor3 system
Notice that the file case.foam only has to be located in the top directory, not within each processor directory. To open the decomposed data within ParaView, go to the section Properties (case.foam) and choose the option Case Type > Decomposed Case.
Reconstructed Data
You can also reconstruct the decomposed data into one dataset for the complete domain. OpenFOAM provides the command
reconstructPar
to reconstruct the global domain, including the mesh and solution data. The case is reconstructed by merging the sets of time directories from each processor* directory into a single set of time directories. All time directories will be saved in the top directory of the case. To load the reconstructed data with ParaView, go to the section Properties (case.foam) and choose the option: Case Type > Reconstructed Case.
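A short sketch (the -latestTime option restricts the reconstruction to the last written time step):

source $FOAM_BASHRC
cd <path_to_your_case>
reconstructPar              # reconstruct all time steps
reconstructPar -latestTime  # or: reconstruct only the latest time step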
Collated File Format
In newer versions of OpenFOAM, you have the option to save the data in a more compact way, the collated file format, introduced in OpenFOAM 7 and OpenFOAM v1712. The data is no longer saved in a separate subdirectory for each processor, which helps to reduce the number of files. Instead, the data is saved in a directory called processors<number of partitions>. Example: using 4 partitions, the directory structure looks as follows:
0 Allrun case.foam constant processors4 system
The collated data format does not seem to be supported by ParaView. That means you cannot directly load the decomposed data with ParaView: either you have to reconstruct the data (reconstructPar), or you have to convert the data into the standard uncollated format.
How to convert collated to uncollated file format
Existing files can be converted from the collated to the uncollated data format (and vice versa) using the OpenFOAM utility foamFormatConvert. It can be executed as follows:
mpprun foamFormatConvert -parallel -fileHandler uncollated
or
mpprun foamFormatConvert -parallel -fileHandler collated
For further details, we refer to the OpenFOAM documentation on Parallel I/O.
Important: To convert the decomposed data, you have to use the same number of CPU cores (tasks) as you used for your simulation; foamFormatConvert reads the number of partitions from the OpenFOAM file decomposeParDict. For example, if your simulation used 128 CPU cores, then you also have to run foamFormatConvert with 128 cores. Particularly when using a high number of cores, you should carefully choose the file format that suits your needs. The way you post-process your data, e.g. with ParaView, has to be taken into account as well.
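A hedged batch-script sketch for such a conversion (the module name is taken from the table above; the core count and account are placeholders and must match your own case):

#!/bin/bash
#SBATCH -n 128                 # must match the number of partitions of the simulation
#SBATCH -t 01:00:00
#SBATCH -J convert
#SBATCH -A <your_account>
module load OpenFOAM/10-opt-int32-hpc1-intel-2023a-eb
source $FOAM_BASHRC
mpprun foamFormatConvert -parallel -fileHandler uncollated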
OpenFOAM offers several utility programs to convert meshes from different grid formats into the OpenFOAM format. Typically, these converters only run in serial, not in parallel. To convert a mesh from Fluent to the OpenFOAM format, you can use the program fluent3DMeshToFoam. There is also a utility fluentMeshToFoam, which seems to be an older version for 2D meshes.
To convert a Fluent mesh to the OpenFOAM format, you have to create an OpenFOAM case, e.g. in a directory case_dir. If the Fluent mesh is located in case_dir, you can simply run the converter: fluent3DMeshToFoam fluent_mesh.msh. If the Fluent mesh file is saved in a different directory, you have to specify the case directory using the option -case <path_to_case_dir>:
Fluent mesh is located in case_dir
fluent3DMeshToFoam fluent_mesh.msh
Fluent mesh is located in a different directory than case_dir
fluent3DMeshToFoam fluent_mesh.msh -case <path_to_case_dir>
fluent3DMeshToFoam has problems converting large meshes when using an OpenFOAM version that has been compiled with Intel compilers. For example, when using OpenFOAM/7-nsc1-intel-2018b-eb, it was possible to convert a Fluent mesh with 45 million cells, but not a mesh with 145 million cells; a program error occurred near the end of the conversion. The same problem was observed for other versions of OpenFOAM, and the reason for this failure is not clear. We found that OpenFOAM versions compiled with the GCC compiler do not have this problem: using OpenFOAM/7-nsc1-gcc-2018a-eb-opt, we were able to convert a Fluent mesh with 145 million cells.
To run the mesh converter, it is important to allocate sufficient memory. The memory requirement of fluent3DMeshToFoam is about 2.5 times the file size of the Fluent mesh on disk. For example, a 13 GB mesh file needs about 32.5 GB of main memory, and a mesh with a file size of 41 GB requires about 103-108 GB. The file size can be determined using the Linux command ls -lh.
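For example (hypothetical file name):

ls -lh fluent_mesh.msh   # e.g. 13G on disk -> roughly 2.5 x 13 GB, i.e. about 33 GB of RAM required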
The available memory of Tetralith compute nodes is summarized in the following table:
Node Type | Memory (RAM) | Available nodes on Tetralith |
---|---|---|
Standard | 96 GB | 1832 |
Fat | 384 GB | 60 |
If the mesh fits into 96 GB of memory, you should use a standard node. For larger meshes, you have to use a fat node. For example, a Fluent mesh file that takes about 41 GB on disk needs about 103-108 GB of RAM with fluent3DMeshToFoam; in this case, one has to use a fat node. Examples of requesting an interactive session for the conversion:
interactive -n1 --mem=30000 --time=02:00:00 # 1 core, standard node, 30GB memory, time=2h
interactive -n1 --exclusive --time=01:00:00 # 1 core, standard node, exclusive=96GB memory, time=1h
interactive -n1 -C fat --mem=200000 --time=03:00:00 # 1 core, fat node, 200GB memory, time=3h
interactive -n1 -C fat --exclusive --time=03:00:00 # 1 core, fat node, exclusive=384GB memory, time=3h
Getting access to a fat node may involve a longer waiting time, due to the limited number of fat nodes. In this case, we recommend converting the Fluent mesh using a slurm batch script. As already mentioned above, there are problems with the conversion of large meshes when using an OpenFOAM version compiled with Intel compilers; to convert a Fluent mesh with fluent3DMeshToFoam, using an OpenFOAM version compiled with GCC seems to be the safer option. Example of how to convert a Fluent mesh within a slurm batch script:
#!/bin/bash
#SBATCH -A <your_account>
#SBATCH -n 1
#SBATCH -t 03:00:00
#SBATCH -J converter
#SBATCH --exclusive
#SBATCH -C fat
module load OpenFOAM/7-nsc1-gcc-2018a-eb-opt
source $FOAM_BASHRC
fluent3DMeshToFoam channel_169Mcells.msh
Here, the Fluent mesh channel_169Mcells.msh is located inside the OpenFOAM case directory. You have to adjust your account accordingly.
It is generally possible to run OpenFOAM within a Singularity container. This can be convenient if a Singularity or Docker image already exists that contains the required OpenFOAM version. For more information about Singularity, we refer to the following NSC webpage: https://www.nsc.liu.se/support/singularity/. Note that we only support Singularity, not Docker, but one can build a Singularity image from a Docker image.
Several OpenFOAM versions can be found on Docker Hub (https://hub.docker.com/), where you can search for "openfoam". As an example, we use the version openfoam/openfoam7-paraview56, which should be listed when you search for "openfoam" on https://hub.docker.com/.
For the OpenFOAM version of your choice, you will find the following information on the right-hand side of the webpage for the selected version:
Docker Pull Command
docker pull openfoam/openfoam7-paraview56
Do not execute this command! We only need to know the source of this version, which is "openfoam/openfoam7-paraview56" in this case.
Next, we want to build a Singularity image on Tetralith/Sigma from the image that is available on docker hub:
singularity build <image_name.sif> docker://<source>
Example:
mkdir <your_proj_directory>/OPENFOAM_7_SINGULARITY
cd <your_proj_directory>/OPENFOAM_7_SINGULARITY
singularity build openfoam7.sif docker://openfoam/openfoam7-paraview56
To interactively access the Singularity container, you have to create an interactive shell within the container:
singularity shell <container image>
Example: singularity shell openfoam7.sif
You can see that you are inside the container, as the command line prompt changes to "Singularity <container image>:". To exit the container, simply type "exit".
To run OpenFOAM within the Singularity container, we have to know the following details about how OpenFOAM is installed within the container:
1) The location of the bashrc file within the container, which has to be sourced to set the OpenFOAM environment.
2) If you want to run OpenFOAM in parallel, you have to know which MPI version is used within the Singularity container. The MPI version can be identified using "mpirun -version", which is available after sourcing the bashrc file:
source <bashrc_file>
mpirun -version
Example, openfoam/openfoam7-paraview56:
source /opt/openfoam7/etc/bashrc
mpirun -version
Output: mpirun (Open MPI) 2.1.1
More details of the specific Open MPI installation can be obtained via the command: "ompi_info -a"
As an example, these details are listed for the following two OpenFOAM versions:
docker source | bashrc path | MPI version | GCC |
---|---|---|---|
openfoam/openfoam7-paraview56 | /opt/openfoam7/etc/bashrc | Open MPI 2.1.1 | 7.3 |
openfoamplus/of_v1812_centos73 | /opt/OpenFOAM/OpenFOAM-v1812/etc/bashrc | Open MPI 1.10.4 | 4.8 |
At this stage, we have found the exact location of the bashrc file, the MPI version, and the GCC version within the container. We exit the Singularity container by typing "exit".
Any command within the Singularity container can be executed via the Singularity sub-command “exec”
singularity exec <container image > <command>
In order to execute OpenFOAM commands, we first have to source the bashrc file within the container. Since we want to start an OpenFOAM command from outside of the container, we have to source the bashrc file and call the OpenFOAM command within the same "exec" command line; otherwise, the environment variables from the bashrc file are not set properly. This is done as follows:
singularity exec <container image> bash -c "source <bashrc> && <OpenFOAM command>"
Example, openfoam/openfoam7-paraview56:
singularity exec openfoam7.sif bash -c "source /opt/openfoam7/etc/bashrc && interFoam -case damBreak"
Here, <container image> is the full path to the Singularity container image, <bashrc> is the full path to the bashrc file within the container, and <OpenFOAM command> is the OpenFOAM command that you want to execute within the container.
The standard way to execute MPI applications with Singularity containers is to run the native mpirun command from the host (Tetralith/Sigma), which will start Singularity containers and ultimately MPI ranks within the containers.
General way to execute a parallel OpenFOAM application:
mpirun singularity exec <container image> bash -c "source <bashrc> && <OpenFOAM command>"
In our experience, it is mandatory that the MPI version of the host (Tetralith/Sigma) and the MPI version within the container are EXACTLY the same. This is the reason why we first identified the MPI version within the container using mpirun -version. In the case of the Docker image "openfoam/openfoam7-paraview56", the MPI version in the container is Open MPI 2.1.1. That means that on Tetralith/Sigma we have to use an "mpirun" which belongs to Open MPI 2.1.1. Any other version, for example Open MPI 2.1.2, will not work: the application will complain about the two different versions and finally crash. Even when using identical MPI versions, the process of starting parallel tasks may still fail. The MPI installation within the container must contain all MPI components that are needed on the host side; depending on how MPI was configured/compiled within the Singularity container, important components may be missing (e.g. components from the Modular Component Architecture, MCA). Whether an individual Singularity image harmonizes with the environment on Tetralith/Sigma therefore has to be tested.
You find different Open MPI versions on Tetralith/Sigma in the following directory
/software/sse/manual/openmpi/
There, you have to identify the mpirun command, which resides in the corresponding "bin" sub-directory. You will find sub-directories for different GCC compilers, e.g. g48 = GCC 4.8, g73 = GCC 7.3. If possible, it is recommended to use the compiler version that matches the compiler version within the Singularity image.
For example, the mpirun commands for the following Docker images are:
docker source | mpirun on Tetralith/Sigma |
---|---|
openfoam/openfoam7-paraview56 | /software/sse/manual/openmpi/2.1.1/g73/nsc1/bin/mpirun |
openfoamplus/of_v1812_centos73 | /software/sse/manual/openmpi/1.10.4/g48/nsc1/bin/mpirun |
You have to use such a specific version of "mpirun" on Tetralith/Sigma to start your Singularity container in parallel. Nevertheless, there may still be conflicts if the MPI version within the container and the MPI version on Tetralith/Sigma are compiled differently.
Example, openfoam/openfoam7-paraview56, Open MPI 2.1.1:
/software/sse/manual/openmpi/2.1.1/g73/nsc1/bin/mpirun singularity exec openfoam7.sif bash -c "source /opt/openfoam7/etc/bashrc && interFoam -parallel -case damBreak"
Complete example: build the Singularity image and inspect it in an interactive session:
interactive -n1
mkdir <your_proj_directory>/OPENFOAM_7_SINGULARITY
cd <your_proj_directory>/OPENFOAM_7_SINGULARITY
singularity build openfoam7.sif docker://openfoam/openfoam7-paraview56
singularity shell openfoam7.sif
source /opt/openfoam7/etc/bashrc
mpirun -version # Output: mpirun (Open MPI) 2.1.1
# copy damBreak tutorial example to Tetralith/Sigma
cp -R $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/ <your_proj_directory>/OPENFOAM_7_SINGULARITY
exit #exit from Singularity shell
exit #exit from interactive session
where <your_proj_directory> is your project directory under /proj. For simplicity, we copy the example data of damBreak into the same directory as the Singularity image.
# In case of an interactive session:
interactive -n4
# In case of a slurm batch script:
#!/bin/bash
#
#SBATCH -n 4
#SBATCH -t 00:20:00
#SBATCH -J jobname
#SBATCH -A SNIC-xxx-yyy
# --- END OF SLURM BATCH SCRIPT HEADER
# change to the directory that contains the damBreak case directory
cd <your_proj_directory>/OPENFOAM_7_SINGULARITY
# full path to Singularity image
FOAM_SINGULARITY_IMAGE=<your_proj_directory>/OPENFOAM_7_SINGULARITY/openfoam7.sif
# Location of the bashrc file within the Singularity container
FOAM_BASHRC=/opt/openfoam7/etc/bashrc
# mpirun command on Tetralith/Sigma, according to correct MPI version
FOAM_MPI_RUN=/software/sse/manual/openmpi/2.1.1/g73/nsc1/bin/mpirun
# execute blockMesh (serial)
singularity exec $FOAM_SINGULARITY_IMAGE bash -c "source $FOAM_BASHRC && blockMesh -case damBreak"
# execute setFields (serial)
singularity exec $FOAM_SINGULARITY_IMAGE bash -c "source $FOAM_BASHRC && setFields -case damBreak"
# execute decomposePar (serial)
singularity exec $FOAM_SINGULARITY_IMAGE bash -c "source $FOAM_BASHRC && decomposePar -case damBreak"
# execute interFoam in parallel
$FOAM_MPI_RUN singularity exec $FOAM_SINGULARITY_IMAGE bash -c "source $FOAM_BASHRC && interFoam -parallel -case damBreak &> result.out"
Even if the Open MPI version on Tetralith/Sigma and the Open MPI version within the Singularity container have the same version number, there may still be problems. Depending on how MPI was configured/compiled within the Singularity container, important components may be missing (e.g. components from the Modular Component Architecture, MCA). As mentioned earlier, the Singularity image docker://openfoamplus/of_v1812_centos73 works without problems, but the image docker://openfoamplus/of_v1806_centos73 does not, although both versions use Open MPI 1.10.4. In version 1806, some MCA components are missing compared to version 1812.
In this case, we can get version 1806 to work by binding the host MPI-related files to the container at launch time. That means we create a link (binding) between directories within the container and the corresponding directories on Tetralith. This way, the container picks up the directories directly from Tetralith instead of from the container itself. The binding is applied at runtime, when we execute a Singularity command. This approach is flexible enough to be used on different platforms and does not require any changes to the Singularity image.
The concept of directory binding is also described in the following publication: http://www.hpc-europa.eu/public_documents, Document D12.3 - Using container technologies to improve portability of applications in HPC (30/04/2019).
Binding of directories is accomplished using the Singularity option -B. Examples for the Singularity commands “exec” and “shell” are as follows:
singularity exec -B "<directory on Tetralith>:<directory in container>" <container image> <command>
singularity shell -B "<directory on Tetralith>:<directory in container>" <container image>
The binding option has the general form:
-B "source:target"
where "source" is the directory on Tetralith, and "target" is the corresponding directory in the Singularity image that we want to redirect (bind) to the directory on Tetralith. If the source directory and the target directory have identical names, one only needs to mention the source directory: the option -B "source" is identical to -B "source:source". Typically, this short form is used for directories such as /lib64 or /etc, which are at the same location in the directory structure on Tetralith as well as in the Singularity image. Several bindings can be specified at once, separated by commas.
Complete example for OpenFOAM v1806: build the Singularity image and inspect it in an interactive session:
interactive -n1
mkdir <your_proj_directory>/OPENFOAM_1806_SINGULARITY
cd <your_proj_directory>/OPENFOAM_1806_SINGULARITY
singularity build openfoam1806.sif docker://openfoamplus/of_v1806_centos73
singularity shell openfoam1806.sif
source /opt/OpenFOAM/OpenFOAM-v1806/etc/bashrc
gcc -v # Output: gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
which mpirun # Output: /opt/OpenFOAM/ThirdParty-v1806/platforms/linux64Gcc/openmpi-1.10.4/bin/mpirun
# copy damBreak tutorial example to Tetralith/Sigma
cp -R $FOAM_TUTORIALS/multiphase/interFoam/laminar/damBreak/ <your_proj_directory>/OPENFOAM_1806_SINGULARITY
exit #exit from Singularity shell
exit #exit from interactive session
From the interactive Singularity shell above, we find the following information about the image: the bashrc file is located at /opt/OpenFOAM/OpenFOAM-v1806/etc/bashrc, the compiler is GCC 4.8.5, and the MPI installation is Open MPI 1.10.4, located under /opt/OpenFOAM/ThirdParty-v1806/platforms/linux64Gcc/openmpi-1.10.4. This information is used in the run example below.
# In case of an interactive session:
interactive -n4
# In case of a slurm batch script:
#!/bin/bash
#
#SBATCH -n 4
#SBATCH -t 00:20:00
#SBATCH -J jobname
#SBATCH -A SNIC-xxx-yyy
# --- END OF SLURM BATCH SCRIPT HEADER
# change to the directory that contains the damBreak case directory
cd <your_proj_directory>/OPENFOAM_1806_SINGULARITY
# full path to Singularity image
FOAM_SINGULARITY_IMAGE=<your_proj_directory>/OPENFOAM_1806_SINGULARITY/openfoam1806.sif
# Location of the bashrc file within the Singularity container
FOAM_BASHRC=/opt/OpenFOAM/OpenFOAM-v1806/etc/bashrc
# mpirun command on Tetralith/Sigma, according to correct MPI version
FOAM_MPI_RUN=/software/sse/manual/openmpi/1.10.4/g48/nsc1/bin/mpirun
# Set the openmpi directory on Tetralith and in the Singularity container
OMPI_DIR_TETRALITH=/software/sse/manual/openmpi/1.10.4/g48/nsc1
OMPI_DIR_CONTAINER=/opt/OpenFOAM/ThirdParty-v1806/platforms/linux64Gcc/openmpi-1.10.4
# Besides the Open MPI directory, we also have to bind /lib64 and /etc
BIND_DIRS="$OMPI_DIR_TETRALITH:$OMPI_DIR_CONTAINER,/lib64,/etc"
# execute blockMesh (serial)
singularity exec $FOAM_SINGULARITY_IMAGE bash -c "source $FOAM_BASHRC && blockMesh -case damBreak"
# execute setFields (serial)
singularity exec $FOAM_SINGULARITY_IMAGE bash -c "source $FOAM_BASHRC && setFields -case damBreak"
# execute decomposePar (serial)
singularity exec $FOAM_SINGULARITY_IMAGE bash -c "source $FOAM_BASHRC && decomposePar -case damBreak"
# execute interFoam in parallel. Use the binding option -B.
$FOAM_MPI_RUN singularity exec -B $BIND_DIRS $FOAM_SINGULARITY_IMAGE bash -c "source $FOAM_BASHRC && interFoam -parallel -case damBreak &> result.out"