MATLAB Installations on Tetralith & Sigma

Licensing

Academic users are allowed to use licensed programs, provided such access and use is solely for the purpose of academic coursework and teaching, noncommercial academic research, and personal use, and not for any commercial or other organizational use.

If you work at a non-academic organization, or need toolboxes not included in the Linköping University license but have your own license, please contact support@nsc.liu.se and we will help you find out if you can use MATLAB at NSC using that license.

Available versions and modules

Log in to Tetralith or Sigma and check the module system to list installed versions:

[pemun@tetralith1 ~]$ module avail MATLAB

---------------------------------------- /software/sse/modules ----------------------------------------
   MATLAB/recommendation (D)    MATLAB/2023a-bdist    MATLAB/2023b-bdist

  Where:
   D:  Default Module

Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".

Use the MATLAB command "ver" at a MATLAB prompt on Tetralith or Sigma to list the installed toolboxes for the MATLAB version in use.

>> ver
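
To check for a particular toolbox programmatically, the MATLAB "license" function can be used. The feature name below, 'Distrib_Computing_Toolbox', is one example; look up the feature name of the toolbox you are interested in:

```matlab
>> % Returns 1 if a license for the feature can be checked out, 0 otherwise
>> license('test', 'Distrib_Computing_Toolbox')
```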

How to use MATLAB

MATLAB can be used either for interactive work or in batch jobs submitted to the batch queue system Slurm. Interactive work requiring limited resources, computationally and memory-wise, can be performed on the login nodes. More demanding interactive work should be performed within an interactive job on a compute node. The preferred way to start batch jobs is from within an interactive MATLAB session using MDCS. The old method of starting a number of independent MATLAB jobs from within a Slurm batch job still works.

Interactive use

With the MATLAB graphical user interface

The recommended way to use MATLAB with its graphical user interface (GUI) for non-demanding work (in terms of computation and memory) is to log in to Tetralith or Sigma with ThinLinc, load an appropriate MATLAB module and start MATLAB with vglrun for hardware-accelerated OpenGL support. For more information on how to use ThinLinc, please see the page Running graphical applications.

[pemun@tetralith1 ~]$ module load MATLAB/2023b-bdist
[pemun@tetralith1 ~]$ vglrun matlab -nosoftwareopengl

Please only use this method for non-demanding work, as the login nodes are shared among users. MATLAB will run on a single computational thread on login nodes in order not to disturb other users too much.
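
As a quick sanity check, the thread limit can be inspected from within MATLAB with "maxNumCompThreads", which returns the current number of computational threads:

```matlab
>> % On a login node this is expected to return 1
>> maxNumCompThreads
```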

Use the "interactive" command to request a compute node for more demanding interactive usage. Unfortunately, graphics will be slower, as there is no hardware-accelerated OpenGL on regular compute nodes.

[pemun@tetralith1 ~]$ interactive -N1 --exclusive -t 4:00:00
salloc: Granted job allocation 107651
srun: Step created for job 107651
[pemun@n192 ~]$ module load MATLAB/2023b-bdist
[pemun@n192 ~]$ matlab -softwareopengl

Tetralith only: Log in with ThinLinc and use the "interactive.vgl" command to request a compute node equipped with one NVIDIA Tesla T4 GPU, for more demanding interactive use that utilizes the T4 GPU or has demanding graphics needs.

[pemun@tetralith1 ~]$ interactive.vgl -t 4:00:00
Enabling VirtualGL mode.
Adding --exclusive option. Note: your project will be charged for full nodes!
Adding --constraint=virtualgl to enable VirtualGL.
Adding --gres=gpu to allocate GPU to job.
Allocating one GPU for the interactive shell to allow accelerated graphics. Note: GPU will not be available from e.g job steps launched by srun
Remember to use "vglrun <application>" to enable accelerated graphics for <application>.
salloc: Granted job allocation 10460551
srun: Step created for job 10460551
[pemun@n1127 ~]$ module load MATLAB/2023b-bdist
[pemun@n1127 ~]$ vglrun matlab -nosoftwareopengl

Command line, without the graphical user interface

For non-demanding work (in terms of computation and memory), log in to Tetralith or Sigma with "ssh", load an appropriate MATLAB module and start MATLAB.

[pemun@tetralith1 ~]$ module load MATLAB/2023b-bdist
[pemun@tetralith1 ~]$ matlab -nodesktop -nosplash -softwareopengl

Please only use this method for non-demanding work, as the login nodes are shared among users. MATLAB will run on a single computational thread on login nodes in order not to disturb other users too much.

Use the "interactive" command to request a compute node for more demanding interactive use.

[pemun@tetralith1 ~]$ interactive -N1 --exclusive -t 4:00:00
salloc: Granted job allocation 107651
srun: Step created for job 107651
[pemun@n1142 ~]$ module load MATLAB/2023b-bdist
[pemun@n1142 ~]$ matlab -nodesktop -nosplash -softwareopengl

Submitting batch jobs from an interactive MATLAB session

Before submitting your first job

Information and output from jobs using MATLAB Distributed Computing Server (MDCS) are by default stored in one of the directories ${HOME}/.matlab/generic_cluster_jobs or ${HOME}/.matlab/local_cluster_jobs. The amount of data stored can become rather large, so it is advisable to create directories under project storage and use symbolic links before the first use:

[pemun@tetralith1 ~]$ mkdir /proj/<your_project_directory>/users/${USER}/generic_cluster_jobs
[pemun@tetralith1 ~]$ ln -s /proj/<your_project_directory>/users/${USER}/generic_cluster_jobs ${HOME}/.matlab
[pemun@tetralith1 ~]$ mkdir /proj/<your_project_directory>/users/${USER}/local_cluster_jobs
[pemun@tetralith1 ~]$ ln -s /proj/<your_project_directory>/users/${USER}/local_cluster_jobs ${HOME}/.matlab

Or, if you already have ${HOME}/.matlab/generic_cluster_jobs and ${HOME}/.matlab/local_cluster_jobs directories and would like to keep their content:

[pemun@tetralith1 ~]$ mv ${HOME}/.matlab/generic_cluster_jobs /proj/<your_project_directory>/users/${USER}/
[pemun@tetralith1 ~]$ ln -s /proj/<your_project_directory>/users/${USER}/generic_cluster_jobs ${HOME}/.matlab
[pemun@tetralith1 ~]$ mv ${HOME}/.matlab/local_cluster_jobs /proj/<your_project_directory>/users/${USER}/
[pemun@tetralith1 ~]$ ln -s /proj/<your_project_directory>/users/${USER}/local_cluster_jobs ${HOME}/.matlab

Note: The default location for the EL7 installs of MATLAB was under ${HOME}/MdcsDataLocation. Those jobs can be reached by setting the JobStorageLocation property of the cluster object in MATLAB, and by making the same change in the Cluster Profile Manager under Processes.

>> c = parcluster;
>> c.JobStorageLocation = 'path to the old storage directory'

To use MDCS to submit jobs to the batch system from within MATLAB, a cluster profile has to be configured for the cluster. This has to be done once for each cluster.

Start MATLAB as described above. Use ThinLinc and the GUI if the MATLAB session will only be used to set up, submit and monitor batch jobs. Then configure the cluster:

>> configCluster               
    [1] sigma
    [2] tetralith
Select a cluster [1-2]: 2

                Must set AccountName and WallTime before submitting jobs to TETRALITH.  E.g.

                >> c = parcluster('tetralith');
                >> c.AdditionalProperties.AccountName = 'account-name';
                >> % 5 hour walltime
                >> c.AdditionalProperties.WallTime = '05:00:00';
                >> c.saveProfile

>>

Create a default cluster object with a handle:

>> c = parcluster;

As indicated above, an account name (the Slurm project account) has to be specified:

>> c.AdditionalProperties.AccountName = 'snicYYYY-X-N';

and a maximum wall time for the job to be submitted (for example, one hour):

>> c.AdditionalProperties.WallTime = '01:00:00';

It is also possible to specify additional submit arguments, for example to use the "devel" reservation for short test jobs:

>> c.AdditionalProperties.Reservation = 'devel';

To view the current value of all additional submit arguments:

>> c.AdditionalProperties

ans = 

  AdditionalProperties with properties:

     AdditionalSubmitArgs: ''
    DebugMessagesTurnedOn: 0
             EmailAddress: ''
             ProcsPerNode: 0
              AccountName: 'snicYYYY-X-N'
                QueueName: ''
              Reservation: 'devel'
          UseIdentityFile: 1
                 WallTime: '01:00:00'
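
The other properties shown in the listing above can be set in the same way. For example, to receive an e-mail about the job (the address below is a placeholder; use your own):

```matlab
>> % Placeholder address for illustration
>> c.AdditionalProperties.EmailAddress = 'user@example.com';
```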

To save this as your new default profile for the cluster:

>> c.saveProfile

To view all your profiles:

>> parallel.clusterProfiles

ans =

  1x2 cell array

    {'local'}    {'tetralith'}

Note: Profile names include the version in the R2018a and R2018b releases of MATLAB.

Once you have defined your cluster profiles you can select one directly. For example:

>> c = parcluster('tetralith');

or to select your default profile:

>> c = parcluster;

Serial batch jobs

To run a serial job on the cluster you have to create a handle to a cluster profile and then use the batch command. For example, to generate a 2-by-5 random matrix:

>> % j = batch(c, @myfnc, N, {x1, x2, ...});
>> % j is a handle to the job
>> % c is a handle to the cluster
>> % myfnc is a serial MATLAB program or function
>> % N is the number of output arguments from myfnc
>> % {x1, x2, ...} are input arguments to myfnc
>>
>> c = parcluster;
>> j = batch(c, @rand, 1, {2, 5}); 

additionalSubmitArgs =

    '--ntasks=1 -A nsc -t 00:10:00 --reservation=now'

>> wait(j)   % Wait for the job to finish
>> diary(j)  % Display the diary

--- Start Diary ---
--- End Diary ---

>> j.State   % Check the state of the job

ans =

    'finished'

>> r = fetchOutputs(j); % Get results into a cell array
>> r{1}                 % Display result

ans =

    0.3246    0.0084    0.3453    0.6588    0.8268
    0.6618    0.0048    0.4488    0.4859    0.2184

>> delete(j)  % Delete the job when you do not need it any more

You do not need to keep MATLAB running or stay logged in once you have submitted your job. You can quit MATLAB, log out, and later log in again, start MATLAB and retrieve non-deleted jobs:

>> c = parcluster;              % Get a handle to the cluster
>> jobs = c.Jobs;               % Get a list of all jobs
>> r = fetchOutputs(jobs(2));   % Get output from the second job
>> delete(jobs(2));             % Delete the job when not needed any more

Parallel batch jobs

To run a parallel job on the cluster you have to create a handle to a cluster profile and then use the batch command. The difference compared to a serial job is that a MATLAB Pool of workers, in addition to the worker running the batch job itself, also has to be specified and created. The default is a Pool of size 0, that is, a Pool without workers, which causes the script or function to run only on the worker running the batch job.

Consider for example a function, pfunction.m:

function [A, t] = pfunction(iter) 
t0 = tic; 

parfor i = 1:iter 
   A(i) = rand;  
end 

t = toc(t0);

end

To evaluate the function using a MATLAB Pool of 10 workers (this will require and use 10+1 = 11 cores):

>> c = parcluster;
>> j = batch(c, @pfunction, 2, {524288}, 'Pool', 10);

additionalSubmitArgs =

    '--ntasks=11 -A nsc -t 00:10:00 --reservation=now'

>> wait(j)   % Wait for the job to finish
>> diary(j)  % Display the diary

--- Start Diary ---
--- End Diary ---

>> j.State   % Check the state of the job

ans =

    'finished'

>> r = fetchOutputs(j); % Get results into a cell array
>> size(r{1})           % Check the results

ans =

           1      524288

>> r{2}      

ans =

    0.4133

>> delete(j)  % Delete the job when you do not need it any more

Instead of a function, the same example can be implemented as a script in a file pscript.m:

t0 = tic; 

parfor i = 1:iter 
   A(i) = rand;  
end 

t = toc(t0);

To evaluate the script using a MATLAB Pool of 10 workers (this will require and use 10+1 = 11 cores):

>> c = parcluster;
>> iter = 524288;
>> 
>> j = batch(c, 'pscript', 'Pool', 10);

additionalSubmitArgs =

    '--ntasks=11 -A nsc -t 00:10:00 --reservation=now'

>> wait(j)   % Wait for the job to finish
>> diary(j)  % Display the diary

--- Start Diary ---
--- End Diary ---

>> j.State   % Check the state of the job

ans =

    'finished'

>> load(j, 'A', 't'); % Load job workspace variables 'A' and 't' into client workspace
>> load(j);           % Load the entire job workspace into client workspace

>> delete(j)  % Delete the job when you do not need it any more

Cancel a submitted job that is queued or running

Use the MATLAB command "cancel" to cancel queued or running jobs.

>> cancel(j)       % Cancel job 'j'

or

>> cancel(c.Jobs)  % Cancel all jobs on cluster 'c'.

Submitting MATLAB batch jobs directly from the Unix command line

Assume that you have a MATLAB function saved in the file parallel.m that you would like to run many times with different inputs (in the following example the inputs 1, 2, 3, ..., 40 will be used).

function S = parallel(x)
%
% Sum average of N random numbers

format long
s = RandStream.create('mt19937ar','seed',x);
N = 1e7;
R = rand(s, N, 1);
S = sum(R)/N;
filename = ['parallel_', num2str(x), '.out'];
fid=fopen(filename,'w');
fprintf(fid,'A small MATLAB example\n');
fprintf(fid,'Run with the input: %s\n', num2str(x));
fprintf(fid,'Sum average = %16.12f\n',S);
fclose(fid);

end

Create a batch script, job.sh:

#!/bin/bash
#
# Use 20 cores, for 10 minutes
#SBATCH -n 20         # Use 20 cores
#SBATCH -t 00:10:00   # Maximum 10 minutes wall clock time

# Load the MATLAB module
module load MATLAB/2023b-bdist

# Note the flag "-singleCompThread". Without it each MATLAB
# instance started with srun below starts computational threads
# on all cores leading to decreased performance.
MATLAB='matlab -nodesktop -nodisplay -singleCompThread'

# The name of the MATLAB script (without the trailing .m)
job=parallel

# Note the explicit "exit" to exit MATLAB and the "&" at
# the end of the second line. 
for i in $(seq 1 40);do
  srun -n1 --exact ${MATLAB} -r "${job}(${i});exit" > ${job}_${i}.log &
done
wait # needed to prevent the script from exiting before all MATLAB tasks are done

# End of script

Submit the batch job:

[pemun@tetralith1 ~]$ sbatch job.sh

You can monitor the progress as for any other batch job:

[pemun@tetralith1 ~]$ squeue -u ${USER}
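
Once the job has finished it no longer appears in the squeue listing. Accounting information for finished jobs can instead be retrieved with Slurm's "sacct" command; the job ID below is just an example:

```shell
[pemun@tetralith1 ~]$ sacct -j 107651 --format=JobID,JobName,State,Elapsed
```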

Documentation at MathWorks

Below follow links to a small fraction of all available documentation on the MathWorks site.

General MATLAB documentation

Documentation regarding MATLAB Parallel Computing Toolbox

Documentation regarding MATLAB Distributed Computing Server

