Allinea/ARM-MAP

Allinea/ARM MAP is a profiler for C/C++, Fortran and Python. It helps you analyze where your code spends its CPU time, identify MPI communication, and assess memory usage and I/O performance. MAP aggregates results over all cores, which makes it easier to identify the major bottlenecks in your code.

Official homepage

For details, we refer to the official User Manual

Please contact NSC Support if you have any questions or problems.

Installations on NSC Systems

Tetralith and Sigma

Allinea/ARM-MAP is available via the module system on Tetralith and Sigma. For more information about available versions, please see the Tetralith and Sigma Software list

How to compile your code

1. Adding Debug information
To get the most benefit from the performance profiler, you should compile your code with debugging information by adding the compiler flag -g. Only then can you see exactly where time is being spent in your code. The flag -g typically does not slow down the executable, but it increases the size of the program because of the additional debugging symbols.

2. Optimization Flags
To get meaningful profiles with MAP, you should compile your program with your usual optimization flags, e.g. -O2. If you omit your standard optimization flags, you may end up trying to optimize parts of your code by hand that the compiler would already optimize for you.

3. Linking your program
For basic use of MAP, you do not need to link any extra library into your executable; MAP automatically loads its required libraries at runtime. Sometimes this automatic procedure causes problems. Particularly for parallel programs, you should read the section How to run MAP for Parallel Codes.

How to run

Load the MAP module corresponding to the version that you want to use, e.g.,

    module load arm-MAP/23.1.1

How to run MAP for Serial Codes

To interactively profile a serial code with MAP, start your application as follows:

    map ./your_executable

It will launch the graphical user interface to profile your application.

MAP can also be started in a non-interactive mode, which is used e.g. within a slurm batch script:

    map --profile ./your_executable

MAP writes a *.map file to disk that contains the profiling data. By default, the filename is autogenerated. To specify a filename, use the option -o <mapfile.map>:

    map --profile -o <mapfile.map> ./your_executable

You can view the map-file as follows:

    map <mapfile.map>

A map-file will also be created when using the interactive graphical user interface.
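Putting the serial non-interactive case together, a minimal slurm batch script might look as follows. This is a sketch: the job name, wall time, project account and executable name are placeholders that you need to adapt to your own job:

```shell
#!/bin/bash
#SBATCH -J map_profile        # job name (placeholder)
#SBATCH -n 1                  # one task for a serial code
#SBATCH -t 00:30:00           # wall time (adjust to your job)
#SBATCH -A your_project       # placeholder: your project account

module load arm-MAP/23.1.1

# Profile without the GUI and write the data to serial_run.map
map --profile -o serial_run.map ./your_executable
```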

How to run MAP for Parallel Codes

To profile a parallel code, MAP needs access to MPI. You should load the same parallel build environment that was used to compile your code. If you do not remember which build environment was used, or if it is unknown to you, you can check how the executable was built on Tetralith/Sigma as follows:

    dumptag ./your_executable

dumptag will tell you how the executable was built. For example, to see how the executable pimpleFoam from OpenFOAM was built (we omit loading the OpenFOAM module for brevity),

    dumptag pimpleFoam

gives the output

    NSC tags ---------------------------------
    Build date      190912
    Build time      164115
    Built with MPI  impi 2018_3_222__eb
    Linked with     intel 2018b__eb
    Tag version     6
    ------------------------------------------

Here we see that the build environment intel 2018b__eb was used. Accordingly, we load the following module in this case:

    module load buildenv-intel/2018b-eb

Now, we have loaded the required modules, e.g.

    module load arm-MAP/23.1.1
    module load buildenv-intel/2018b-eb

How to start MAP for parallel codes depends on the MPI implementation that you are using. We distinguish between Intel MPI and Open MPI. For Intel MPI, you have to use the options --mpi=intel-mpi -n <tasks>; in this case, you have to specify the number of parallel MPI tasks yourself. For Open MPI, you simply prefix the parallel application with mpirun; MAP will then pick up the number of parallel tasks automatically from the SLURM environment.

Interactive mode, with GUI

Intel MPI

    map --mpi=intel-mpi -n <tasks> your_executable

Open MPI

    map mpirun your_executable

Non-interactive mode, without GUI

In a slurm batch script you always have to use the option --profile, so that the GUI is not started. To specify a filename for the profiler data, use the option -o <mapfile.map>. If you do not provide a filename, MAP will autogenerate one.

Intel MPI

    map --mpi=intel-mpi --profile -o <mapfile.map> -n <tasks> your_executable

Open MPI

    map mpirun --profile -o <mapfile.map> your_executable
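Combining the steps above, a slurm batch script for non-interactive profiling of an Intel MPI code might look as follows. This is a sketch: the job name, task count, wall time, project account and executable name are placeholders, and the module versions follow the examples earlier on this page:

```shell
#!/bin/bash
#SBATCH -J map_profile_mpi    # job name (placeholder)
#SBATCH -n 32                 # number of MPI tasks (example value)
#SBATCH -t 01:00:00           # wall time (adjust to your job)
#SBATCH -A your_project       # placeholder: your project account

module load arm-MAP/23.1.1
module load buildenv-intel/2018b-eb   # build environment used for the executable

# Intel MPI: pass the task count to MAP explicitly with -n
map --mpi=intel-mpi --profile -o parallel_run.map -n 32 ./your_executable
```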

Common MAP Options

The following options are commonly used with MAP:

    Option                  Description
    --profile               Run without user interaction and write profiler data to a file
    -o <mapfile.map>        Write profiler data to <mapfile.map> rather than autogenerating the filename
    -n <tasks>              Number of parallel MPI tasks
    --mpi=<implementation>  Use a specific MPI implementation
    --list-mpis             List the available MPI implementations
    --nompi                 Run without MPI support
    --log=<filename>        Write a log to <filename>
    -h, --help              Display help and exit
