Memory Limits
When submitting a job to Slurm, it’s essential to set an appropriate memory limit to ensure that your job has enough resources to run efficiently. By default, Slurm sets a relatively small memory limit, which depends on the partition; the default values are listed in the tables below.
Default Memory Size
MOGON NHR
Partition | Memory [MB] | Default applies |
---|---|---|
smallcpu | $1930$ | per CPU |
parallel | $248000$ | per Node |
longtime | $1930$ | per CPU |
largemem | $7930$ | per CPU |
hugemem | $15560$ | per CPU |
a40 | $7930$ | per CPU |
a100dl | $7930$ | per CPU |
a100ai | $15560$ | per CPU |
topml | $2000$ | per CPU |
komet | $1930$ | per CPU |
czlab | $7930$ | per CPU |
MOGON II
Partition | Memory [MB] | Default applies |
---|---|---|
smp | $300$ | per CPU |
parallel | $57000$ | per Node |
devel | $88500$ | per Node |
longtime | $300$ | per CPU |
m2_gpu | $300$ | per CPU |
m2_gpu_compile | $10000$ | per Node |
deeplearning | $242500$ | per Node |
himster2_th | $88500$ | per Node |
himster2_exp | $1350$ | per CPU |
himster2_interactive | $88500$ | per Node |
To request a larger memory limit for your job, add the `--mem` option to your job submission script to specify the real memory required per node. Default units are megabytes; different units can be specified using the suffix `[K|M|G|T]`. A memory size specification of zero (`--mem=0`) is treated as a special case and grants the job access to all of the memory on each node.
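As a minimal sketch, a submission script requesting 10 GB per node could look like this (the job name, partition, walltime and executable are placeholders, not recommendations):

```bash
#!/bin/bash
#SBATCH --job-name=mem_demo       # illustrative name
#SBATCH --partition=smallcpu      # placeholder partition, pick one from the tables above
#SBATCH --ntasks=1
#SBATCH --time=00:30:00
#SBATCH --mem=10G                 # request 10 GB of real memory per node

srun ./my_program                 # placeholder executable
```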
Did you know?
Jobs which exceed their per-node memory limit are killed automatically by the batch system.
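If one of your jobs was killed this way, you can review its requested versus actual peak memory afterwards with Slurm’s accounting command `sacct`, for example (replace `<jobid>` with your job’s ID; the chosen fields are just one possible selection):

```bash
# Requested memory, peak resident set size per step, final state and runtime
sacct -j <jobid> --format=JobID,ReqMem,MaxRSS,State,Elapsed
```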
Other Memory Options
Option | Description |
---|---|
--mem-per-cpu=<size>[units] | Minimum memory required per usable allocated CPU |
--mem-per-gpu=<size>[units] | Minimum memory required per allocated GPU |
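For illustration, these options can be used as sketched below; the partition name, CPU/GPU counts and memory sizes are examples only, not recommendations:

```bash
# 4 CPUs with 2 GB of memory per CPU (8 GB in total)
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=2G ./my_program

# 2 GPUs with 16 GB of host memory per GPU
sbatch --partition=a100dl --gres=gpu:2 --mem-per-gpu=16G job.sh
```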
Available RAM at runtime
The technical specification for RAM on our nodes is slightly different from the memory that is effectively available: a small part is always reserved for the operating system, the parallel file system, the scheduler, etc. The tables below therefore list the memory limits that are actually relevant for a job, for example when specifying the `--mem` option.
Binary prefixes are often indicated as Ki, Mi, Gi, … (kibi, mebi, gibi) to distinguish them from their decimal counterparts (kilo, mega, giga). That is not the case for Slurm, though: Slurm uses the decimal prefix letters, but always refers to units based on powers of 2, so 1 kB corresponds to 1024 bytes and, for instance, `--mem=2G` grants $2 \times 1024 = 2048$ MB.
To be consistent with Slurm’s documentation, we also stick to the standard SI prefixes despite the ambiguity.
You can use the Slurm command `sinfo` to query all these limits. For example:
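A possible invocation is sketched below; the exact format string is our assumption and can be adjusted as needed:

```bash
# Partition, node counts (allocated/idle/other/total), CPU layout (sockets:cores:threads),
# memory per node in MB, walltime limit, and feature constraints
sinfo --format="%15P %.15F %.10z %.10m %.12l %20f"
```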
The output returns a list of our partitions and

- information on their nodes (`allocated/idle/other/total`)
- CPU specs of these nodes (`sockets:cores:threads`)
- size of real memory in megabytes
- walltime limits for job requests
- and feature constraints.
MOGON NHR
At the time of writing, for example, the output on MOGON NHR looks like this:
Memory [MB] | Number of Nodes |
---|---|
$248.000$ | 432 |
$504.000$ | 176 |
$1.016.000$ | 28 |
$1.992.000$ | 4 |
Memory [MB] | Number of Nodes |
---|---|
$1.016.000$ | 20 |
$1.992.000$ | 4 |
MOGON II
At the time of writing, the output on MOGON II looks like this:
Memory [MB] | Number of Nodes | Type |
---|---|---|
$\space57.000$ | 584 | broadwell |
$\space88.500$ | 576 | skylake |
$120.000$ | 120 | broadwell |
$177.000$ | 120 | skylake |
$246.000$ | 40 | broadwell |
Memory [MB] | Number of Nodes | Type |
---|---|---|
$\space\space354.000$ | 32 | skylake |
$\space\space498.000$ | 20 | broadwell |
$1.002.000$ | 2 | broadwell |
$1.516.000$ | 2 | skylake |