Memory Limits

When submitting a job to Slurm, it’s essential to set an appropriate memory limit to ensure that your job has enough resources to run efficiently. By default, Slurm sets a relatively small memory limit, which depends on the partition and can be found in a table in the next section.

Unit prefix ambiguity

Binary prefixes are often indicated as Ki, Mi, Gi, … (kibi, mebi, gibi) to distinguish them from their decimal counterparts k, M, G (kilo, mega, giga). That is not the case for Slurm, though: Slurm uses the decimal prefixes, but always refers to units based on powers of 2 (so 1 kB corresponds to 1024 bytes).

To be consistent with Slurm’s documentation, we also stick to the standard SI prefixes despite the ambiguity.
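
For example, under this convention a request of

#SBATCH --mem=1G

is interpreted as 1024 MB (2^30 bytes), not 1000 MB.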

Default Memory Size

MOGON NHR

Partition    Memory [MB]
smallcpu     per CPU
parallel     per Node
longtime     per CPU
largemem     per CPU
hugemem      per CPU
a40          per CPU
a100dl       per CPU
a100ai       per CPU
topml        per CPU
komet        per CPU
czlab        per CPU

MOGON II

Partition               Memory [MB]
smp                     per CPU
parallel                per Node
devel                   per Node
longtime                per CPU
m2_gpu                  per CPU
m2_gpu_compile          per Node
deeplearning            per Node
himster2_th             per Node
himster2_exp            per CPU
himster2_interactive    per Node

To request a larger memory limit for your job, you can add the --mem option to your job submission script:

#SBATCH --mem=<size>[units]

to specify the real memory required per node. Default units are megabytes. Different units can be specified using the suffix [K|M|G|T]. A memory size specification of zero (--mem=0) is treated as a special case and grants the job access to all of the memory on each node.
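
As a sketch, a job script requesting 10 GB of memory per node could look like this (job name, partition and program are placeholders to adapt):

#!/bin/bash
#SBATCH --job-name=memtest      # placeholder job name
#SBATCH --partition=parallel    # pick a partition from the tables below
#SBATCH --nodes=1
#SBATCH --mem=10G               # 10 GB of real memory per node

srun ./my_program               # placeholder for your application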

Did you know?

Jobs which exceed their per-node memory limit are killed automatically by the batch system.
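
To see how much memory a finished job actually used compared to what was requested, you can query the accounting database with sacct, for example:

sacct -j <jobid> --format=JobID,JobName,ReqMem,MaxRSS,State

Here MaxRSS shows the peak resident memory of each job step and ReqMem the requested amount.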

Other Memory Options

Command                        Comment
--mem-per-cpu=<size>[units]    Minimum memory required per usable allocated CPU
--mem-per-gpu=<size>[units]    Minimum memory required per allocated GPU
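
These options are mutually exclusive with --mem and with each other. A sketch of a multi-task job using per-CPU memory (task count and program are illustrative):

#SBATCH --ntasks=8
#SBATCH --mem-per-cpu=2G    # 8 tasks x 2 GB = 16 GB in total

srun ./my_program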

Available RAM at runtime

The technical RAM specification of our nodes differs slightly from the memory that is effectively available: a small part is always reserved for the operating system, the parallel file system, the scheduler, etc. The memory limits that are actually relevant for a job – for example when specifying the --mem option – are therefore listed in the tables below.

You can use the Slurm command sinfo to query all these limits. For example:

sinfo -e -o "%20P %16F %8z %.8m %.11l %18f" -S "+P+m"

The output lists our partitions together with

  • information on their nodes (allocated/idle/other/total)
  • CPU specs of these nodes (sockets:cores:threads)
  • size of real memory in megabytes
  • walltime limits for job requests
  • and feature constraints.
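
To narrow the query down to a single partition, sinfo also accepts the -p option, for example:

sinfo -p parallel -e -o "%20P %16F %8z %.8m %.11l %18f"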

MOGON NHR

At the time of writing, the output on MOGON NHR looks like this:

PARTITION            NODES(A/I/O/T)   S:C:T      MEMORY   TIMELIMIT
a100ai               1/2/1/4          2:64:2    1992000  6-00:00:00
a100dl               1/8/2/11         2:64:1    1016000  6-00:00:00
a40                  1/6/0/7          2:64:1    1016000  6-00:00:00
czlab                0/1/0/1          2:64:1    1031828  6-00:00:00
hugemem              0/1/3/4          2:64:1    1992000  6-00:00:00
komet                355/43/34/432    2:64:1     248000  6-00:00:00
largemem             0/19/9/28        2:64:1    1016000  6-00:00:00
longtime             9/0/1/10         2:64:1     248000 12-00:00:00
longtime             10/0/0/10        2:64:1     504000 12-00:00:00
mi250                0/2/0/2          2:64:1    1016000  6-00:00:00
mod                  167/4/5/176      2:64:1     504000  6-00:00:00
parallel             355/43/34/432    2:64:1     248000  6-00:00:00
parallel             167/4/5/176      2:64:1     504000  6-00:00:00
quick                355/43/34/432    2:64:1     248000     8:00:00
smallcpu             355/43/34/432    2:64:1     248000  6-00:00:00
topml                0/1/0/1          2:48:2    1547259  6-00:00:00

CPU nodes:

Memory [MB]    Number of Nodes
248000         432
504000         176
1016000        28
1992000        4

GPU nodes:

Memory [MB]    Number of Nodes
1016000        20
1992000        4
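
As a worked example based on the table above: a 248000 MB node has 2 × 64 = 128 cores, so a full-node job can request at most about 248000 MB / 128 ≈ 1937 MB per core:

#SBATCH --mem-per-cpu=1937    # ≈ 248000 MB / 128 cores on a standard MOGON NHR node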

MOGON II

At the time of writing, the output on MOGON II looks like this:

PARTITION            NODES(A/I/O/T)   S:C:T      MEMORY   TIMELIMIT AVAIL_FEATURES    
bigmem               8/5/19/32        2:16:2     354000  5-00:00:00 anyarch,skylake   
bigmem               0/8/12/20        2:10:2     498000  5-00:00:00 anyarch,broadwell 
bigmem               0/0/2/2          2:10:2    1002000  5-00:00:00 anyarch,broadwell 
bigmem               0/2/0/2          2:16:2    1516000  5-00:00:00 anyarch,skylake   
deeplearning         0/1/1/2          2:20:2     490000    18:00:00 anyarch,broadwell 
devel                438/14/140/592   2:10:2      57000     4:00:00 anyarch,broadwell 
devel                473/9/138/620    2:16:2      88500     4:00:00 anyarch,skylake   
devel                99/23/46/168     2:10:2     120000     4:00:00 anyarch,broadwell 
devel                60/16/44/120     2:16:2     177000     4:00:00 anyarch,skylake   
devel                20/5/15/40       2:10:2     246000     4:00:00 anyarch,broadwell 
devel                8/5/19/32        2:16:2     354000     4:00:00 anyarch,skylake   
himster2_exp         1/57/7/65        2:16:2      88500  5-00:00:00 anyarch,skylake   
himster2_interactive 1/1/0/2          2:16:2      88500  5-00:00:00 anyarch,skylake   
himster2_th          272/7/17/296     2:16:2      88500  5-00:00:00 anyarch,skylake   
kph_NT               0/14/2/16        2:10:2     246000  5-00:00:00 anyarch,broadwell 
m2_gpu               20/5/5/30        2:12:2     115500  5-00:00:00 anyarch,broadwell 
m2_gpu-compile       20/5/5/30        2:12:2     115500     1:00:00 anyarch,broadwell 
m2_gputest           2/0/0/2          2:12:2     115500  5-00:00:00 anyarch,broadwell 
parallel             438/14/140/592   2:10:2      57000  5-00:00:00 anyarch,broadwell 
parallel             473/9/138/620    2:16:2      88500  5-00:00:00 anyarch,skylake   
parallel             99/23/46/168     2:10:2     120000  5-00:00:00 anyarch,broadwell 
parallel             60/16/44/120     2:16:2     177000  5-00:00:00 anyarch,skylake   
parallel             20/5/15/40       2:10:2     246000  5-00:00:00 anyarch,broadwell 
parallel             8/5/19/32        2:16:2     354000  5-00:00:00 anyarch,skylake   
smp                  438/14/140/592   2:10:2      57000  5-00:00:00 anyarch,broadwell 
smp                  473/9/138/620    2:16:2      88500  5-00:00:00 anyarch,skylake   
smp                  99/23/46/168     2:10:2     120000  5-00:00:00 anyarch,broadwell 
smp                  60/16/44/120     2:16:2     177000  5-00:00:00 anyarch,skylake   
smp                  20/5/15/40       2:10:2     246000  5-00:00:00 anyarch,broadwell 
smp                  8/5/19/32        2:16:2     354000  5-00:00:00 anyarch,skylake 

Standard nodes:

Memory [MB]    Number of Nodes    Type
57000          584                broadwell
88500          576                skylake
120000         120                broadwell
177000         120                skylake
246000         40                 broadwell

bigmem nodes:

Memory [MB]    Number of Nodes    Type
354000         32                 skylake
498000         20                 broadwell
1002000        2                  broadwell
1516000        2                  skylake