Job Settings
Mail Type
Specify when Slurm sends notification e-mails for the job (--mail-type=, e.g. NONE, BEGIN, END, FAIL, ALL).

Memory

--mem=<size[units]>
Specify the memory required per node. Default units are megabytes.
--mem-per-cpu=<size[units]>
Specify the memory required per allocated CPU. Default units are megabytes.
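A minimal sketch of the two alternatives (the sizes are placeholders; Slurm treats --mem and --mem-per-cpu as mutually exclusive, so use only one of them per job):

#SBATCH --mem=64G           # request 64 GiB per node for the whole job
#SBATCH --mem-per-cpu=2G    # or: request 2 GiB for each allocated CPU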

Job Duration
Requested walltime for the job (--time=, format d-hh:mm:ss); see the per-partition limits below.


Nodes and Parallelization Paradigms

M2_GPU: max. 6 GPUs per node
Deeplearning: max. 8 GPUs per node

Partitions

Max. walltime for the selected partition: (d-hh:mm:ss)
M2_GPU and Deeplearning: for GPU jobs, only the Broadwell CPU architecture is available (MOGON Docs).
GPUs are only usable if your project has requested them.
Billing Weights (M2_GPU): CPU=1.5*Num Mem=0.25*GB GPU=6*Num
Billing Weights (Deeplearning): CPU=1.5*Num Mem=0.25*GB GPU=10*Num
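A hedged sketch of a GPU request (the lower-case partition and constraint names are assumptions based on the labels above; check the MOGON docs for the exact spelling):

#SBATCH --partition=m2_gpu        # or: deeplearning
#SBATCH --gres=gpu:2              # up to 6 GPUs (M2_GPU) / 8 GPUs (Deeplearning) per node
#SBATCH --constraint=broadwell    # GPU jobs only run on the Broadwell CPU architecture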
SMP is only available when a single node is chosen (MOGON Docs).
SMP is the common queue for most users and has the lowest overall billing.
Billing Weights: CPU=1.0*Num Mem=0.25*GB
Devel is only available for jobs using at most 320 CPUs and at most 128 GB RAM.
Skylake CPUs are only available with up to 96 GB RAM (MOGON Docs).
Devel is a high-priority queue; you are billed at a higher rate in exchange for the higher priority.
Billing Weights: CPU=2.0*Num Mem=0.5*GB
Parallel is an exclusive queue: you pay for all the resources of your allocated nodes,
even if you do not use them (MOGON Docs).
For more than 576 nodes, only Broadwell is available.
Billing Weights: Skylake: CPU=64 Mem=0.25*96/192; Broadwell: CPU=40 Mem=0.25*64/128/256
Longtime is a special queue for jobs that exceed the 5-day walltime limit of the other CPU partitions.
If your job requests less than 5 days of walltime, Slurm will not accept the script or schedule the job (MOGON Docs).
Billing Weights: CPU=2.0*Num Mem=1.0*GB
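For illustration, a request that meets the Longtime minimum (the lower-case partition name is an assumption):

#SBATCH --partition=longtime
#SBATCH --time=7-00:00:00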
Bigmem is a high-memory queue for jobs that exceed the 256 GB memory limit of the standard nodes.
If your job needs more than 1 TB of RAM, only Skylake is available (MOGON Docs).
Billing Weights: CPU=2.0*Num Mem=1.0*GB
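As a rough worked example of how these weights translate into billing (assuming the weighted resources are summed, which is the usual Slurm TRESBillingWeights setup; the MOGON docs have the authoritative policy): a job using 40 CPUs and 100 GB of memory accrues 40*1.0 + 100*0.25 = 65 billing units on SMP, but 40*2.0 + 100*0.5 = 130 on Devel, with the total scaled by the job's runtime.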
#!/bin/bash
#========[ + + + + MOGON Script Engine v1.24.10 + + + + ]========#
#
#  Documentation:  https://docs.hpc.uni-mainz.de
#   Chat Support:  https://mattermost.gitlab.rlp.net/hpc-support
# Ticket Support:  hpc@uni-mainz.de

#========[ + + + + Job Information + + + + ]========#
#SBATCH --mail-user=
#SBATCH --account=
#SBATCH --mail-type=
#SBATCH --job-name=
#SBATCH --comment=
#SBATCH --output=stdout_%x_%j.out
#SBATCH --error=stderr_%x_%j.err

#========[ + + + + Job Description + + + + ]========#
#SBATCH --partition=
#SBATCH --constraint=
#SBATCH --gres=gpu:
#SBATCH --time=
#SBATCH --signal=B:SIGUSR2@600
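# Sends SIGUSR2 to the batch shell 600 s before the walltime limit;
# the trap in the Localscratch section below uses it to save results in time.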
#SBATCH --ramdisk=M
#SBATCH --mem=
#SBATCH --mem-per-cpu=
#SBATCH --nodes=
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=
#SBATCH --array=1-
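# For array jobs, $SLURM_ARRAY_TASK_ID identifies the current task inside the script.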
export OMP_NUM_THREADS=
export MKL_NUM_THREADS=
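# Both thread counts are usually set to the --cpus-per-task value ($SLURM_CPUS_PER_TASK).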

#========[ + + + + Localscratch & Ramdisk + + + + ]========#
SAVEDPWD=$(pwd)                        # submit directory on the parallel file system
JOBDIR=/localscratch/$SLURM_JOB_ID     # node-local scratch created for this job
RAMDISK=$JOBDIR/ramdisk                # ramdisk requested via --ramdisk above

# Copy results back to the submit directory. This runs either when the job
# finishes normally or when Slurm sends SIGUSR2 shortly before the walltime
# limit (see --signal above).
cleanup(){
    cp ${JOBDIR}/output_file ${SAVEDPWD}/ &
    cp ${JOBDIR}/restart_file ${SAVEDPWD}/ &
    wait
    exit 0
}
trap 'cleanup' SIGUSR2

# Stage the input into the node-local scratch and run from there.
cp ${SAVEDPWD}/input_file ${JOBDIR}/
cp ${SAVEDPWD}/restart_file ${JOBDIR}/
cd ${JOBDIR}

# Run the program in the background and wait, so that the shell can handle
# SIGUSR2 while the program is still running.
${SAVEDPWD}/my_program &
wait
cleanup
######
# cp <file in parallel file system> $RAMDISK/.
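# A minimal ramdisk sketch (file names and options are placeholders): stage data
# into the ramdisk, point the program at it, and copy results back before the
# job ends, since the ramdisk is removed together with the job:
#   cp ${SAVEDPWD}/lookup_table.dat $RAMDISK/
#   ${SAVEDPWD}/my_program --data $RAMDISK/lookup_table.dat
#   cp $RAMDISK/results.dat ${SAVEDPWD}/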

#========[ + + + + Modules + + + + ]========#
module purge
module load

#========[ + + + + Execution + + + + ]========#
srun --hint=nomultithread
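# Example invocation (the program name is a placeholder):
#   srun --hint=nomultithread ./my_program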
Get your share: "sshare -A <account_name>" (MOGON Docs)

Total Resource Consumption
Total CPUs:
Total GPUs:
Total Memory (MB):
Total CPU hours (h):
Max. energy consumption for the job (upper bound):
Billing: cost charged against your share: