Job Examples
How to use Slurm for submitting batch jobs to MOGON
Single core job
This script requests 2 CPUs (out of 40) on a Broadwell node on MOGON II.
#!/bin/bash
#========[ + + + + Requirements + + + + ]========#
#SBATCH --account=<mogon-project> # Specify allocation to charge against
#SBATCH --comment=""
#SBATCH --cpus-per-task=2
#SBATCH --job-name=mysimplejob
#SBATCH --mem=300M
#SBATCH --ntasks=1
#SBATCH --output=mysimplejob.%j.out
#SBATCH --partition=smp
#SBATCH --time=00:30:00 # Run time (hh:mm:ss) - 0.5 hours
#========[ + + + + Environment + + + + ]========#
# Load all necessary modules in the script to ensure a consistent environment.
module load gcc/6.3.0
#========[ + + + + Job Steps + + + + ]========#
# Launch the executable
srun --hint=nomultithread <myexecutable>
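Assuming the script above is saved as, for example, mysimplejob.slurm (the filename is arbitrary and only used here for illustration), it can be submitted and monitored with the standard Slurm tools:
# Submit the job script; Slurm prints the assigned job ID on success
sbatch mysimplejob.slurm
# List your own pending and running jobs
squeue -u $USER
# Inspect accounting information once the job has finished
sacct -j <jobid> --format=JobID,JobName,Elapsed,State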
Full node job - threaded application
The following script launches one task using 20 cores on a Broadwell node (please note that most applications do not scale that far). Since we reserve a whole node, we have access to the node's complete memory.
#!/bin/bash
#========[ + + + + Requirements + + + + ]========#
#SBATCH --account=<mogon-project> # Specify allocation to charge against
#SBATCH --comment=""
#SBATCH --cpus-per-task=20
#SBATCH --job-name=mysimplejob
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --output=mysimplejob.%j.out
#SBATCH --partition=parallel
#SBATCH --time=00:30:00
#========[ + + + + Environment + + + + ]========#
# Load all necessary modules in the script to ensure a consistent environment.
module load gcc/6.3.0
#========[ + + + + Job Steps + + + + ]========#
# Launch the executable with one task distributed on 20 cores:
srun <myexecutable>
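Many threaded applications (e.g. OpenMP programs) take their thread count from the OMP_NUM_THREADS environment variable. As a minimal sketch, assuming your executable honours OMP_NUM_THREADS, you can derive it from the allocation in the job steps section so the script cannot drift apart from the requested --cpus-per-task:
# Derive the thread count from the Slurm allocation (here: 20) instead of hard-coding it
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun <myexecutable>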
Full node job - MPI application
This script requests 40 MPI tasks on two Broadwell nodes. The job will have access to the nodes' complete memory.
#!/bin/bash
#========[ + + + + Requirements + + + + ]========#
#SBATCH --account=<mogon-project>
#SBATCH --comment=""
#SBATCH --job-name=mysimplejob
#SBATCH --nodes=2
#SBATCH --ntasks=40
#SBATCH --ntasks-per-node=20
#SBATCH --output=mysimplejob.%j.out
#SBATCH --partition=parallel
#SBATCH --time=00:30:00
#========[ + + + + Environment + + + + ]========#
module load <appropriate module(s)>
#========[ + + + + Job Steps + + + + ]========#
srun <myexecutable>
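If you are unsure how Slurm distributes the 40 tasks across the two nodes, a quick sanity check (a sketch, not part of the original example) is to run hostname as an extra job step before the actual executable and count the tasks per node:
# Optional sanity check: print how many of the 40 tasks land on each node
srun hostname | sort | uniq -c
srun <myexecutable>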
Hybrid MPI-OpenMP Job
Whereas pure MPI applications parallelize by spawning multiple processes that exchange messages, hybrid applications additionally thread each of these processes internally (e.g. by means of OpenMP threads).
For this example we assume you want to run GROMACS on 2 Skylake nodes (64 CPUs per node) with 32 MPI tasks, using 2 CPUs for OpenMP threads per MPI task. The job script could look like this:
#!/bin/bash
#========[ + + + + Requirements + + + + ]========#
#SBATCH --account=<mogon-project>
#SBATCH --comment=""
#SBATCH --constraint=skylake
#SBATCH --job-name=my-gromacs-job
#SBATCH --nodes=2
#SBATCH --output=my-gromacs-job.%j.out
#SBATCH --partition=parallel
#SBATCH --time=00:30:00
#========[ + + + + Environment + + + + ]========#
module load bio/GROMACS
#========[ + + + + Job Steps + + + + ]========#
srun --ntasks=32 --cpus-per-task=2 gmx_mpi mdrun -ntomp 2 -deffnm em
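The OpenMP thread count appears twice in this command (--cpus-per-task and -ntomp). As a small sketch, assuming the same module and input files as above, you can keep both in a single variable and also export OMP_NUM_THREADS so the values cannot get out of sync:
# Keep --cpus-per-task and -ntomp consistent via one variable (assumed value: 2)
OMP_THREADS=2
export OMP_NUM_THREADS=${OMP_THREADS}
srun --ntasks=32 --cpus-per-task=${OMP_THREADS} gmx_mpi mdrun -ntomp ${OMP_THREADS} -deffnm em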