GROMACS

GROMACS is a molecular dynamics package mainly designed for simulations of proteins, lipids, and nucleic acids. It is a free and open-source software suite for high-performance molecular dynamics and output analysis.

Available Modules

Currently we have the following GROMACS modules available:

JGU HPC Modules
---------------- /apps/easybuild/current/core/modules/all ----------------
   bio/GROMACS/2023.3-foss-2023a-PLUMED-2.9.0
   bio/GROMACS/2024.3-foss-2023a-CPU          (D)

Usage

When loading bio/GROMACS/2024.3-foss-2023a-CPU or a newer version, you can choose between four different binaries on the command line:

  • gmx - single precision without OpenMPI
  • gmx_mpi - single precision with OpenMPI
  • gmx_d - double precision without OpenMPI
  • gmx_d_mpi - double precision with OpenMPI
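For example, to load the default module and check which build you have picked up (the module name is taken from the listing above):

module load bio/GROMACS/2024.3-foss-2023a-CPU
gmx --version        # prints build details such as precision, MPI support and SIMD level
gmx_mpi --version    # the same check for the MPI-enabled binary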

To test your workflow, you can use the benchmark set provided by the GROMACS developers.

An example workflow for some single-node CPU tests could be:

wget https://ftp.gromacs.org/pub/benchmarks/ADH_bench_systems.tar.gz
tar xfz ADH_bench_systems.tar.gz
rm -f ADH_bench_systems.tar.gz
cd ADH/adh_cubic
gmx grompp -f pme_verlet.mdp -c conf.gro -p topol.top -o bench.tpr
gmx mdrun -ntmpi 128 -noconfout -pin on -nstlist 200 -v -dlb yes -notunepme -s bench.tpr
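To find a good rank/thread split for your own system, a simple sweep over a few combinations is often enough. The sketch below assumes a 128-core node and compares the performance reported in each log file; adjust the values to the node you are running on:

for ntmpi in 128 64 32; do
    ntomp=$((128 / ntmpi))                     # keep ntmpi * ntomp equal to the core count
    gmx mdrun -ntmpi ${ntmpi} -ntomp ${ntomp} -noconfout -pin on -nstlist 200 \
              -dlb yes -notunepme -s bench.tpr -g bench_${ntmpi}x${ntomp}.log
done
grep 'Performance:' bench_*.log                # compare ns/day between the runs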

In the same way you can use GROMACS with CUDA, but you have to make sure that you use the best flags and settings before running the simulation. Note that only the MPI variants can run with more than one PME rank, and not every simulation can use all features of GPU acceleration. GROMACS is built with cuFFTmp, which enables multi-GPU and multi-node acceleration.

wget https://ftp.gromacs.org/pub/benchmarks/ADH_bench_systems.tar.gz
tar xfz ADH_bench_systems.tar.gz
rm -f ADH_bench_systems.tar.gz
cd ADH/adh_cubic
gmx grompp -f pme_verlet.mdp -c conf.gro -p topol.top -o bench.tpr

export GMX_GPU_PME_DECOMPOSITION=1    # allow PME decomposition across multiple GPUs
export GMX_USE_GPU_BUFFER_OPS=1       # keep coordinate/force buffer operations on the GPU
export GMX_DISABLE_GPU_TIMING=1       # disable GPU timing to avoid its overhead
export GMX_ENABLE_DIRECT_GPU_COMM=1   # enable direct GPU-to-GPU communication
export GMX_FORCE_UPDATE_DEFAULT_GPU=1 # run update and constraints on the GPU by default

mpirun -np <Num_of_GPUs_per_Node> gmx_mpi mdrun -npme <Num_of_GPUs_per_Node/2> -noconfout -pin on -nstlist 200 -v -dlb yes -notunepme -s bench.tpr -nb gpu -pme gpu -bonded gpu -update gpu -pmefft gpu
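For instance, on a node with four GPUs the placeholders above resolve to four MPI ranks and two PME ranks (the GPU count is an assumption; adjust it to the hardware you request):

mpirun -np 4 gmx_mpi mdrun -npme 2 -noconfout -pin on -nstlist 200 -v -dlb yes -notunepme -s bench.tpr -nb gpu -pme gpu -bonded gpu -update gpu -pmefft gpu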

The parameters above are for a full GPU node; you will need to do some benchmarking with your own simulation to optimise them. Additional features and settings can be found in the GROMACS documentation on getting good performance from mdrun.