What is MOGON?

MOGON is the High-Performance Computing cluster at Johannes Gutenberg University Mainz (JGU). It is named after the Roman city Mogontiacum, from which present-day Mainz emerged over the course of history.

High-Performance Computing, or HPC for short, uses a supercomputer made up of hundreds or thousands of smaller computers that are connected through a high-speed network and run in parallel, performing calculations that are too large for a standalone computer or too complex to be solved within a reasonable time frame.
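On such systems, work is typically split across many processes that run on different nodes and communicate over the network, for example via MPI. The following sketch is only a minimal illustration of this model, assuming an MPI implementation (such as Open MPI) and a C compiler are available; it is not specific to MOGON.

```c
/* Minimal MPI sketch: every process reports its rank.
 * Assumes an MPI implementation (e.g. Open MPI) is installed;
 * compile with `mpicc hello.c -o hello` and launch with
 * `mpirun -n 4 ./hello` (or through the cluster's batch system,
 * where the processes are spread across several nodes). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* ID of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```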

The data center at Johannes Gutenberg University Mainz operates supercomputers that offer more than 53,000 CPU cores with a peak performance of 2 PFLOPS (2 quadrillion floating-point operations per second), along with 9 PB of storage on a parallel file system, for scientists in Rhineland-Palatinate. This allows scientists from various disciplines, such as high-energy physics, meteorology, the life sciences and many more, to achieve their competitive research objectives. Problems that might take days, weeks or months on a desktop computer might take only minutes, hours or days on a supercomputer.

Clusters at JGU

MOGON NHR Süd-West

Nodes: 590 CPU nodes, 10 GPU nodes
Processors: 1200 × AMD EPYC 7713 (76,800 CPU cores)
Memory: 186 TB
Accelerators: 40 × Nvidia A100-SXM4 (40 GB)

MOGON NHR Süd-West was acquired in 2022 and has been available since 2023. It features 590 compute nodes, each equipped with two 64-core processors (AMD EPYC 7713).

MOGON KI

Nodes: 101 CPU nodes, 14 GPU nodes
Processors: 230 × AMD EPYC 7713 (14,720 CPU cores)
Memory: 62.8 TB
Nvidia Accelerators: 64 × Nvidia A40 (48 GB), 32 × Nvidia A100 (80 GB)
AMD Accelerators: 8 × AMD MI250

MOGON KI was acquired in parallel with MOGON NHR and is available to members of JGU. It has been in operation since 2023. In addition to its compute nodes with dual AMD EPYC 7713 CPUs, it includes 8 GPU nodes with 8 NVIDIA A40 each, 4 GPU nodes with 8 NVIDIA A100 each, and 2 GPU nodes with 4 AMD MI250 each.

MOGON II

Nodes: 1,876 CPU nodes, 14 GPU nodes
Processors: 1672 × Xeon E5-2630v4, 2272 × Xeon Gold 6130 (53,072 CPU cores)
Memory: 190.9 TB
Nvidia Accelerators: 84 × GTX 1080 Ti, 112 × Nvidia V100

ZDV’s MOGON II cluster was purchased in 2016/17. The system consists of $1876$ individual nodes, of which $822$ are each equipped with two 10-core Broadwell processors (Intel Xeon E5-2630v4) and $1136$ are each equipped with two 16-core Skylake processors (Xeon Gold 6130), connected via Omni-Path at $100\thinspace\text{Gbit/s}$ (fat tree). In total, this results in around $50000$ cores.

Each node has RAM ranging from $64\thinspace\text{GiB}$ to $1536\thinspace\text{GiB}$ and either a $200\thinspace\text{GB}$ or $400\thinspace\text{GB}$ SSD for temporary files.

At the time of installation, the MOGON II cluster was ranked 65th in the TOP500 list and 51st in the GREEN500. MOGON II is operated at the JGU by the Center for Data Processing (ZDV) and the Helmholtz Institute Mainz (HIM).

MOGON

Nodes: 555 CPU nodes, 13 GPU nodes, 2 Xeon Phi nodes
Processors: 2220 × AMD Opteron 6272 (35,520 CPU cores)
Memory: 89 TB
Nvidia Accelerators: 52 × GeForce GPUs (5–6 GB)
Intel Accelerators: 8 × Xeon Phis

The original MOGON cluster, now decommissioned, was purchased in 2012, and GPU nodes were added in a second phase in 2013. The system consisted of $555$ individual nodes, each with $4$ AMD CPUs. Each CPU had $16$ cores, for a total of $35520$ cores.

Each node had RAM between $128\thinspace\text{GiB}$ and $512\thinspace\text{GiB}$ and also provided $1.5\thinspace\text{TB}$ of local hard drive space for temporary files.

In addition to a small GPU training cluster, there were also $13$ nodes with $4$ GPUs per node ($5-6\thinspace\text{GB}$ per GPU) and $2$ nodes with $4$ Xeon Phis per node. The GPU nodes and the $2$ Phi nodes each had $2$ Intel CPUs with $8$ cores and $64\thinspace\text{GB}$ of RAM.

For networking, users had $1\thinspace\text{Gbit}$ Ethernet and QDR InfiniBand available. The InfiniBand network was laid out as a “full” fat tree.

Furthermore, several private sub-clusters from different working groups are managed at ZDV and made available to the respective users.

The following clusters are private clusters of the Helmholtz Institute Mainz (HIM). Further information on research, funding programs and contact details can be found here.

Clover started operating in 2014. It consisted of $320$ nodes with $2$ Intel Ivybridge processors each. Every CPU had $8$ cores, so that the overall system had $5120$ cores. The CPUs ran at $2.6\thinspace\text{GHz}$. Each node had $32\thinspace\text{GB}$ of RAM ($2\thinspace\text{GB}$ per core).

The nodes were connected via QDR InfiniBand and had access to $200\thinspace\text{TB}$ of central storage.

HIMster had been in operation since 2011. It consisted of $130$ nodes, each with $2$ AMD Opteron processors. Each CPU had $8$ cores, so that the entire system had $2080$ cores. The CPUs were clocked at $2.3\thinspace\text{GHz}$. Each node had $32\thinspace\text{GB}$ of RAM ($2\thinspace\text{GB}$ per core); $14$ of the nodes had $64\thinspace\text{GB}$ ($4\thinspace\text{GB}$ per core).

The nodes were connected via QDR InfiniBand and had access to $557\thinspace\text{TB}$ of central storage running the Fraunhofer file system (FhGFS).

References

Hardware Specs

A more detailed account of our hardware resources can be found here.

Cluster Partitioning

If you are looking for our partitioning scheme, click here.