Login Nodes

Login nodes are the staging area for your work on MOGON.

Processes running directly on these nodes should be limited to tasks such as data transfer and management, data analysis, editing, compiling code, and debugging, as long as these tasks are not resource-intensive (be it in terms of memory, CPU, network, or I/O).

In turn, this means jobs should not run on login nodes. Any resource-intensive work must be performed on the compute nodes through our batch system.
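
As a minimal sketch of such a batch submission, assuming a hypothetical job script jobscript.sh (the partition name, resources, and executable are placeholders):

#!/bin/bash
#SBATCH --job-name=example        # illustrative job name
#SBATCH --partition=<partition>   # placeholder: pick a partition available to your account
#SBATCH --ntasks=1                # one task ...
#SBATCH --mem=2G                  # ... with 2 GB of memory
#SBATCH --time=00:30:00           # ... and a 30 minute walltime limit

srun ./my_program                 # placeholder for your actual executable

Submitted from a login node with sbatch jobscript.sh, the actual work then runs on a compute node.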

Login node misuse
Our policies disallow using login nodes for work that inhibits the workflow of other users. Repeated misuse of the login nodes may result in your group administrator being notified and in your account being suspended.
Interactive work

Interactive work, such as testing code, can be done via interactive jobs.
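
A minimal sketch of how such an interactive job could be requested through Slurm; the partition name and resource values are placeholders:

# request an interactive shell on a compute node
srun --partition=<partition> --ntasks=1 --mem=2G --time=01:00:00 --pty bash -i

The shell returned by this command runs on the allocated compute node, so any testing done inside it no longer competes for login node resources.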

Resource Limits on Login Nodes

To give everyone a fair share of the login nodes' resources, we impose the following limits.

| Resource  | Limit                     |
| --------- | ------------------------- |
| Memory    | $ 10\thinspace\text{GB} $ |
| CPU cores | 4                         |

Any process that consumes excessive resources on a login node may be killed, especially when it begins to impact other users on that node. If a process is creating significant problems for the system, it will be killed immediately and the user will be contacted via e-mail.
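
To check your own footprint on a login node, standard Linux tools are sufficient; for example:

# list your own processes on this login node, sorted by CPU usage
ps -u $USER -o pid,pcpu,pmem,etime,cmd --sort=-pcpu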

Login and Service Nodes

You can use the following service nodes to log in to MOGON:

MOGON NHR

| Service Node | FQDN                          | Description |
| ------------ | ----------------------------- | ----------- |
| mogon-nhr-01 | mogon-nhr-01.zdv.uni-mainz.de | Login Node  |
| hpcgate      | hpcgate.zdv.uni-mainz.de      | Jump Host   |

MOGON II

| Service Node | FQDN                     | Description |
| ------------ | ------------------------ | ----------- |
| login21      | miil01.zdv.uni-mainz.de  | Login Node  |
| login22      | miil02.zdv.uni-mainz.de  | Login Node  |
| login23      | miil03.zdv.uni-mainz.de  | Login Node  |
| hpcgate      | hpcgate.zdv.uni-mainz.de | Jump Host   |

Since you access the MOGON service nodes through the HPCGATE, you can omit the zdv.uni-mainz.de suffix; e.g. for login21, just miil01 is sufficient (see the sketch below).
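
A minimal sketch of what this might look like in practice, assuming OpenSSH; the username is a placeholder:

# one-off login via the jump host (username is a placeholder)
ssh -J hpcgate.zdv.uni-mainz.de <username>@miil01

# or keep the jump host in ~/.ssh/config so that a plain "ssh miil01" suffices
Host hpcgate
    HostName hpcgate.zdv.uni-mainz.de
    User <username>

Host miil01
    User <username>
    ProxyJump hpcgate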

Common Pitfalls

On login nodes, Slurm commands are your tool for interacting with the scheduling system. They are resource-intensive, though! Please avoid placing them in loops like

for i in `seq ...`; do
  sbatch ... cmd $i
done

Replace this with a job array.
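
A minimal sketch of the job-array equivalent, assuming a hypothetical job script array_job.sh and an illustrative index range of 1-100:

#!/bin/bash
#SBATCH --array=1-100       # one array task per former loop iteration

# $SLURM_ARRAY_TASK_ID plays the role of the loop variable $i
cmd $SLURM_ARRAY_TASK_ID

A single sbatch array_job.sh then submits all tasks with one call to the scheduler instead of one call per iteration.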

Please also refrain from using something like watch squeue to act on a job's status. It strains the scheduler, and there are better solutions. To trigger subsequent work once a job finishes, for example, you could use job dependencies instead.
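
A minimal sketch of such a dependency chain; the script names are placeholders:

# capture the job ID of the first step
jobid=$(sbatch --parsable preprocess.sh)

# the second step starts only after the first one has completed successfully
sbatch --dependency=afterok:$jobid analyse.sh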