Login Nodes
Login nodes are the staging area for your work on MOGON.
Processes running directly on these nodes should be limited to tasks such as data transfer and management, data analysis, editing, compiling code, and debugging, as long as these tasks are not resource-intensive (be it in memory, CPU, network, or I/O).
In turn, this means jobs should not run on login nodes. Any resource-intensive work must be performed on the compute nodes through our batch system.
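For example, a minimal batch script might look like the following sketch (the partition name, resource values, and `my_program` are illustrative placeholders, not site defaults):

```bash
#!/bin/bash
#SBATCH --job-name=example       # name shown in squeue
#SBATCH --partition=<partition>  # replace with a partition you may use
#SBATCH --ntasks=1               # a single task
#SBATCH --mem=2G                 # memory for the job
#SBATCH --time=00:10:00          # wall-clock limit

# The work itself runs on a compute node, not the login node
./my_program
```

Submitting it with `sbatch job.sh` hands the work to the scheduler instead of the login node.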
Interactive Work
Interactive work, such as testing code, can be done via interactive jobs.
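A minimal sketch using Slurm's `srun` (partition and resource limits are illustrative placeholders):

```bash
# Request an interactive shell on a compute node
srun --partition=<partition> --ntasks=1 --mem=2G --time=01:00:00 --pty bash -i
```

Once the allocation is granted, the shell runs on a compute node, so interactive testing does not burden the login node.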
Resource Limits on Login Nodes
To give everyone a fair share of the login nodes' resources, we impose the following limits:
| Resource | Limit |
|---|---|
| Memory | $10\thinspace\text{GB}$ |
| CPU cores | 4 |
Any process consuming excessive resources on a login node may be killed, especially when it begins to impact other users on that node. If a process causes significant problems for the system, it will be killed immediately and the user will be contacted via e-mail.
Login and Service Nodes
You can use the following service nodes to log in to MOGON:
MOGON NHR
| Service Node | FQDN | Description |
|---|---|---|
| mogon-nhr-01 | mogon-nhr-01.zdv.uni-mainz.de | Login Node |
| hpcgate | hpcgate.zdv.uni-mainz.de | Jump Host |
MOGON II
| Service Node | FQDN | Description |
|---|---|---|
| login21 | miil01.zdv.uni-mainz.de | Login Node |
| login22 | miil02.zdv.uni-mainz.de | Login Node |
| login23 | miil03.zdv.uni-mainz.de | Login Node |
| hpcgate | hpcgate.zdv.uni-mainz.de | Jump Host |
Since you access the MOGON service nodes through the HPCGATE, you can omit `zdv.uni-mainz.de`; e.g., for login21, `miil01` is sufficient.
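For example, with OpenSSH you can hop through the jump host in a single command (replace `<user>` with your account name):

```bash
# Jump via hpcgate, then connect to login21
ssh -J <user>@hpcgate.zdv.uni-mainz.de <user>@miil01.zdv.uni-mainz.de

# From a shell on hpcgate itself, the short host name is enough:
ssh miil01
```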
Common Pitfalls
On login nodes, Slurm commands are your tool for interacting with the scheduling system. They are resource-intensive, though! Please avoid placing them in shell loops.
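For example, a submission loop like the following (an illustrative sketch; `job.sh` is a placeholder) hits the scheduler with one request per iteration:

```bash
# Anti-pattern: 100 separate sbatch calls strain the scheduler
for i in $(seq 1 100); do
    sbatch job.sh "$i"
done
```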
Replace such a loop with a job array, as sketched below.
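A sketch of the job-array equivalent (the range and script name are illustrative):

```bash
# One submission covers all 100 tasks; Slurm tracks them as a single array
sbatch --array=1-100 job.sh

# Inside job.sh, pick the task's input via the array index, e.g.:
#   ./my_program "$SLURM_ARRAY_TASK_ID"
```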
Please also refrain from using something like `watch squeue` to act on a job's status: it strains the scheduler, and there are better solutions. To trigger follow-up work once a job finishes, for example, you can use job dependencies instead, as in the sketch below.
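A minimal sketch (the script names are placeholders):

```bash
# Capture the first job's ID in machine-readable form
jobid=$(sbatch --parsable preprocess.sh)

# Start the follow-up job only after the first one finishes successfully
sbatch --dependency=afterok:"$jobid" analyze.sh
```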