Install modules locally
You can take an Easyconfig file and install the module in your home directory.
You always have the possibility to install software from an Easyconfig file just for yourself (or your group). There is an EasyBuild option called --pretend that does the build/installation in the folder $HOME/easybuildinstall.
Here is a step-by-step guide on how to do this.
- Find the Easyconfig and determine the generation of the toolchain it uses.
Go to https://github.com/easybuilders/easybuild-easyconfigs and find the Easyconfig file you want to install. Copy it to MOGON NHR or MOGON KI into a folder in your $HOME where you will collect local Easyconfigs, for example local-eb-configs. In this tutorial we will work with the file GROMACS-2024.1-RAMD-2.1-foss-2023a-CUDA-12.1.1.eb. From the name of the Easyconfig it is clear that the toolchain foss is from the year 2023 - exactly what we need. Activate the corresponding EasyBuild setup:
source /apps/easybuild/ebsetup-2023.sh
For toolchains of the year 2024 you would need to adjust the year accordingly.
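As a sketch, the preparation in this step might look like the following (the folder name local-eb-configs follows the example above; how you copy the .eb file into it - scp, wget, or a browser download - is up to you):

```shell
# Create a folder in $HOME for collecting local Easyconfigs and switch into it.
# The .eb file itself has to be copied here, e.g. via scp from your workstation.
mkdir -p "$HOME/local-eb-configs"
cd "$HOME/local-eb-configs"

# Activate the EasyBuild setup matching the toolchain year (2023 here).
# The guard lets the snippet run outside MOGON without failing.
if [ -r /apps/easybuild/ebsetup-2023.sh ]; then
  source /apps/easybuild/ebsetup-2023.sh
fi
```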
- Determine if the Easyconfig is for CPU or GPU architecture.
Again, from the name of the Easyconfig it is clear that this is a GPU installation, since CUDA is a GPU technology. This will be required later in steps 4 and 5.
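If you script these steps, the value for the --arch flag used later can be derived from the filename. A minimal sketch (the variable names are purely illustrative; "core" is the CPU value used in step 7):

```shell
# Derive the architecture from the Easyconfig filename:
# a CUDA suffix means a GPU build, otherwise it is a CPU ("core") build.
eb_file="GROMACS-2024.1-RAMD-2.1-foss-2023a-CUDA-12.1.1.eb"
case "$eb_file" in
  *CUDA*) arch=cuda ;;  # GPU installation
  *)      arch=core ;;  # CPU installation
esac
echo "$arch"  # prints "cuda" for this example
```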
- Fetch the software source code with EasyBuild.
In the folder with the Easyconfig, execute
eb-jgu --fetch GROMACS-2024.1-RAMD-2.1-foss-2023a-CUDA-12.1.1.eb
This will download the source code files.
- Check if dependencies are installed.
After successful fetching of the source code, check if all dependencies are installed:
eb-jgu -D --year=2023 --arch=cuda GROMACS-2024.1-RAMD-2.1-foss-2023a-CUDA-12.1.1.eb
Here we specified the toolchain year determined earlier and the CUDA architecture. If your software is for CPU, just omit the --arch flag.
- Install the Easyconfig.
Here comes the essence of the local installation:
eb-jgu --job --year=2023 --arch=cuda --pretend GROMACS-2024.1-RAMD-2.1-foss-2023a-CUDA-12.1.1.eb
This submits a SLURM job that tries to install the software. Sometimes you are lucky and the installation goes through. In our case, we will find that the installation has failed, because we have to set additional parameters. The result of the installation will be in the file GROMACS-2024.1-RAMD-2.1-foss-2023a-CUDA-12.1.1-{jobid}.out
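To check the result from the job's output file, a small helper like the following can be handy (this function is a hypothetical sketch, not part of eb-jgu; it simply greps the log for COMPLETED):

```shell
# Hypothetical helper: report whether an EasyBuild job log signals success.
# The log filename pattern ("...-{jobid}.out") is described above.
check_eb_log() {
  if grep -q "COMPLETED" "$1"; then
    echo "build succeeded"
  else
    echo "build failed or still running: inspect $1"
  fi
}
```

Usage: check_eb_log GROMACS-2024.1-RAMD-2.1-foss-2023a-CUDA-12.1.1-{jobid}.out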
- Fix issues if they arise.
For this specific case we need to set two additional parameters: --cuda-compute-capabilities=8.0,8.6 to tell EasyBuild which CUDA compute capabilities our GPUs (A100 and A40) support, and --skip-test-step to skip the testing phase of GROMACS, in which one test always fails. Please don't assume that these two flags have to be set every time; they are specific to GROMACS here. So your full command becomes:
eb-jgu --job --year=2023 --arch=cuda --pretend GROMACS-2024.1-RAMD-2.1-foss-2023a-CUDA-12.1.1.eb --cuda-compute-capabilities=8.0,8.6 --skip-test-step
- Activate your newly built local module.
Check the output file of the job after it ends. If it says COMPLETED, or at least doesn't fail - well done!
You can activate your local module path with:
module use /home/{username}/easybuildinstall/{year}/{arch}/modules/all
where {username} is your username, and {year} and {arch} are the values determined earlier (for a CPU installation, the arch is core).
In case you've used --arch=cuda, you also need to activate the CUDA modules:
module use /apps/easybuild/2023/cuda/modules/all
Now you can actually load your module. Check the list of available modules with module avail, find your local module (somewhere close to the top of the list) and load it:
module load bio/GROMACS/2024.1-RAMD-2.1-foss-2023a-CUDA-12.1.1
The commands of this step must also be repeated in your SLURM script when submitting a job.
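Put together, a minimal SLURM batch script repeating these module commands could look like the sketch below. The partition name, time limit, and GPU count are placeholder assumptions to adapt to your account, and the final GROMACS invocation must be replaced with your actual run:

```shell
# Write a sketch of a batch script that makes the local module visible
# and loads it before running GROMACS. Resource values are placeholders.
cat > run-gromacs.slurm <<'EOF'
#!/bin/bash
#SBATCH --job-name=gromacs-local
#SBATCH --partition=<your-gpu-partition>
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00

# Make the locally installed modules visible (year 2023, arch cuda here)
module use "$HOME/easybuildinstall/2023/cuda/modules/all"
# CUDA installations also need the central CUDA module tree
module use /apps/easybuild/2023/cuda/modules/all

module load bio/GROMACS/2024.1-RAMD-2.1-foss-2023a-CUDA-12.1.1

# Replace with your actual GROMACS run, e.g. srun gmx mdrun ...
srun gmx --version
EOF
```

Submit it as usual with sbatch run-gromacs.slurm.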