Some software is installed globally but in special locations or with several versions. The modules mechanism allows users to select which software/version they want to use in their current session. The modules software used on the linux machines in the cs network is lmod.
To see all available commands
module help
To list all available modules
module avail
Some modules only become available after other modules have been loaded
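Since module avail only lists modules that can be loaded right now, Lmod's standard module spider command can be used to search all modules, including those that only become available after another module has been loaded (mymodule below is a placeholder name):

```shell
# search every module known to Lmod, including ones hidden
# behind a prerequisite module
module spider

# show details for one specific module, including what must be
# loaded first to make it available
module spider mymodule
```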
To load a module (e.g. mymodule) with the default version
module load mymodule
To load several modules at once
module load mymodule yourmodule hismodule
To load a module with specific version (e.g. 2.1)
module load mymodule/2.1
A module can only be loaded once; when loading a different version, the previously loaded one is automatically unloaded first.
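As a sketch of this behavior (mymodule and the version numbers are illustrative):

```shell
module load mymodule/2.1
module load mymodule/3.0   # Lmod unloads mymodule/2.1 automatically first
module list                # only mymodule/3.0 appears in the list
```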
To try to load a module, but don't fail if it doesn't exist:
module try-load nonexisting
To unload a module (mymodule):
module unload mymodule
To list loaded modules
module list
To unload all loaded modules
module purge
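A typical session combining the commands above might look like this (the module names are illustrative):

```shell
module load mymodule yourmodule   # load two modules at once
module list                       # shows mymodule and yourmodule
module unload yourmodule          # unload just one of them
module purge                      # back to a clean environment
module list                       # no modules loaded
```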
To load modules on login, lmod reads the ~/.lmodrc file. This file is sourced by bash and (t)csh so it should contain only lines beginning with 'module '.
To load e.g. tensorflow by default upon login, this file should contain:
module try-load tensorflow
Or run:
echo 'module try-load tensorflow' >> ~/.lmodrc
The following is a partial list of available modules that have special use cases. To see the full list of available modules, use
module avail
Some modules also have a natively installed version, which works without the module load command
On machines with an nvidia graphics card, a kernel nvidia driver is always loaded and the appropriate libraries are set up accordingly. However, if the nvidia binaries (such as nvidia-smi) are needed, they can be added to the path with:
module load nvidia
The default version matches the kernel driver's version, so there is no need to load a different version (and it might not work).
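For example, to inspect the driver and GPU state after loading the module:

```shell
module load nvidia
nvidia-smi   # shows the driver version, the GPUs and their current utilization
```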
cuda is nvidia's parallel computing platform, and works only on nvidia graphics hardware. However, it is also possible to compile cuda code on computers without nvidia hardware (but not to run it). To set up the cuda environment:
module load cuda
Note that some cuda versions require a specific gcc version, and in that case the proper gcc module will be loaded automatically. Also note that not all nvidia drivers and not all nvidia hardware support all cuda versions.
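A minimal sketch of compiling cuda code after loading the module (hello.cu is a placeholder source file; as noted above, compiling works even without an nvidia card, but running the resulting binary does not):

```shell
module load cuda
nvcc --version           # verify the cuda toolkit is now on the path
nvcc -o hello hello.cu   # compile; running ./hello requires nvidia hardware
```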
tensorflow is installed for both cpu and gpu. The default is set depending on the machine's hardware. To load:
module load tensorflow
When loading the gpu version (the default on nvidia hardware) the appropriate cuda and cudnn modules are automatically loaded.
Note that both the cpu and gpu versions of tensorflow might not run on old machines (with either an old gpu or an old cpu). Also note that loading the wrong version (via tensorflow-all) will probably not work properly.
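To check which devices the loaded tensorflow actually sees, the standard tensorflow API can be queried from the command line (an empty list means no usable GPU was found on this machine):

```shell
module load tensorflow
python -c 'import tensorflow as tf; print(tf.config.list_physical_devices("GPU"))'
```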
Singularity is now named apptainer. The singularity module is available for backward compatibility, but apptainer is the one actually loaded.
The hurcs module is a container module providing modules from the HURCS center (the moriah cluster). To load:
module load hurcs
and then run
module avail
to see available modules.
This module is automatically loaded on the moriah cluster.
Loading it elsewhere should work, but the available submodules depend on the machine's distribution.
Java is available natively, or, if a different version is needed, via a module.
There are two modules: 'java' for the openjdk versions, and 'java-oracle' for the oracle versions. The 'java' and 'java-oracle' modules can't be loaded simultaneously.
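For example (the exact versions printed depend on the machine):

```shell
module load java
java -version            # prints an openjdk build

module load java-oracle  # conflicts with 'java', so both cannot stay loaded
```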
Python is installed natively. Other versions of python are available via a module.
Note that the non-native (module) versions of python are usually minimally installed, and any other module that requires python, or contains a python package, might not work with a non-native python version.
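To see which python versions are provided as modules and load one:

```shell
module avail python   # list only the python modules
module load python    # load the default module version
python --version
```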