Some software is installed globally, but in special locations or with several versions. The modules mechanism allows users to select which software and version they want to use in their current session. The module system used on the Linux machines in the CS network is Lmod.
To see all available commands:
module help
To list all available modules:
module avail
Some modules only become available after other modules have been loaded.
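Lmod also provides a spider command that searches the entire module tree, including modules that only become visible once their prerequisites are loaded. For example, to search for cudnn:
module spider cudnn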
To load a module (e.g. mymodule) with the default version:
module load mymodule
To load several modules at once:
module load mymodule yourmodule hismodule
To load a module with a specific version (e.g. 2.1):
module load mymodule/2.1
A module can only be loaded once; when loading a different version, the previously loaded one is automatically unloaded first.
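For example, assuming versions 2.1 and 2.2 of mymodule are both installed, the second command below automatically replaces the first:
module load mymodule/2.1
module load mymodule/2.2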
To try to load a module, but don't fail if it doesn't exist:
module try-load nonexisting
To unload a module (mymodule):
module unload mymodule
To list loaded modules:
module list
To unload all loaded modules:
module purge
To load modules automatically on login, Lmod reads the ~/.lmodrc file. This file is sourced by bash and (t)csh, so it should contain only lines beginning with 'module '.
To load e.g. tensorflow by default upon login, this file should contain:
module try-load tensorflow
Or run:
touch ~/.lmodrc
echo 'module try-load tensorflow' >> ~/.lmodrc
Following is a list of some of the modules available (use module avail to see the full list relevant to the machine).
Some software requires a specific version of gcc; we have several versions installed. There is always a default gcc, but if you need a different version, e.g. 7.4.0 (assuming it is installed):
module load gcc/7.4.0
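After loading, you can confirm which compiler is on the path with:
gcc --version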
On machines with an nvidia graphics card, the nvidia kernel driver is always loaded and the appropriate libraries are set up accordingly. However, if the nvidia binaries (such as nvidia-smi) are needed, they can be added to the path with:
module load nvidia
The default version matches the kernel driver's version, so there is no need to load a different version (and it might not work).
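Once the nvidia module is loaded, the driver version and the current state of the card can be inspected with:
nvidia-smi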
cuda is the parallel computing platform from nvidia, and it works only on nvidia graphics hardware. However, it is also possible to compile cuda code on computers without nvidia hardware (but not to run it). To set up the cuda environment:
module load cuda
Note that some cuda versions require a specific gcc version; in that case the proper gcc module will be loaded automatically.
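For example, assuming a cuda source file named example.cu (the file name here is only a placeholder), it can be compiled with the compiler the module puts on the path; running the resulting binary still requires nvidia hardware:
nvcc -o example example.cu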
cudnn is a cuda deep neural network library from nvidia, and is available only after cuda has been loaded (and only for some versions of cuda). To load:
module load cudnn
It is possible to load both cuda and cudnn in one line with:
module load cuda cudnn
A version of opencv that supports cuda and python is available using:
module load opencv
This will automatically load the appropriate cuda module.
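As a quick sanity check of the python bindings (assuming the module exposes the standard cv2 package to the system python3):
python3 -c 'import cv2; print(cv2.__version__)'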
tensorflow is installed for both cpu and gpu. The default is set depending on the machine's hardware. To load:
module load tensorflow
When loading the gpu version (the default on nvidia hardware), the appropriate cuda and cudnn modules are automatically loaded.
Note that both the cpu and gpu versions of tensorflow might not run on old machines (with either an old gpu or an old cpu). Also note that loading the wrong version (via tensorflow-all) will probably not work properly.
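To check which devices tensorflow can see (assuming a tensorflow 2.x module and the system python3):
python3 -c 'import tensorflow as tf; print(tf.config.list_physical_devices("GPU"))'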
To use a non-default version of firefox:
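Assuming the desired version appears in the output of module avail firefox, it follows the same pattern as above:
module load firefox/<version>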
To use a non-default version of golang:
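Similarly, assuming the version is listed by module avail golang:
module load golang/<version>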
To run a singularity container:
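A minimal sketch, assuming a singularity module is available and the path to a container image is supplied by the user:
module load singularity
singularity run <path-to-image>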
hurcs is a module container with modules from the HURCS center (moriah cluster). To load:
module load hurcs
and then run
module avail
to see available modules.