Some software is installed globally but in special locations or with several versions. The modules mechanism allows users to select which software/version they want to use in their current session. The modules software used on the linux machines in the cs network is lmod.
To see all available commands:
module help
To list all available modules:
module avail
Some modules only become available after other modules have been loaded.
To load a module (e.g. mymodule) with the default version:
module load mymodule
To load several modules at once:
module load mymodule yourmodule hismodule
To load a module with a specific version (e.g. 2.1):
module load mymodule/2.1
Only one version of a module can be loaded at a time; when a different version is loaded, the previous one is automatically unloaded first.
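For example (the version numbers here are only illustrative; use whatever module avail shows):

```
module load mymodule/2.1
module load mymodule/2.3   # mymodule/2.1 is unloaded automatically first
```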
To try to load a module, but don't fail if it doesn't exist:
module try-load nonexisting
To unload a module (mymodule):
module unload mymodule
To list loaded modules:
module list
To unload all loaded modules:
module purge
Automatically Loading Modules on Login
On login, lmod reads the ~/.lmodrc file. This file is sourced by both bash and (t)csh, so it should contain only lines beginning with 'module '.
To load e.g. tensorflow by default upon login, this file should contain:
module try-load tensorflow
touch ~/.lmodrc
echo 'module try-load tensorflow' >> ~/.lmodrc
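A slightly more careful variant creates the file if needed and appends the line only if it is not already there (plain grep/echo, nothing lmod-specific):

```shell
RCFILE="$HOME/.lmodrc"
LINE='module try-load tensorflow'
# create the file if it does not exist, then append the line at most once
touch "$RCFILE"
grep -qxF "$LINE" "$RCFILE" || echo "$LINE" >> "$RCFILE"
```

Running it a second time leaves the file unchanged, so it is safe to put in setup scripts.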
The following is a list of some of the available modules (use module avail to see the full list relevant to the machine).
Some software requires a specific version of gcc; several versions are installed. There is always a default gcc, but if you need a different version, e.g. 4.9 (assuming it is installed):
module load gcc/4.9
On machines with an nvidia graphics card, the nvidia kernel driver is always loaded and the appropriate libraries are set up accordingly. However, if the nvidia binaries (such as nvidia-smi) are needed, they can be added to the path with:
module load nvidia
The default version matches the kernel driver's version, so there is no need to load a different version (and it might not work).
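For example, to check that the binaries are on the path and see the driver version:

```
module load nvidia
nvidia-smi   # prints the driver version and the GPUs in the machine
```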
cuda is the parallel computing platform from nvidia, and works only on nvidia graphics hardware. However, it is also possible to compile cuda code on computers without nvidia hardware (but not to run it). To set up the cuda environment:
module load cuda
Note that some cuda versions require a specific gcc version; in such cases the proper gcc module is loaded automatically.
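You can verify which gcc was pulled in by inspecting the loaded modules, e.g.:

```
module load cuda
module list   # shows cuda plus the gcc version it loaded automatically
```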
cudnn is nvidia's cuda deep neural network library, and is available only after cuda has been loaded (and only for some versions of cuda). To load:
module load cudnn
It is possible to load both cuda and cudnn in one line with:
module load cuda cudnn
A version of opencv that supports cuda and python is available using:
module load opencv
This will automatically load the appropriate cuda module
tensorflow is installed for both cpu and gpu. The default is set depending on the machine's hardware. To load:
module load tensorflow
When loading the gpu version (the default on nvidia hardware) the appropriate cuda and cudnn modules are automatically loaded.
Note that both the cpu and gpu versions of tensorflow might not run on old machines (with either an old gpu or an old cpu). Also note that loading the wrong version (via tensorflow-all) will probably not work properly.
theano is installed and supports both cpu and gpu, and for python2.7 and python3.5. To load:
module load theano
When loading theano on gpu machines, cuda and cudnn modules are automatically loaded.
Note that theano loaded this way might not work on older machines (gpu or cpu).
The "device = cuda" option might not work on all machines. In these cases it is best to use the "device = gpu" option instead.
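The device can be selected per run through Theano's standard THEANO_FLAGS environment variable (the script name here is hypothetical):

```
THEANO_FLAGS='device=gpu' python my_script.py
```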
On machines with infiniband, theano might not work properly: it may complain about forks in mpi environments and occasionally segfault. This can be solved by setting the environment variable:
Jupyter is available. By default it contains only python2 and python3 kernels. To load:
module load jupyter
cling (an interactive c++ interpreter) is available. This module also adds c++ and c kernels to jupyter. To load:
module load cling
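After loading, the new kernels should show up in jupyter's kernel list:

```
module load jupyter cling
jupyter kernelspec list   # should now include c and c++ kernels alongside python2/python3
```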