Containers give users an isolated environment with somewhat more control inside than on the shared CS systems. Containers do not have all the overhead of Private Virtual Machines, but users have less control inside them.
The advantages of using container images instead of installing software directly are portability and simplicity of installation. The disadvantages are the complexity of creating the images and the disk space the images require.
Singularity provides a partially isolated environment. The filesystem is isolated (like chroot), but everything else (network, users, PIDs) is shared with the host machine. This makes Singularity ideal for a multi-user shared environment like the CS network.
To use singularity, first one needs to load it into the environment using the modules system:
module load singularity
To run an image:
singularity run <image>
Where <image> can be a pre-built simg file, or a container built on the fly from a hub (Singularity Hub or Docker Hub), e.g.:
singularity run image.simg
singularity run shub://vsoch/hello-world
singularity run docker://ubuntu
To get a shell inside a singularity image:
singularity shell <image>
To execute a command inside an image (not the default command):
singularity exec <image> <command>
To get help:
singularity help <command>
Singularity doesn't change the user namespace. This means all software runs as the normal user; one cannot run as root or as a different user inside Singularity.
By default the home directory, /tmp, /proc, /sys and /dev filesystems are mounted inside the container. To add another filesystem, use the --bind option, e.g.:
singularity run --bind /cs/labs/<supervisor>/<user> <image>
Or to mount it on a different location inside the image:
singularity run --bind /cs/labs/<supervisor>/<user>:/<somewhere>/<else> <image>
Singularity behaves like a normal shell program and can be piped to and from, e.g.:
cat <somefile> | singularity exec <image> <mycommand> > <output-file>
Building a new Singularity image requires root permissions, so it cannot be done directly on the CS Linux machines. Instead, one can use a Private Virtual Machine, such as rundeb10, to build the image.
To build an image, use the build command:
singularity build <output-image> <source-image>
Where <output-image> is the output image name (make sure you have enough disk space there), and <source-image> is another Singularity image file, a hub URI (shub:// or docker://), or a Singularity recipe file.
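For instance, a minimal Singularity recipe file might look like the following sketch (the package and runscript are illustrative, not a CS-provided recipe):

```
Bootstrap: docker
From: ubuntu

%post
    # Commands run inside the container at build time
    apt-get update && apt-get install -y python3

%runscript
    # Default command executed by "singularity run"
    exec python3 --version
```

Saved as, e.g., Singularity.recipe, it can be built with: singularity build myimage.simg Singularity.recipe (with root permissions, as noted above).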
For example, to build the ubuntu image from the docker hub, inside a virtual machine, one can run:
rundeb10 -user -root -bind /cs/labs/<supervisor>/<user> -run 'singularity build /cs/labs/<supervisor>/<user>/ubuntu.simg docker://ubuntu'
Then, to run it:
singularity run /cs/labs/<supervisor>/<user>/ubuntu.simg
Pre-built singularity images can be found under /cs/containers
Docker is inherently not designed for a multi-user/shared environment. As such it cannot run directly on the CS network. To overcome this there are two options:
Running docker images through Singularity is the preferred method: it retains the low overhead of containers, but it might not work with all docker images.
Singularity supports docker images "out of the box". To run a docker container:
singularity run docker://<container>
To build a singularity image from a docker container:
singularity build /cs/labs/<supervisor>/<user>/<image>.simg docker://<image>
Note: You need to be root to build a singularity image. See Building A Singularity Image.
Running a docker image in a VM adds the overhead of the VM. When using a VM, it is sometimes better to run the software directly in the VM instead of inside docker.
Nonetheless, it is simple enough to run docker inside a VM, e.g.:
rundeb10 -root -run 'docker run <image>'
For example, to build and run the IBM PowerAI image with GPU support:
singularity build ibm-powerai.simg docker://ibmcom/powerai
module load nvidia cuda
singularity shell --nv ibm-powerai.simg
After entering the container's shell, you have to accept the license:
Guide can be found here
To pull containers from the NVIDIA NGC registry (nvcr.io), Singularity needs the NGC credentials. Assuming the API key is located in ~/.ngc:
In bash:
export SINGULARITY_DOCKER_USERNAME='$oauthtoken'
export SINGULARITY_DOCKER_PASSWORD=$(< ~/.ngc)
In tcsh:
setenv SINGULARITY_DOCKER_USERNAME '$oauthtoken'
setenv SINGULARITY_DOCKER_PASSWORD `cat ~/.ngc`
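The bash setup can be sketched as a complete snippet (the key value below is a placeholder for your real NGC API key; '$oauthtoken' is the literal username NGC expects, hence the single quotes):

```shell
# Store the NGC API key (placeholder value) with owner-only permissions.
printf '%s\n' 'my-ngc-api-key' > "$HOME/.ngc"
chmod 600 "$HOME/.ngc"

# '$oauthtoken' is a literal string, not a variable expansion.
export SINGULARITY_DOCKER_USERNAME='$oauthtoken'
# $(< file) reads the file contents, stripping the trailing newline.
export SINGULARITY_DOCKER_PASSWORD=$(< "$HOME/.ngc")
```

With these variables set, singularity build/pull against docker://nvcr.io/... will authenticate automatically.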
Change to a desired directory and fetch a container (for example, the TensorFlow container):
singularity build <container name>.simg docker://nvcr.io/nvidia/tensorflow:<container's tag>
Run a shell in the container (--nv passes the NVIDIA environment):
singularity shell --nv <container name>.simg
Run the container:
singularity run --nv <container name>.simg
Run a command in the container:
singularity exec --nv <container name>.simg <command_to_run>