Containers

Containers let users have an isolated environment with a bit more control inside than on the shared CS systems. Containers do not have all the overhead of Private Virtual Machines, but users have less control inside them.

The advantages of using container images instead of installing software directly are portability and simplicity of installation. The disadvantages are the complexity of creating the images and the disk space the images require.

On the CS Linux systems we currently support Singularity. Docker is not directly supported, but it can be used, with some limitations, through Singularity or Virtual Machines.

Singularity

Singularity gives a partially isolated environment. The filesystem is isolated (like chroot), but the rest (network, users, PIDs) is shared with the host machine. This makes Singularity ideal for a multi-user shared environment like the CS network.

Basic Usage

To use singularity, first one needs to load it into the environment using the modules system:

module load singularity

To run an image:

singularity run <image>

Where <image> can be an already-built simg file, or a container built on the fly from a hub (Singularity or Docker). E.g.:

singularity run image.simg
singularity run shub://vsoch/hello-world
singularity run docker://ubuntu

To get a shell inside a singularity image:

singularity shell <image>

To execute a command inside an image (not the default command):

singularity exec <image> <command>

To get help:

singularity help
singularity help <command>

Filesystem access and permissions

Singularity doesn't change the user namespace. This means all software runs as the normal user. One cannot run as root or as a different user inside Singularity.
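A quick way to see this, assuming the image provides the standard whoami and id tools (the image name is a placeholder):

singularity exec <image> whoami   # prints your own username, not root
singularity exec <image> id -u    # prints your own uid from the host system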

By default the home directory, /tmp, /proc, /sys and /dev filesystems are mounted inside the container. To add another filesystem, one can use the --bind option. E.g.:

singularity run --bind /cs/labs/<supervisor>/<user> <image>

Or to mount it on a different location inside the image:

singularity run --bind /cs/labs/<supervisor>/<user>:/<somewhere>/<else> <image>
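Several paths can be bound at once with a comma-separated list. A sketch, with hypothetical paths:

singularity run --bind /cs/labs/<supervisor>/<user>,/cs/dataset/<project>:/data <image>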

Automation

Singularity runs as a normal shell program and can be piped to and from. E.g.:

cat <somefile> | singularity exec <image> <mycommand> > <output-file> 
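For example, a sketch that counts the lines of an input file using wc inside the container (assuming the image provides it; file names are illustrative):

cat input.txt | singularity exec <image> wc -l > line-count.txt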

Building A Singularity Image

Building a new singularity image requires root permissions, so it cannot be done directly on the CS Linux machines. As such, there are two options:

  1. Build it on a personal computer, where you have root access
  2. Build it inside a virtual machine.

On a CS computer, one can use a Private Virtual Machine such as rundeb9 to build the image.

To build an image, use the build command:

singularity build <output-image> <source-image>

Where <output-image> is the output image name (make sure you have enough disk space to write there), and <source-image> is another singularity image file, a hub URI (shub:// or docker://), or a Singularity recipe file.
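A minimal sketch of a recipe (definition) file, assuming an Ubuntu base from the docker hub; the installed package and the runscript are only illustrative:

Bootstrap: docker
From: ubuntu:18.04

%post
    apt-get update && apt-get install -y python3

%runscript
    exec python3 --version

Saved e.g. as myimage.def, it can be passed as the <source-image> argument: singularity build myimage.simg myimage.def (run where you have root, as noted above).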

For example, to build the ubuntu image from the docker hub, inside a virtual machine, one can run:

rundeb9 -user -root -bind /cs/labs/<supervisor>/<user> -run 'singularity build /cs/labs/<supervisor>/<user>/ubuntu.simg docker://ubuntu'

Then, to run it:

singularity run /cs/labs/<supervisor>/<user>/ubuntu.simg

Shared singularity images

Pre-built singularity images can be found under /cs/containers

Limitations

  • Singularity runs as a normal user
    • setuid programs and root will not work inside a singularity image.
    • If a user cannot access a file inside a singularity image due to permissions (inside the image), the image will have to be modified (i.e. rebuilt).
  • Depending on the way docker images were built, they might not work inside singularity.

Additional info

Docker

Docker is inherently not designed for a multi-user/shared environment. As such, it cannot run directly on the CS network. To overcome this, there are two options:

  1. Run it in Singularity
  2. Run it in a Private Virtual Machine

Docker in Singularity

This is the preferred method. It retains the low overhead of containers, but it might not work with all docker images.

Singularity supports docker images "out of the box". To run a docker container:

singularity run docker://<container>

To build a singularity image from a docker container:

singularity build /cs/labs/<supervisor>/<user>/<image>.simg docker://<image>

Note: You need to be root to build a singularity image. See Building A Singularity Image.

Docker in a VM

Running a docker image in a VM adds the overhead of the VM. When using a VM, it is sometimes better to run the software directly there instead of inside docker.

Nonetheless, it's simple enough to run docker inside a VM. E.g.:

rundeb9 -root -run 'docker run <image>'

Containers for PPC64LE platform (for blaise cluster)

Building an image from a docker image

singularity build ibm-powerai.simg docker://ibmcom/powerai

Run image with nvidia and cuda libraries

 module load nvidia cuda
 singularity shell --nv ibm-powerai.simg

After entering the container's shell, you have to accept the license:

 PATH=/opt/anaconda/envs/wmlce/bin/:${PATH}
 export CONDA_PREFIX=/opt/anaconda/envs/wmlce
 accept-powerai-license.sh
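Once the license is accepted, a quick sanity check, assuming the wmlce environment in this image ships Python with the PowerAI frameworks (e.g. TensorFlow) installed:

 python -c 'import tensorflow as tf; print(tf.__version__)'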

Containers from Nvidia NGC (X86 platform only)

A guide can be found at https://docs.nvidia.com/ngc/ngc-user-guide/singularity.html

  1. Create an account on the Nvidia site: https://ngc.nvidia.com/signup
  2. Create an API key: https://ngc.nvidia.com/setup/api-key
  3. Find the desired container's tag (the containers are listed under https://ngc.nvidia.com/catalog/containers)

Assuming the API key is stored in ~/.ngc, set the registry credentials:

bash:

export SINGULARITY_DOCKER_USERNAME='$oauthtoken'
export SINGULARITY_DOCKER_PASSWORD=$(< ~/.ngc)

csh:

setenv SINGULARITY_DOCKER_USERNAME '$oauthtoken'
setenv SINGULARITY_DOCKER_PASSWORD `cat ~/.ngc`
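If the key file does not exist yet, a sketch for creating it with restricted permissions (paste your actual API key in place of the placeholder):

 touch ~/.ngc
 chmod 600 ~/.ngc
 echo '<your API key>' > ~/.ngc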

Change to the desired directory and fetch the container (for example, the tensorflow container):

singularity build <container name>.simg docker://nvcr.io/nvidia/tensorflow:<container's tag>

Run a shell in the container (--nv passes the Nvidia environment into the container):

singularity shell --nv <container name>.simg

Run the container:

singularity run --nv <container name>.simg

Run a command inside the container:

singularity exec --nv <container name>.simg <command_to_run>
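For example, assuming the tensorflow container fetched above, a sketch that prints the TensorFlow version from inside it:

singularity exec --nv <container name>.simg python -c 'import tensorflow as tf; print(tf.__version__)'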