
Singularity

Singularity is an open source container platform designed to be simple, fast, and secure. It allows unprivileged users to run untrusted containers in a trusted way. It has been designed with reproducibility and HPC workloads in mind.

It is available as an installed package on CREATE.

Take note

Singularity is only accessible from the compute nodes. To use Singularity and avoid the memory limitations of the login nodes, please request an interactive session:

k1234567@erc-hpc-login:~$ srun -p cpu --pty /bin/bash
k1234567@erc-hpc-comp1:~$ singularity --help

Usage

You can download containers using the singularity pull command from the Singularity Container Library:

singularity pull library://sylabsed/examples/lolcow

or from Docker Hub:

singularity pull docker://godlovedc/lolcow

Alternatively, you can build your own containers. Please see the Singularity documentation for more information.
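
As a minimal sketch, a definition file for a simple container could look like the following; the Ubuntu base image, the packages installed in %post, and the example.def filename are illustrative only:

# example.def - illustrative definition file
Bootstrap: docker
From: ubuntu:22.04

%post
    # commands run inside the container at build time
    apt-get update && apt-get install -y python3

%runscript
    # default command executed by "singularity run"
    exec python3 --version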

Warning

Using third-party containers is a great way to get started, but you need to make sure that a container behaves as expected before using it in your work.
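
For example, you could inspect the image metadata and run a quick test command before relying on it; the lolcow_latest.sif filename below assumes the lolcow image pulled above:

# show the labels and build metadata recorded in the image
singularity inspect ./lolcow_latest.sif
# run a simple command and check that the output is what you expect
singularity exec ./lolcow_latest.sif cowsay "container check"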

Running containers

Containers can be used like any other application via an interactive shell or a batch job. You can run the container

singularity run ./container.sif

execute a command

singularity exec ./container.sif python hello.py

or start a shell

singularity shell ./container.sif
Singularity container.sif:~>

If you want a container to have access to the GPU, for example for TensorFlow or PyTorch workloads, remember to add the --nv flag when running the above commands, e.g.

singularity shell --nv ./gpu_enabled_container.sif
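
The same commands can be used from a batch job. The following is a minimal sketch of a Slurm script that runs a GPU-enabled container; the partition name, GPU request and train.py script are assumptions that you should adapt to your own work:

#!/bin/bash
#SBATCH --partition=gpu   # assumed GPU partition name - adjust for your allocation
#SBATCH --gres=gpu:1      # request a single GPU
#SBATCH --time=01:00:00

# run a script inside the container with GPU support enabled via --nv
singularity exec --nv ./gpu_enabled_container.sif python train.py

Submit the script with sbatch in the usual way.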

Alternative cache location

File caching during the initial image build process is written by default to your home directory: /users/k1234567/.singularity/cache. Setting the following Singularity environment variable redirects caching to a larger scratch location where more free space is available to you:

mkdir -p /scratch/users/k1234567/singularity/cache
export SINGULARITY_CACHEDIR=/scratch/users/k1234567/singularity/cache

It is useful to make this alternative location more permanent by setting the above environment variable in your /users/k1234567/.bashrc file, or at least including it in your batch scripts or interactive sessions. This ensures your cached image files are re-used, so that you avoid unnecessarily consuming your allocated resources by re-downloading images that are not present in your default cache location.
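
For example, appending the following lines to your /users/k1234567/.bashrc (or to the top of your batch scripts) keeps the cache location set across sessions:

# keep the Singularity cache on scratch rather than in your home directory
export SINGULARITY_CACHEDIR=/scratch/users/k1234567/singularity/cache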

Setting a temporary directory

A temporary working space is required when pulling and running your containers. By default the TMPDIR environment variable is used. However, setting the overriding SINGULARITY_TMPDIR variable can help avoid insufficient write space or improperly mounted paths in the container. The following can be included in your batch scripts, or executed when running interactively, before any relevant singularity command:

export SINGULARITY_TMPDIR=/scratch/users/k1234567/$SLURM_JOB_ID/tmp
mkdir -p $SINGULARITY_TMPDIR

Unlike SINGULARITY_CACHEDIR, which is intended for persistent storage, the above non-default tmp location can be deleted after your job has completed.
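
For example, at the end of a batch script you could remove the per-job temporary directory once all singularity commands have finished:

# clean up the per-job temporary directory created earlier in the script
rm -rf "$SINGULARITY_TMPDIR"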

Setting bind paths

Certain directories that exist on the HPC, such as /scratch/prj or /scratch/users/, do not automatically exist within the context of your containers, leading to errors such as: can't open file '/scratch/users/k1234567/data': [Errno 2] No such file or directory. The --bind option must be set to make such directories and their contents available to your containerised application:

singularity exec --bind /scratch/users/k1234567/project:/my-project ./container.sif python /my-project/hello.py

This mounts the host directory /scratch/users/k1234567/project to the /my-project bind path within your container.
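
Multiple paths can be bound by passing a comma-separated list to --bind. As a sketch, the same binds can also be set once via the SINGULARITY_BIND environment variable so they apply to subsequent singularity commands; the paths below are illustrative:

# bind two host directories into the container in a single command
singularity exec --bind /scratch/users/k1234567/project:/my-project,/scratch/prj:/prj ./container.sif python /my-project/hello.py

# alternatively, export the binds once for later singularity commands
export SINGULARITY_BIND="/scratch/users/k1234567/project:/my-project,/scratch/prj:/prj"
singularity exec ./container.sif python /my-project/hello.py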

Using the Remote Builder

Singularity offers greater flexibility with permissions on the HPC, as the containerisation tool does not assume that your user has root access to the system. One way to build containers without root access is the remote builder option.

Once you have created an account with Sylabs, you will be able to generate a secret access token in your remote builder portal. With this access token you can then authenticate your remote login from CREATE HPC:

singularity remote login

This will prompt for the generated access token and update the contents of /users/k1234567/.singularity/remote.yaml, configuring your personal HPC Singularity setup to use your new Singularity remote builder account. Once complete, and using your own definition files, you should be able to build your own containers whilst on CREATE with a command similar to the following:

singularity build --remote <container-to-build>.sif <definition-file>.def
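
As a sketch of the full workflow, assuming the illustrative example.def definition file shown earlier and a Sylabs account with a valid access token:

# authenticate against the Sylabs remote builder (prompts for your access token)
singularity remote login
# confirm the remote endpoint is configured and you are logged in
singularity remote list
# build the image remotely, then test it on a compute node
singularity build --remote example.sif example.def
singularity exec ./example.sif python3 --version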