4. Software on Spider

Tip

There are several ways to set up and use your software on Spider. On this page you will learn:

  • what our policy is for system-wide software installations
  • how to install your software on the local filesystem
  • how to use containerization to make your software portable across different platforms
  • how to distribute your software across different platforms

4.1. System software

There are cases in which a user or project may need extra software that is not included on the Spider system. This may require the installation of a new software tool (e.g., the emacs editor) or a specific version of a software component that is already on Spider (e.g., gcc 9). As a user of this platform you are free to submit a request to our helpdesk to ask us to install software required for your project system-wide. Requests will be evaluated case by case, but in general the following policy applies:

  • Stand-alone applications that are easily available through the official RPM repositories (CentOS, EPEL, …) are suitable to be installed system-wide. Some examples are emacs, joe, jq, …
  • Alternative versions of core tools (e.g., Python 3.6, gcc 9, …) will have to be evaluated case by case. We will accept requests for software that can be deployed using well-defined and automated procedures.

The standard supported login shell on Spider is bash. The standard supported software setup is identical on all nodes. Basic Unix functionality is installed system-wide:

  • software compilers (e.g., gcc, g++, f95)
  • editors (e.g., vi, vim, emacs, nano and edit)
  • graphical tools, supported via X11 SSH forwarding on the login node
  • the operating system (CentOS 8) on login and worker nodes

4.2. User installed software

Software that does not require root privileges can be set up in user space. Software installed on the local CephFS filesystem is available to your jobs on all nodes, because these directories are globally mounted.

4.2.1. Software on home

Your home directory can be used to install software that you do not want to share with other members of your Spider project. This software can be placed in /home/$USER.
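As a minimal sketch, a typical from-source installation into your home directory could look as follows; the package name, version and download URL are hypothetical placeholders:

# Download, build and install a tool from source into your home directory (no root privileges needed)
cd /home/$USER
wget https://example.org/mytool-1.0.tar.gz
tar -xzf mytool-1.0.tar.gz
cd mytool-1.0
./configure --prefix=/home/$USER/.local
make && make install
# Make the tool available on your PATH
export PATH=/home/$USER/.local/bin:$PATH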

4.2.2. Software on project space

For software that you want to install and share with other project members, we advise you to use the /project/[PROJECTNAME]/Software directory in your project spaces.

Only dedicated software managers have permissions to write in this directory, and all members in the project have read and execute permissions in this space.

The project members can use the software installed by the software manager simply by exporting the right path in $HOME/.bashrc, as shown in the sketch below.
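For example, assuming the software manager installed a tool under /project/[PROJECTNAME]/Software/mytool (a hypothetical location), each project member could add the following line to $HOME/.bashrc:

# Make the project-wide installation available on your PATH (hypothetical tool location)
export PATH=/project/[PROJECTNAME]/Software/mytool/bin:$PATH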

Take a look at the examples below for installing miniconda and samtools in the project space software directory by building from source without root privileges:
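The sketch below outlines such an installation; the download URLs, the samtools version and the target directories are examples and may need to be adjusted for your project:

# Install miniconda into the project space software directory (no root privileges needed)
cd /project/[PROJECTNAME]/Software
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -p /project/[PROJECTNAME]/Software/miniconda3

# Build samtools from source and install it into the project space software directory
wget https://github.com/samtools/samtools/releases/download/1.9/samtools-1.9.tar.bz2
tar -xjf samtools-1.9.tar.bz2
cd samtools-1.9
./configure --prefix=/project/[PROJECTNAME]/Software/samtools
make && make install

Project members can then pick up these installations by exporting the corresponding bin directories in their $HOME/.bashrc, as described above.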

4.3. Singularity containers

On Spider we support Singularity. Singularity is a container solution for building software stacks in the form of images. Singularity enables these images to be run in user space. We do not provide a space for building Singularity images, but we do support the execution of these images by users on Spider.

The currently supported version of Singularity on Spider can be found by typing singularity --version on the command line after logging in to the system. Additional information can be found on the Singularity SURFsara page, and more generic information can be found in the Sylabs documentation.

4.3.1. Upload your image

Your Singularity image can be viewed as a single file containing all the necessary software for your purpose. Compared to traditionally compiled software, it is similar to a binary file containing the executable. The image can be placed anywhere on Spider, as long as the location is accessible to your processing jobs. However, we strongly recommend that you place your Singularity images in one of the dedicated locations for user space software described on the User installed software page.
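For example, an image built on your own machine can be copied to Spider with scp; in this sketch [SPIDER_LOGIN_NODE] is a placeholder for the Spider login node address:

# Copy a locally built Singularity image to the project software directory on Spider
scp my-singularity-python-image.simg [USERNAME]@[SPIDER_LOGIN_NODE]:/project/[PROJECTNAME]/Software/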

4.3.2. Singularity in batch jobs

Regular commands and Singularity-based commands are very similar. In many cases you simply add singularity exec in front of the commands to be executed within your job submission script. However, please note that in some cases you may also need to use directory binding via the --bind option (see Binding directories). Below we provide an example comparing a regular command in a job with a Singularity command.

  • Regular job on Spider:
#!/bin/bash
#SBATCH -n 1
#SBATCH -t 10:00
#SBATCH -c 1
echo "Hello I am running a regular command using the python version installed on the host system"
echo "I am running on " $HOSTNAME
python /home/[USERNAME]/hello_world.py
  • Singularity command on Spider (in this example the image is placed in the home directory of the user):
#!/bin/bash
#SBATCH -n 1
#SBATCH -t 10:00
#SBATCH -c 1
echo "Hello I am running a singularity command using the my own python version installed in my image"
echo "I am running on " $HOSTNAME
singularity exec --pwd $PWD /home/[USERNAME]/my-singularity-python-image.simg python /home/[USERNAME]/hello_world.py

Please note that the --pwd $PWD option is recommended. By default, Singularity makes the current working directory within the container the same as on the host system (Spider), but this path is not always available inside the container. When resolving the current working directory, Singularity looks up the physical absolute path (see man pwd for more info). However, some directories on Spider may be symbolic links, and the current working directory would then resolve differently than expected. This could result in your files not being where you expect them to be (combined with some warning messages).
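To check whether your current working directory is reached through a symbolic link, you can compare the logical and physical paths on the login node:

# Logical path, as you navigated to it (possibly through a symbolic link)
pwd -L
# Physical path with all symbolic links resolved; if this differs from the logical path,
# pass --pwd $PWD so the container starts in the directory you expect
pwd -P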

4.3.3. Binding directories

By default, Singularity does not see the entire directory structure on Spider. This is because the file system overlap between the host system and the image is only partial. Additional directories can be made available by the user in several ways:

  1. Create the directories within the image, see e.g. Singularity SURFsara (note that this requires sudo rights and thus needs to be done outside of Spider)
  2. Bind new directories at the time of execution via the --bind option. For binding directories it is only necessary to specify the top directory.

Below we provide an example for binding the cvmfs directory. This is necessary if your Singularity image is distributed via Softdrive.

  • Singularity command on Spider (in this example the image is placed in the Softdrive directory):
#!/bin/bash
#SBATCH -n 1
#SBATCH -t 10:00
#SBATCH -c 1
echo "Hello I am running a singularity command using the software installed in my image on Softdrive"
echo "I am running on " $HOSTNAME
singularity exec --bind /cvmfs --pwd $PWD /cvmfs/softdrive.nl/[USERNAME]/my-singularity-image.simg python /home/[USERNAME]/hello_world.py

Please note that it is possible to bind several directories by providing a comma-separated list to the --bind option, e.g. --bind /cvmfs,/project. Additional information can be found in the Sylabs documentation.

4.4. Softdrive

With Softdrive SURFsara it is possible to install your software in a central place and distribute it automagically across any compute cluster, including Spider. Simply put, systems with CernVM-FS installed have instant access to the Softdrive SURFsara software repositories via the command line. This is very handy when you work on multiple platforms, because it solves the problem of installing and maintaining the software in different places.

Access to Softdrive is not provided by default to Spider projects. To request Softdrive access, please contact our helpdesk.

If you already have access to Softdrive, then you can use it directly from Spider, simply by exporting the /cvmfs/softdrive.nl/$USER software paths in your Spider scripts or your .bashrc file.

On Spider nodes, your Softdrive files will be available under:

/cvmfs/softdrive.nl/[SOFTDRIVE_USERNAME]/

Please note that your [SOFTDRIVE_USERNAME] may differ from your [SPIDER_USERNAME].
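For example, assuming you installed a tool under a mytool directory in your Softdrive space (a hypothetical location), you could add the following line to your job scripts or $HOME/.bashrc:

# Make software distributed via Softdrive available on your PATH (hypothetical tool location)
export PATH=/cvmfs/softdrive.nl/[SOFTDRIVE_USERNAME]/mytool/bin:$PATH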

See also

Still need help? Contact our helpdesk