# ARCC Configuration
The Advanced Research Computing Center (ARCC) at the University of Wyoming (UW) has been set up to allow its users to run Nextflow with Singularity.
## Getting Started
First, you will need an account on ARCC. If you already have an account, skip ahead; otherwise, please continue. To get an account, you will need to be a Principal Investigator (PI) or student at UW, or be sponsored by a UW PI. To learn more, please visit ARCC - HPC Account Requests.
With an account in hand, you are ready to proceed.
## Running Nextflow
Please consider making use of `screen` or `tmux` before launching your Interactive Job. This will allow you to resume it later.
When using Nextflow on ARCC, it is recommended that you launch Nextflow as an Interactive Job on one of the compute nodes, instead of the login nodes. To do this, you will use the `salloc` command to launch an Interactive Job.
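For example, a minimal sketch of launching an Interactive Job (the account name, time limit, and resource requests below are placeholders; adjust them to your project and workload):

```bash
# Optional: start a tmux session first so you can resume later
tmux new -s nextflow

# Request an interactive session on a compute node
# (replace <project> with your ARCC project/account name)
salloc --account=<project> --time=12:00:00 --cpus-per-task=4 --mem=16G
```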
Once you are on a compute node, you can then use the `module` command to load Conda and/or Singularity.
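For example (the exact module names and versions on ARCC may differ; use `module avail` or `module spider` to check what is available):

```bash
# Load Conda and Singularity on the compute node
module load miniconda3
module load singularity
```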
### Creating a Nextflow environment
As an ARCC user, you may have noticed there is already a module for Nextflow. However, it may be out of date or limited to a single version. All nf-core pipelines have minimum Nextflow version requirements, so it's easier to create a Nextflow environment, as this will ensure you have the latest available Nextflow version.
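A minimal sketch of creating such an environment, assuming Conda has been loaded via `module` and using the Bioconda `nextflow` package:

```bash
# Create a dedicated environment with the latest available Nextflow
conda create -n nextflow -c bioconda -c conda-forge nextflow

# Activate it and confirm the version
conda activate nextflow
nextflow -version
```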
## Environment Variables
When using Nextflow on ARCC, you will need to set a few environment variables.
### `NXF_SINGULARITY_CACHEDIR`
This is a Nextflow-specific environment variable that tells Nextflow where downloaded Singularity images are stored, or should be downloaded to.
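For example (the path below is illustrative; choose a directory in your project or scratch space):

```bash
# Store downloaded Singularity images in a single shared location
export NXF_SINGULARITY_CACHEDIR=/project/<project>/singularity
```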
### `SBATCH_ACCOUNT`
The `SBATCH_ACCOUNT` environment variable will be used by Nextflow to inform SLURM which account the job should be submitted under.
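For example (replace `<project>` with your SLURM account name):

```bash
export SBATCH_ACCOUNT=<project>
```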
## Available Partitions
At the moment, only the CPU-based partitions are available from this config. In the event a GPU partition is needed, please reach out. The GPU partitions require additional arguments that will need to be added.
The available partitions include:
- `beartooth`
- `beartooth-bigmem`
- `beartooth-hugemem`
- `moran`
- `moran-bigmem`
- `moran-hugemem`
- `teton`
- `teton-cascade`
- `teton-hugemem`
- `teton-massmem`
- `teton-knl`
Please see Beartooth Hardware Summary Table for the full list of partitions.
## Specifying a Partition
Each partition is provided as a separate Nextflow profile, so you will need to pick a specific partition to submit jobs to. When specifying one of the available partitions, replace the `-` (dash) with an underscore (`_`).
For example, to use `beartooth`, you would provide the following:
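A sketch, assuming the partition profile is combined with the institutional `arcc` profile in the usual nf-core fashion (`<pipeline>` is a placeholder):

```bash
nextflow run nf-core/<pipeline> -profile arcc,beartooth
```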
To use `beartooth-bigmem`, you would provide:
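Under the same assumption, note the underscore in the profile name:

```bash
nextflow run nf-core/<pipeline> -profile arcc,beartooth_bigmem
```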
## Example: Running nf-core/fetchngs
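As an illustrative sketch (the input file and output directory are placeholders; `ids.csv` would contain the accessions to fetch, and the `arcc,beartooth` profile assumption from above applies):

```bash
nextflow run nf-core/fetchngs \
    -profile arcc,beartooth \
    --input ids.csv \
    --outdir results
```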
If everything is successful, you will be met with the standard Nextflow launch output as the pipeline starts and jobs begin submitting to SLURM.