Hasta Configuration

Using the Hasta config profile

Before running the pipeline, Nextflow needs to be installed in the conda environment being used.
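
For example, assuming a standard Bioconda-style channel setup, Nextflow can be installed into the active conda environment with:

# example only: install Nextflow from Bioconda into the active conda environment
conda install -c conda-forge -c bioconda nextflow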

To use, run the pipeline with -profile hasta (one hyphen). This will download and launch hasta.config, which has been pre-configured with a setup suitable for the hasta servers and enables Nextflow to manage the pipeline jobs via the Slurm job scheduler. With this profile, Docker images containing the required software will be downloaded and converted to Singularity images if needed before the pipeline is executed.
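
A minimal launch command might look like the following sketch (the pipeline name, samplesheet and output directory are placeholders; the exact parameters depend on the pipeline being run):

# example only: launch an nf-core pipeline on hasta; Slurm submission is handled by the profile
nextflow run nf-core/<pipeline> -profile hasta --input <samplesheet.csv> --outdir <outdir>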

Recent versions of Nextflow also support the environment variable NXF_SINGULARITY_CACHEDIR, which can be used to supply pre-built Singularity images. A use case: set NXF_SINGULARITY_CACHEDIR=/path/to/images; export NXF_SINGULARITY_CACHEDIR before running the pipeline.
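
A sketch of the same idea in a single session, assuming the images have already been downloaded to the cache directory (pipeline name and parameters are placeholders as above):

# example only: reuse pre-downloaded Singularity images instead of pulling them again
export NXF_SINGULARITY_CACHEDIR=/path/to/images
nextflow run nf-core/<pipeline> -profile hasta --input <samplesheet.csv> --outdir <outdir>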

Development and production config

Each user on hasta has a priority based on their allocated team, either development or production. To enable this when submitting a job to Slurm, run with -profile hasta,dev_prio or -profile hasta,prod_prio. This overrides certain parts of the config and submits the job with the corresponding priority.
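
For example (pipeline name and remaining parameters are placeholders):

# example only: submit with development or production priority
nextflow run nf-core/<pipeline> -profile hasta,dev_prio --input <samplesheet.csv> --outdir <outdir>
nextflow run nf-core/<pipeline> -profile hasta,prod_prio --input <samplesheet.csv> --outdir <outdir>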

Config file

See the config file on GitHub:

hasta.config
// Profile config names for nf-core/configs
params {
    config_profile_description   = 'Hasta, a local cluster setup at Clinical Genomics, Stockholm.'
    config_profile_contact       = 'Clinical Genomics, Stockholm'
    config_profile_url           = 'https://github.com/Clinical-Genomics'
    priority                     = null
    clusterOptions               = null
    schema_ignore_params         = "priority,clusterOptions"
    validationSchemaIgnoreParams = "priority,clusterOptions,schema_ignore_params"
}
 
singularity {
    enabled      = true
    envWhitelist = ['_JAVA_OPTIONS']
}
 
params {
    max_memory = 180.GB
    max_cpus   = 36
    max_time   = 336.h
}
 
process {
    resourceLimits = [
        memory: 180.GB,
        cpus: 36,
        time: 336.h
    ]
    clusterOptions = { "-A ${params.priority} ${params.clusterOptions ?: ''}" }
}
 
executor {
    name              = 'slurm'
    pollInterval      = '2 min'
    queueStatInterval = '5 min'
    submitRateLimit   = '2 sec'
}
 
profiles {
    stub_prio {
        params {
            priority       = 'development'
            clusterOptions = "--qos=low"
            max_memory     = 6.GB
            max_cpus       = 1
            max_time       = 1.h
        }
 
        process {
            resourceLimits = [
                memory: 6.GB,
                cpus: 1,
                time: 1.h
            ]
        }
    }
 
    dev_prio {
        params {
            priority       = 'development'
            clusterOptions = "--qos=low"
        }
    }
 
    prod_prio {
        params {
            priority       = 'production'
            clusterOptions = "--qos=low"
        }
    }
}