Define where the pipeline should find input data and save output data.

Path to comma-separated file containing information about samples.

required
type: string
pattern: ^\S+\.[ct]sv$

This CSV has one row per sample and contains information such as the location of input files, sample IDs, labels, etc. Use this parameter to specify its location. See the documentation for details on formatting this file.

Path to comma-separated file containing information about references.

type: string
pattern: ^\S+\.[ct]sv$

This CSV has one row per reference and contains information such as the location of input files, reference IDs, labels, etc. Use this parameter to specify its location. See the documentation for details on formatting this file.

The output directory where the results will be saved. You must use absolute paths to storage if running on cloud infrastructure.

required
type: string

The location to save temporary files for processes. This is only used for some processes that produce large temporary files such as PICARD_SORTSAM.

type: string

The location to save downloaded files for later use. This is separate from the cached data (usually stored in the 'work' directory), so that the cache can be cleared without having to repeat many large downloads.

type: string
default: path_surveil_data

Email address for completion summary.

type: string
pattern: ^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$

Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (~/.nextflow/config) then you don't need to specify this on the command line for every run.
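As noted above, this can be set once in your user config file rather than on every command line. A minimal sketch (the address is a placeholder):

```groovy
// ~/.nextflow/config
params {
    email = 'you@example.com'  // placeholder; use your own address
}
```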

MultiQC report title. Printed as page header, used for filename if not otherwise specified.

type: string

The path to the Bakta database folder. This or --download_bakta_db must be included.

type: string

Download the database required for running Bakta. This or --bakta_db must be included. Note that this will download gigabytes of data, so if you are planning to do repeated runs without --resume, it is better to download the database manually according to the Bakta documentation and specify it with --bakta_db.
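The two options are mutually exclusive, so a run-specific config file would set one or the other. A minimal sketch (the database path is hypothetical):

```groovy
// Either point to an existing, manually downloaded database...
params.bakta_db = '/path/to/bakta/db'  // hypothetical path

// ...or let the pipeline download one (omit bakta_db in that case):
// params.download_bakta_db = true
```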

type: boolean

Which type of Bakta database to download. Must be 'light' (~2 GB) or 'full' (~40 GB).

type: string
default: light
pattern: light|full

Which type of caching to perform. Possible values are 'lenient', 'deep', 'true', and 'false'. 'lenient' caching does not take file modification times into account and 'deep' takes file content into account. See https://www.nextflow.io/docs/latest/process.html#process-cache for more information.

type: string
default: true
pattern: lenient|deep|false|true
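These values correspond to Nextflow's process-level `cache` directive (see the linked docs); the equivalent raw Nextflow setting would be, as a sketch:

```groovy
// nextflow.config — equivalent raw Nextflow setting
process.cache = 'lenient'  // or 'deep', true, false
```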

Parameters that modify the analysis done by the pipeline.

Maximum depth of reads to be used for all analyses. Samples with more reads are subsampled to this depth.

type: number
default: 100

When selecting references automatically, only consider references with names that appear to be standard Latin binomials (i.e. no numbers or symbols in the first two words).

type: boolean

The maximum number/percentage of references representing unique subspecies to download from RefSeq for each sample. Samples with similar initial identifications will usually use the same references, so the total number of references downloaded for a group of samples will depend on the taxonomic diversity of the samples.

type: number
default: 30

The maximum number/percentage of references representing unique species to download from RefSeq for each sample. Samples with similar initial identifications will usually use the same references, so the total number of references downloaded for a group of samples will depend on the taxonomic diversity of the samples.

type: number
default: 20

The maximum number/percentage of references representing unique genera to download from RefSeq for each sample. Samples with similar initial identifications will usually use the same references, so the total number of references downloaded for a group of samples will depend on the taxonomic diversity of the samples.

type: number
default: 10

The number of references most similar to each sample, based on estimated ANI, to include in phylogenetic analyses.

type: number
default: 3

Same as the 'n_ref_closest' option, except that it only applies to references with what appear to be standard Latin binomial names (i.e. two words with no numbers or symbols). This is intended to ensure that a reference with an informative name is present even if it is not the most similar.

type: number
default: 2

The number of references representing the entire range of ANI relative to each sample. These are meant to provide context for more similar references. For a group of samples, the fewest total references will be selected that satisfy this count for each sample.

type: number
default: 7

The minimum number of genes needed to conduct a core gene phylogeny. Samples and references will be removed (as allowed by the min_core_samps and min_core_refs options) until this minimum is met.

type: number
default: 10

The maximum number of genes used to conduct a core gene phylogeny.

type: number
default: 200

The minimum ANI between a sample and potential reference for that reference to be used for mapping reads from that sample. To force all the samples in a report group to use the same reference, set this value very low.

type: number
default: 0.85

Parameters used to describe centralised config profiles. These should not be edited.

Git commit id for Institutional configs.

hidden
type: string
default: master

Base directory for Institutional configs.

hidden
type: string
default: https://raw.githubusercontent.com/nf-core/configs/master

If you're running offline, Nextflow will not be able to fetch the institutional config files from the internet. If you don't need them, then this is not a problem. If you do need them, you should download the files from the repo and tell Nextflow where to find them with this parameter.
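A minimal sketch of pointing the pipeline at a local copy, assuming the standard nf-core parameter name `custom_config_base` (the path is hypothetical):

```groovy
// Hypothetical local checkout of https://github.com/nf-core/configs
params.custom_config_base = '/path/to/nf-core/configs'
```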

Institutional config name.

hidden
type: string

Institutional config description.

hidden
type: string

Institutional config contact information.

hidden
type: string

Institutional config URL link.

hidden
type: string

Set the top limit for requested resources for any single job.

Maximum number of CPUs that can be requested for any single job.

hidden
type: integer
default: 16

Use to set an upper-limit for the CPU requirement for each process. Should be an integer e.g. --max_cpus 1

Maximum amount of memory that can be requested for any single job.

hidden
type: string
default: 64.GB
pattern: ^\d+(\.\d+)?\.?\s*(K|M|G|T)?B$

Use to set an upper-limit for the memory requirement for each process. Should be a string in the format integer-unit e.g. --max_memory '8.GB'

Maximum amount of time that can be requested for any single job.

hidden
type: string
default: 240.h
pattern: ^(\d+\.?\s*(s|m|h|day)\s*)+$

Use to set an upper-limit for the time requirement for each process. Should be a string in the format integer-unit e.g. --max_time '2.h'

Maximum number of CPUs that can be requested for all jobs combined. Should be an integer e.g. --max_total_cpus 1. Only applies if running the pipeline in a personal computer.

hidden
type: integer

Use to set an upper-limit for the CPU requirement for all jobs combined. Should be an integer e.g. --max_total_cpus 1

Maximum amount of memory that can be requested for all jobs combined. Should be a string in the format integer-unit e.g. --max_total_memory '8.GB'. Only applies if running the pipeline in a personal computer.

hidden
type: string
pattern: ^\d+(\.\d+)?\.?\s*(K|M|G|T)?B$

Use to set an upper-limit for the memory requirement for all jobs combined. Should be a string in the format integer-unit e.g. --max_total_memory '8.GB'

Maximum number of jobs that can run at once. Should be an integer e.g. --max_total_jobs 1

hidden
type: integer

Use to set an upper-limit for the jobs to schedule at once. Should be an integer e.g. --max_total_jobs 1

Less common options for the pipeline, typically set in a config file.

Display version and exit.

hidden
type: boolean

Method used to save pipeline results to output directory.

hidden
type: string

The Nextflow publishDir option specifies which intermediate files should be saved to the output directory. This option tells the pipeline what method should be used to move these files. See Nextflow docs for details.

Designates which files are copied from the work/ directory.

type: string

Sets publishDir mode for individual files. Storage footprint of the pipeline can be quite large, and files can be saved twice: both within the work/ directory and within the published output directory. By default, this parameter is set so that intermediate files will be linked from the published directory to their location in the work/ directory instead of being stored twice.
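A minimal config sketch, assuming the standard nf-core parameter name `publish_dir_mode` (the actual name may differ in this pipeline):

```groovy
// Values map to Nextflow publishDir modes:
params.publish_dir_mode = 'copy'  // e.g. 'symlink', 'link', 'copy', 'move'
```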

Email address for completion summary, only when pipeline fails.

hidden
type: string
pattern: ^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$

An email address to send a summary email to when the pipeline is completed - ONLY sent if the pipeline does not exit successfully.

Send plain-text email instead of HTML.

hidden
type: boolean

File size limit when attaching MultiQC reports to summary emails.

hidden
type: string
default: 25.MB
pattern: ^\d+(\.\d+)?\.?\s*(K|M|G|T)?B$

Do not use coloured log outputs.

hidden
type: boolean

Incoming hook URL for messaging service

hidden
type: string

Incoming hook URL for messaging service. Currently, MS Teams and Slack are supported.

Custom config file to supply to MultiQC.

hidden
type: string

Custom logo file to supply to MultiQC. The file name must also be set in the MultiQC config file.

hidden
type: string

Custom MultiQC yaml file containing HTML including a methods description.

type: string

Directory to keep pipeline Nextflow logs and reports.

hidden
type: string
default: ${params.outdir}/pipeline_info

Whether to validate parameters against the schema at runtime.

hidden
type: boolean
default: true

Show all params when using --help

hidden
type: boolean

By default, parameters set as hidden in the schema are not shown on the command line when a user runs with --help. Specifying this option will tell the pipeline to show all parameters.

Run this workflow with Conda. You can also use '-profile conda' instead of providing this parameter.

hidden
type: boolean

Name of queue in HPC environment to run jobs.

hidden
type: string

Base URL or local path to the location of pipeline test dataset files.

hidden
type: string
default: https://raw.githubusercontent.com/nf-core/test-datasets/