nf-core/rnafusion
RNA-seq analysis pipeline for detection of gene-fusions
Version 1.2.0 (the latest stable release is 3.0.2).
Introduction
Nextflow handles job submissions on SLURM or other environments, and supervises running the jobs. Thus the Nextflow process must run until the pipeline is finished. We recommend that you put the process running in the background through screen/tmux or a similar tool. Alternatively, you can run Nextflow within a cluster job submitted to your job scheduler.
It is recommended to limit the Nextflow Java virtual machine's memory. We recommend adding the following line to your environment (typically in ~/.bashrc or ~/.bash_profile):
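A typical setting (the exact heap sizes below are only a suggestion and can be adapted to your system):

```bash
NXF_OPTS='-Xms1g -Xmx4g'
```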
Running the pipeline
The typical command for running the pipeline is as follows.
Running the pipeline using Docker
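A minimal sketch of such a command (the read path and reference directory are placeholders to adapt to your data):

```bash
nextflow run nf-core/rnafusion -profile docker \
    --reads '/path/to/reads/*_R{1,2}.fastq.gz' \
    --genomes_base '/path/to/references'
```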
Running the pipeline using Singularity
If the nextflow download script crashes (network issue), please use the bash script instead. The command below will launch the pipeline using Singularity.
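For example, analogous to the Docker command above (paths are placeholders):

```bash
nextflow run nf-core/rnafusion -profile singularity \
    --reads '/path/to/reads/*_R{1,2}.fastq.gz' \
    --genomes_base '/path/to/references'
```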
Running specific tools
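Each fusion detection tool is switched on with its own flag (see Tool flags below). As an illustrative sketch, a run restricted to Arriba and STAR-Fusion could look like:

```bash
nextflow run nf-core/rnafusion -profile docker \
    --reads '/path/to/reads/*_R{1,2}.fastq.gz' \
    --genomes_base '/path/to/references' \
    --arriba --star_fusion
```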
Note that the pipeline will create the following files in your working directory:
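In broad terms these are (the results location is configurable with --outdir):

```
work            # Directory containing the Nextflow working files
results         # Finished results (configurable, see --outdir)
.nextflow.log   # Log file from Nextflow
# Other Nextflow hidden files, e.g. history of pipeline runs and old logs
```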
Updating the pipeline
When you run the above command, Nextflow automatically pulls the pipeline code from GitHub and stores it as a cached version. When running the pipeline after this, it will always use the cached version if available - even if the pipeline has been updated since. To make sure that you’re running the latest version of the pipeline, make sure that you regularly update the cached version of the pipeline:
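The cached copy is refreshed with Nextflow's built-in pull command:

```bash
nextflow pull nf-core/rnafusion
```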
Reproducibility
It’s a good idea to specify a pipeline version when running the pipeline on your data. This ensures that a specific version of the pipeline code and software are used when you run your pipeline. If you keep using the same tag, you’ll be running the same version of the pipeline, even if there have been changes to the code since.
First, go to the nf-core/rnafusion releases page and find the latest version number - numeric only (eg. 1.3.1
). Then specify this when running the pipeline with -r
(one hyphen) - eg. -r 1.3.1
.
This version number will be logged in reports when you run the pipeline, so that you’ll know what you used when you look back in the future.
Main arguments
-profile
Use this parameter to choose a configuration profile. Profiles can give configuration presets for different compute environments.
Several generic profiles are bundled with the pipeline which instruct the pipeline to use software packaged using different methods (Docker, Singularity, Conda) - see below.
We highly recommend the use of Docker or Singularity containers for full pipeline reproducibility, however when this is not possible, Conda is also supported.
The pipeline also dynamically loads configurations from https://github.com/nf-core/configs when it runs, making multiple config profiles for various institutional clusters available at run time. For more information and to see if your system is available in these configs please see the nf-core/configs documentation.
Note that multiple profiles can be loaded, for example: -profile test,docker
- the order of arguments is important!
They are loaded in sequence, so later profiles can overwrite earlier profiles.
If -profile
is not specified, the pipeline will run locally and expect all software to be installed and available on the PATH
. This is not recommended.
- docker: A generic configuration profile to be used with Docker. Pulls software from DockerHub: nfcore/rnafusion
- singularity: A generic configuration profile to be used with Singularity. Pulls software from DockerHub: nfcore/rnafusion
- test: A profile with a complete configuration for automated testing. Includes links to test data so needs no other parameters.
--reads
Use this to specify the location of your input FastQ files. For example:
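A placeholder path illustrating the required pattern:

```bash
--reads 'path/to/data/sample_*_{1,2}.fastq'
```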
Please note the following requirements:
- The path must be enclosed in quotes
- The path must have at least one * wildcard character
- When using the pipeline with paired end data, the path must use {1,2} notation to specify read pairs.
If left unspecified, a default pattern is used: data/*{1,2}.fastq.gz
--single_end
By default, the pipeline expects paired-end data. If you have single-end data, you need to specify --single_end
on the command line when you launch the pipeline. A normal glob pattern, enclosed in quotation marks, can then be used for --reads
. For example:
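An illustrative sketch (the glob is a placeholder):

```bash
--single_end --reads '*.fastq'
```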
Tool flags
--arriba
If enabled, executes the Arriba tool.
--arriba_opt
- Specify additional parameters. For more info, please refer to the documentation of the tool.
--ericscript
If enabled, executes the EricScript tool.
--ericscript_opt
- Specify additional parameters. For more info, please refer to the documentation of the tool.
--fusioncatcher
If enabled, executes the Fusioncatcher tool.
--fusioncatcher_opt
- Specify additional parameters. For more info, please refer to the documentation of the tool.
--fusion_report
If enabled, downloads databases for fusion-report.
--fusion_report_opt
- Specify additional parameters. For more info, please refer to the documentation of the tool.
--pizzly
If enabled, executes the Pizzly tool.
--pizzly_k
- Number of k-mers. Default: 31.
--squid
If enabled, executes the Squid tool.
--star_fusion
If enabled, executes the STAR-Fusion tool.
--star_fusion_opt
- Specify additional parameters. For more info, please refer to the documentation of the tool.
Visualization flags
--arriba_vis
If enabled, executes the built-in Arriba visualization tool.
--fusion_inspector
If enabled, executes the Fusion-Inspector tool.
Reference genomes
--arriba_ref
Required reference in order to run Arriba.
--databases
Required databases in order to run fusion-report
.
--ericscript_ref
Required reference in order to run EricScript
.
--fasta
If you prefer, you can specify the full path to your reference genome when you run the pipeline:
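For instance (the path is a placeholder):

```bash
--fasta '/path/to/reference/genome.fa'
```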
--fusioncatcher_ref
Required reference in order to run Fusioncatcher
.
--genome
This pipeline uses only Homo sapiens version GRCh38. Also make sure to specify --genomes_base.
--gtf
Required annotation file.
--reference_release
Ensembl version.
--star_index
If you prefer, you can specify the full path to a STAR index when you run the pipeline. If not specified, the pipeline will build the index for reads with a length of 100 bp (can be adjusted with the --read_length parameter).
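For instance (the path is a placeholder):

```bash
--star_index '/path/to/star_index'
```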
--star_fusion_ref
Required reference in order to run STAR-Fusion
.
--transcript
Required transcript file.
Job resources
Automatic resubmission
Each step in the pipeline has a default set of requirements for number of CPUs, memory and time. For most of the steps in the pipeline, if the job exits with an error code of 143
(exceeded requested resources) it will automatically resubmit with higher requests (2 x original, then 3 x original). If it still fails after three times then the pipeline is stopped.
Custom resource requests
Wherever process-specific requirements are set in the pipeline, the default value can be changed by creating a custom config file. See the files hosted at nf-core/configs
for examples.
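As an illustrative sketch, a custom config can raise the resources for a single process by matching its name with withName (the process name and values below are placeholders, not taken from this pipeline's source), and is supplied to the pipeline with -c:

```nextflow
// custom.config - supply with: nextflow run nf-core/rnafusion -c custom.config ...
process {
  withName: 'star_fusion' {
    cpus = 16
    memory = 64.GB
    time = 24.h
  }
}
```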
If you are likely to be running nf-core pipelines regularly, it may be a good idea to request that your custom config file is uploaded to the nf-core/configs git repository. Before you do this, please test that the config file works with your pipeline of choice using the -c parameter (see definition below). You can then create a pull request to the nf-core/configs repository with the addition of your config file, associated documentation file (see examples in nf-core/configs/docs), and amend nfcore_custom.config to include your custom profile.
If you have any questions or issues please send us a message on Slack.
AWS Batch specific parameters
Running the pipeline on AWS Batch requires a couple of specific parameters to be set according to your AWS Batch configuration. Please use -profile awsbatch
and then specify all of the following parameters.
--awsqueue
The JobQueue that you intend to use on AWS Batch.
--awsregion
The AWS region in which to run your job. Default is set to eu-west-1
but can be adjusted to your needs.
--awscli
The AWS CLI path in your custom AMI. Default: /home/ec2-user/miniconda/bin/aws
.
Please make sure to also set the -w/--work-dir and --outdir parameters to an S3 storage bucket of your choice - you'll get an error message notifying you if you didn't.
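Putting these together, a launch command might look like the following sketch (queue name, region and bucket paths are placeholders):

```bash
nextflow run nf-core/rnafusion -profile awsbatch \
    --awsqueue my-job-queue --awsregion eu-west-1 \
    --reads 's3://my-bucket/reads/*_R{1,2}.fastq.gz' \
    --outdir 's3://my-bucket/results' \
    -w 's3://my-bucket/work'
```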
Other command line parameters
--debug
To run only a specific tool (e.g. when testing a freshly implemented tool), just add the --debug parameter. This parameter works on fusion tools only!
--read_length
The read length used to build the STAR index. Default is 100 bp (Illumina).
--outdir
The output directory where the results will be saved.
--email
Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (~/.nextflow/config
) then you don’t need to specify this on the command line for every run.
--email_on_fail
This works exactly as with --email
, except emails are only sent if the workflow is not successful.
--max_multiqc_email_size
Threshold size for MultiQC report to be attached in notification email. If file generated by pipeline exceeds the threshold, it will not be attached (Default: 25MB).
-name
Name for the pipeline run. If not specified, Nextflow will automatically generate a random mnemonic. This is used in the MultiQC report (if not default) and in the summary HTML / e-mail (always).
NB: Single hyphen (core Nextflow option)
-resume
Specify this when restarting a pipeline. Nextflow will use cached results from any pipeline steps where the inputs are the same, continuing from where it got to previously.
You can also supply a run name to resume a specific run: -resume [run-name]
. Use the nextflow log
command to show previous run names.
NB: Single hyphen (core Nextflow option)
-c
Specify the path to a specific config file (this is a core Nextflow option).
NB: Single hyphen (core Nextflow option)
Note - you can use this to override pipeline defaults.
--custom_config_version
Provide git commit id for custom Institutional configs hosted at nf-core/configs
. This was implemented for reproducibility purposes. Default: master
.
--custom_config_base
If you're running offline, Nextflow will not be able to fetch the institutional config files from the internet. If you don't need them, then this is not a problem. If you do need them, you should download the files from the repo and tell Nextflow where to find them with the custom_config_base option. For example:
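A sketch of how this can look (paths are placeholders):

```bash
## Download and unzip the config files
cd /path/to/my/configs
wget https://github.com/nf-core/configs/archive/master.zip
unzip master.zip

## Run the pipeline
cd /path/to/my/data
nextflow run /path/to/nf-core-rnafusion/ --custom_config_base /path/to/my/configs/configs-master/
```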
Note that the nf-core/tools helper package has a
download
command to download all required pipeline files + singularity containers + institutional configs in one go for you, to make this process easier.
--max_memory
Use to set a top-limit for the default memory requirement for each process.
Should be a string in the format integer-unit. eg. --max_memory '8.GB'
--max_time
Use to set a top-limit for the default time requirement for each process.
Should be a string in the format integer-unit. eg. --max_time '2.h'
--max_cpus
Use to set a top-limit for the default CPU requirement for each process.
Should be an integer, e.g. --max_cpus 1
--plaintext_email
Set to receive plain-text e-mails instead of HTML formatted.
--monochrome_logs
Set to disable colourful command line output and live life in monochrome.
--multiqc_config
Specify a path to a custom MultiQC configuration file.