Processing a large number of gVCF files on a local cluster

Hi,

I am trying to develop a strategy for working with a large number of WGS gVCFs (~3,000) on an HPC cluster (using Slurm).

In my pipeline, I download batches of ~200 gVCF files (generated with HaplotypeCaller) via GridFTP. Using a modified version of the joint-discovery pipeline, I import the files with GenotypeGVCFs and then generate a single gVCF for the batch using SelectVariants (discarding the individual files). The plan is to use all of the gVCFs generated this way as input for the standard joint-discovery pipeline.
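Each batch runs as its own Slurm array task, roughly like the sketch below (the batch count, paths, and GridFTP client invocation are illustrative assumptions, not my exact script):

    #!/bin/bash
    #SBATCH --job-name=gvcf-batch
    #SBATCH --array=0-14        # ~3000 gVCFs / ~200 per batch = 15 batches
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=32G

    # One gVCF URL per line for this batch
    BATCH_LIST="batch_${SLURM_ARRAY_TASK_ID}.txt"
    mkdir -p gvcfs

    # 1. Download the batch via GridFTP (globus-url-copy is one such client)
    while read -r url; do
        globus-url-copy "$url" "file://$PWD/gvcfs/"
    done < "$BATCH_LIST"

    # 2. Consolidate the batch into a single gVCF (see the commands further
    #    down the thread), then delete the individual files to free storage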

My main goal is to reduce local storage use during the process (currently my limiting factor) by generating, for each batch, a gVCF file that is smaller than the sum of the individual files.

Do you have any recommendations for reducing the size of the intermediate files produced during the GenotypeGVCFs and SelectVariants steps (for example, reducing the number of GQ bands)?
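As far as I understand, the GQ bands are fixed when HaplotypeCaller writes the gVCF, so changing them would mean re-running it on the original BAMs, something like this (a sketch assuming GATK4's -GQB/--gvcf-gq-bands option; the band values here are arbitrary):

    gatk HaplotypeCaller \
        -R ref.fasta \
        -I sample.bam \
        -O sample.g.vcf.gz \
        -ERC GVCF \
        -GQB 20 -GQB 60 -GQB 99   # fewer, coarser bands -> larger merged reference blocks, smaller file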

Thanks!

Answers

  • LeandroGab Member

    Sorry, what I meant to say was "GenomicsDBImport" instead of "GenotypeGVCFs"
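    For reference, the batch step looks roughly like this (the workspace path, sample map, and interval are placeholders, and I'm assuming SelectVariants can read directly from the gendb:// workspace):

        # Import this batch's ~200 gVCFs into a GenomicsDB workspace (per interval)
        gatk GenomicsDBImport \
            --genomicsdb-workspace-path batch_00_db \
            --sample-name-map batch_00.sample_map \
            -L chr20

        # Extract a single combined call set for the batch, then delete the inputs
        gatk SelectVariants \
            -R ref.fasta \
            -V gendb://batch_00_db \
            -O batch_00.combined.g.vcf.gz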
