CombineGVCFs takes extremely long
I am combining 200 individual gVCF files (on average 2 GB each) into a single combined gVCF. Although I am working on a cluster with 1 TB of RAM, it is taking extremely long: the GATK log estimates up to 3 weeks. To speed it up, I divided the job into 24 separate jobs, one per chromosome, running in parallel. Example for chromosome 1:
java -Xmx"$MEM"g -jar "$GATK" \
-T CombineGVCFs \
-R "$REFERENCE" \
-L "$BAITFILE.chr1.hg19.bed" \
--variant "$VARIANTS" \
--validation_strictness SILENT \
--logging_level INFO
MEM is set to 80 (12 tasks at once for chr1-chr12 -> 1000 GB / 12 = ~80 GB per task). Even so, some chromosomes are estimated to take 70 hours. I can't imagine this is the way it is supposed to be.
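For reference, the per-chromosome split described above can be scripted as a loop that generates one CombineGVCFs command per chromosome and hands them to `xargs -P` for throttled parallel execution. This is only a sketch of my setup: the placeholder values, the `combined.$CHR.g.vcf` output names, and the `combine_jobs.txt` file name are illustrative, not part of my actual pipeline.

```shell
#!/usr/bin/env bash
# Placeholder values; in practice these are set as in the command above.
GATK="GenomeAnalysisTK.jar"
REFERENCE="hg19.fa"
BAITFILE="baits"
VARIANTS="gvcfs.list"
MEM=80   # ~1000 GB of RAM / 12 concurrent jobs

# Generate one CombineGVCFs command per chromosome (24 in total).
for CHR in chr{1..22} chrX chrY; do
    echo "java -Xmx${MEM}g -jar $GATK -T CombineGVCFs -R $REFERENCE" \
         "-L $BAITFILE.$CHR.hg19.bed --variant $VARIANTS -o combined.$CHR.g.vcf"
done > combine_jobs.txt

# Run at most 12 jobs concurrently (uncomment to actually launch):
# xargs -P 12 -I{} bash -c '{}' < combine_jobs.txt
```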
Do you maybe have other suggestions to make the CombineGVCFs step faster?
Thank you in advance,