How does the --aggregate argument in VariantRecalibrator compare to genotyping more samples together?
I've searched the guidelines and forum pages for an answer to this question, but I haven't been able to find one. Apologies if I'm missing something that should be obvious from the documentation.
So, I'm familiar with the current Best Practices for DNA-seq variant discovery: HaplotypeCaller (HC) in GVCF mode, joint genotyping, and VQSR, and the requirement to have ample data for building the VQSR model. To get enough data, one can add in extra variants, which you recommend doing at the calling stage.
I have a "ploidy 20" dataset of several hundred samples where, for practical computational reasons, calling needs to be done in batches to avoid running out of memory. I'd nevertheless like to use all the variants for optimal VQSR. It looks like this might be possible with the --aggregate argument in VariantRecalibrator, by adding in the raw VCFs from all batches at that stage. Would this really differ significantly from a workflow where all samples were called together? And why is the --aggregate option never mentioned in your advice on how to assemble a VQSR-worthy dataset?
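To make the question concrete, here is a sketch of the kind of invocation I mean. All file paths and resource arguments are placeholders for my own data, and the resource lines are the usual SNP training sets from the Best Practices; the point is only where the batch VCFs would go:

```shell
# Sketch only: paths and resource files are placeholders, not real data.
# batch1.vcf.gz is the callset being recalibrated; the other batches are
# supplied via --aggregate so their variants inform the model but are not
# themselves recalibrated in this run.
gatk VariantRecalibrator \
    -R reference.fasta \
    -V batch1.vcf.gz \
    --aggregate batch2.vcf.gz \
    --aggregate batch3.vcf.gz \
    --resource:hapmap,known=false,training=true,truth=true,prior=15.0 hapmap.vcf.gz \
    --resource:omni,known=false,training=true,truth=false,prior=12.0 omni.vcf.gz \
    --resource:1000G,known=false,training=true,truth=false,prior=10.0 1000G.vcf.gz \
    --resource:dbsnp,known=true,training=false,truth=false,prior=2.0 dbsnp.vcf.gz \
    -an QD -an FS -an MQ -an MQRankSum -an ReadPosRankSum \
    -mode SNP \
    -O batch1.recal \
    --tranches-file batch1.tranches
```

The model would then be built from the union of variants across batches, while each batch's VCF still has to go through ApplyVQSR separately, which is exactly why I'm unsure whether this is equivalent to calling everything together.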
Thanks for a great resource and website!