Is it necessary to perform additional quality filtering to remove low-quality reads and barcode contamination?
Hi all,
I have run the whole variant calling pipeline on my whole exome sequencing data, and I now have three questions.

Q1. Is it necessary to perform additional quality filtering to remove low-quality reads and barcode contamination before mapping? Since deduplication and BQSR happen downstream, can I assume that the effects of low-quality bases and barcode contamination will be eliminated in those steps?

Q2. Is it better to do joint calling than to call variants on each sample individually? We aim to find pathogenic mutations by comparing SNPs between affected and unaffected members of one family. For each family we have data from 3-4 individuals, and I marked each individual with a different @RG tag. In my first trial I used the basic command, calling SNPs one sample at a time. I learned that VCF mode accepts multiple BAM files (-I No1.bam -I No2.bam -I No3.bam ...), but gVCF mode only accepts one BAM file at a time, so it seems I would have to merge the BAMs with PrintReads before running HaplotypeCaller. My confusion is that BaseRecalibrator only accepts one BAM file and outputs one BQSR table at a time. Should I 'cat' all the tables together and pass the result via -BQSR to PrintReads? Which is better: using VCF mode with multiple input BAMs, or merging the BAMs in advance and doing gVCF calling?

Q3. Should I use hard filters instead of VQSR? Although we are working on whole exome data, we analyze fewer than 30 samples at a time, and I saw in one of your answers that at least 30 samples are needed to fit the Gaussian mixture model. No error was reported when I ran VQSR in my first trial, but the Ti/Tv values in my tranches file were poor and the model plots looked different from the examples in the Best Practices. So should I just use hard filters instead?
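To make Q2 concrete, here is a sketch of the per-sample workflow I have in mind (GATK3-style syntax, since I am using PrintReads with -BQSR; the file names are placeholders, and the commands are echoed rather than executed so the structure is visible without the data):

```shell
#!/bin/sh
# Sketch of a per-sample BQSR + gVCF workflow, then joint genotyping.
# RUN=echo prints each command instead of running it; drop it (or set
# RUN="") once GATK, the reference, and the BAMs are actually in place.
GATK="java -jar GenomeAnalysisTK.jar"
REF=ref.fasta
RUN=echo

for S in No1 No2 No3; do
  # One recalibration table per sample -- no need to 'cat' tables together
  $RUN $GATK -T BaseRecalibrator -R $REF -I $S.dedup.bam \
       -knownSites dbsnp.vcf -o $S.recal.table
  $RUN $GATK -T PrintReads -R $REF -I $S.dedup.bam \
       -BQSR $S.recal.table -o $S.recal.bam
  # HaplotypeCaller in GVCF mode, one sample (one BAM) at a time
  $RUN $GATK -T HaplotypeCaller -R $REF -I $S.recal.bam \
       -ERC GVCF -o $S.g.vcf
done

# Joint genotyping across the family: GenotypeGVCFs takes multiple -V inputs
$RUN $GATK -T GenotypeGVCFs -R $REF \
     -V No1.g.vcf -V No2.g.vcf -V No3.g.vcf -o family.vcf
```

My question is essentially whether this per-sample-gVCF-then-joint-genotyping layout is the right shape, or whether I should instead merge the BAMs up front.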