Is it necessary to perform additional quality filtering to remove low quality reads and barcode contamination?
Hi all,

I went through the whole variant calling pipeline on my whole exome sequencing data, and I have three questions.

Q1. Is it necessary to perform additional quality filtering to remove low quality reads and barcode contamination before mapping? Since deduplication and BQSR happen in downstream steps, can I assume that the effects of low quality bases and barcode contamination will be eliminated there?

Q2. Is it better to do joint calling than to call variants on each sample individually? We aim to find pathogenic mutations by comparing SNPs between affected and unaffected members of one family. For each family, we have data sets from 3-4 individuals, and I marked each individual with a different @RG tag. In my first trial, I just used the basic command, calling SNPs one sample at a time. I learned that VCF mode accepts multiple BAM files, so I can type -I No1.bam -I No2.bam -I No3.bam -I ..., but GVCF mode only accepts one BAM file at a time, so it seems I would have to merge multiple BAMs using PrintReads before running HaplotypeCaller. My confusion is that BaseRecalibrator only accepts one BAM file and outputs one BQSR table at a time. Should I 'cat' all the tables together and pass the result as -BQSR to PrintReads? Which is better: staying in VCF mode and inputting multiple BAM files at a time, or merging the BAM files in advance and doing GVCF calling?

Q3. Should I use hard filters instead of VQSR? Although we are working on whole exome data, we analyze fewer than 30 samples at a time, and I saw in one of your answers that the sample number should reach at least 30 to fit the Gaussian model. Although no error was reported when I ran VQSR in my first trial, the Ti/Tv values in my tranches file came out bad and the model plots looked different from your example in the Best Practices. So maybe I should just use hard filters instead?
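For concreteness on Q2, here is a sketch of the two alternatives I am comparing (GATK 3 `-T` syntax; all file names, the reference, and the known-sites VCF are placeholders from my setup, not literal paths):

```shell
# Option A: VCF mode, passing all family BAMs to one HaplotypeCaller run
java -jar GenomeAnalysisTK.jar -T HaplotypeCaller \
    -R ref.fasta \
    -I No1.bam -I No2.bam -I No3.bam \
    -o family.raw.vcf

# Option B: per-sample BQSR, which is where I get stuck --
# BaseRecalibrator takes one BAM and emits one recalibration table:
java -jar GenomeAnalysisTK.jar -T BaseRecalibrator \
    -R ref.fasta -I No1.bam \
    -knownSites dbsnp.vcf \
    -o No1.recal.table
# (repeated for No2.bam, No3.bam, ...)
# It is unclear to me whether these per-sample tables can then be
# concatenated and passed as a single -BQSR argument to PrintReads
# on a merged BAM before GVCF-mode calling.
```

The question is which of these two layouts is the intended one for a 3-4 sample family, and whether the per-sample BQSR tables should ever be combined at all.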
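And for Q3, if I do switch to hard filters, I assume the SNP-filtering commands would look roughly like this (a sketch based on the threshold values shown in the Best Practices examples; file names are placeholders, and I would appreciate a check on the expression):

```shell
# Pull out the SNPs first, then apply the suggested hard-filter expression
java -jar GenomeAnalysisTK.jar -T SelectVariants \
    -R ref.fasta -V family.raw.vcf \
    -selectType SNP \
    -o family.snps.vcf

java -jar GenomeAnalysisTK.jar -T VariantFiltration \
    -R ref.fasta -V family.snps.vcf \
    --filterExpression "QD < 2.0 || FS > 60.0 || MQ < 40.0 || MQRankSum < -12.5 || ReadPosRankSum < -8.0" \
    --filterName "snp_hard_filter" \
    -o family.snps.filtered.vcf
```

Is that the right shape for a small exome cohort, or should I still attempt VQSR with modified parameters?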