Too many (?) variants detected by joint genotyping of 8232 exomes

Hello,

I am about to finish analyzing 8232 exome samples. I used GATK 3.8 and 3.6 throughout my workflow and followed the Best Practices guidelines. After calling variants by running HaplotypeCaller in GVCF mode with the standard workflow, I merged the GVCF files hierarchically until all 8232 samples were combined. After the initial rounds of GVCF merging, I ran the program chromosome by chromosome, and then divided the genome further into 30-50 Mb pieces to reduce computation time. Finally, I started running GenotypeGVCFs and VQSR on each of the 70 genomic parts. 60 parts have been completed, and a total of 6.4 M variants have been detected; I estimate around 7.5 M variants once the workflow is complete. I suspect this might be more than expected for this many exomes, but I am not sure, so I would like your comments. If it is indeed too many, what might be the cause of so many false calls? I can post all the commands I used in this analysis; a sketch of the main steps is below.
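
A minimal sketch of the workflow just described, in GATK 3.x syntax (file names, the batch layout and the example interval are placeholders, not my actual commands):

    # 1) Per-sample calling in GVCF mode (one command per sample)
    java -jar GenomeAnalysisTK.jar -T HaplotypeCaller \
        -R ref.fasta -I sample.bam \
        --emitRefConfidence GVCF \
        -o sample.g.vcf.gz

    # 2) Hierarchical merging of per-sample GVCFs into larger batches
    java -jar GenomeAnalysisTK.jar -T CombineGVCFs \
        -R ref.fasta \
        --variant batch1.g.vcf.gz --variant batch2.g.vcf.gz \
        -o merged.g.vcf.gz

    # 3) Joint genotyping per 30-50 Mb genomic part (interval given with -L)
    java -jar GenomeAnalysisTK.jar -T GenotypeGVCFs \
        -R ref.fasta -V merged.g.vcf.gz \
        -L chr1:1-50000000 \
        -o genotyped.part1.vcf.gz
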
At the end, I plan to select the variants passing filters (I used tranche sensitivity cutoffs of 99.5 for SNPs and 99.0 for indels; a sketch of this step is also below). But I do not think this will reduce the number of variants significantly. I calculated the number of variants with call rate >80 in a few genomic parts, and found that only around 50% of all variants reach this call rate. My second question is this: is it normal for my VCF files to contain so many variants with a low call rate? If I select the variants with the PASS flag and call rate >80, can I trust the remaining set, or does getting so many variants with a low call rate indicate that the output is unreliable?
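
The filtering step, again as a GATK 3.x sketch (it assumes VariantRecalibrator has already produced the recal and tranches files; names are placeholders):

    # Apply the SNP tranche cutoff; an analogous command with -mode INDEL
    # and --ts_filter_level 99.0 handles the indels
    java -jar GenomeAnalysisTK.jar -T ApplyRecalibration \
        -R ref.fasta -input genotyped.part1.vcf.gz \
        -mode SNP --ts_filter_level 99.5 \
        -recalFile snps.recal -tranchesFile snps.tranches \
        -o recalibrated.vcf.gz

    # Keep only the variants that PASS
    java -jar GenomeAnalysisTK.jar -T SelectVariants \
        -R ref.fasta -V recalibrated.vcf.gz \
        --excludeFiltered \
        -o pass_only.vcf.gz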

Cigdem

Answers

  • Sheila (Broad Institute; Member, Broadie admin)

    @cigdem
    Hi Cigdem,

    I calculated the number of variants with call rate >80 in a few genomic parts, and I found that only around 50% of all variants reach this call rate.

    I am not sure I understand what you mean by call rate >80?

    You may find this article helpful.

    Also, some tips from the developer for troubleshooting:

    1) collect QC data (coverage, contamination, chimerism, sequencing artifact metrics); see the sketch after this list
    2) look at the data (metrics, sequence data and variants)
    3) look at the code/pipeline
    4) look at the data again
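
    For item 1, one possible set of commands, using Picard and verifyBamID (the tool choices and file names are illustrative assumptions, not prescribed by this thread):

        # Exome (hybrid-selection) coverage metrics
        java -jar picard.jar CollectHsMetrics \
            I=sample.bam O=hs_metrics.txt R=ref.fasta \
            BAIT_INTERVALS=baits.interval_list \
            TARGET_INTERVALS=targets.interval_list

        # Chimerism (reported as PCT_CHIMERAS)
        java -jar picard.jar CollectAlignmentSummaryMetrics \
            I=sample.bam O=aln_summary.txt R=ref.fasta

        # Sequencing artifact metrics (e.g. oxidative damage)
        java -jar picard.jar CollectSequencingArtifactMetrics \
            I=sample.bam O=artifact_metrics R=ref.fasta

        # Cross-sample contamination estimate
        verifyBamID --vcf population_snps.vcf --bam sample.bam --out sample_contam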

    I hope this helps.

    -Sheila

  • Hello @SkyWarrior,
    No, I did not apply such a restriction. But I guess I can exclude off-target sites from the VCF file using vcftools, along the lines of the command below. I'll try this.
    Thank you for the suggestion.
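
    A minimal sketch of that exclusion, assuming a BED file of the capture targets (file names are placeholders):

        # Keep only variants that fall inside the capture targets
        vcftools --gzvcf joint_calls.vcf.gz --bed targets.bed \
            --recode --recode-INFO-all --out on_target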

  • @Sheila
    Hi Sheila,

    Thank you for your comments. We can compare the resulting variants to the genotyping chip data as suggested in the article.

    As for my question about call rate: I evaluated a 50 Mb region on chr1, in which a total of 78,613 variants were detected by joint genotyping of the 8232 individuals. When I ran this region through vcftools with the --max-missing 0.8 option (the exact command is at the end of this post), I ended up with 38,757 variants. So about half of the variants in that VCF file have missing genotypes in at least 20% of the cohort. Does GenotypeGVCFs take the call rate among individuals into account when run on GVCFs, and skip sites with a low call rate? And could the reason for getting so many low-call-rate variants be off-target calls, as @SkyWarrior suggested?

    Thanks,

    Cigdem
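
    For reference, the call-rate filter mentioned above, as a vcftools command (file names are placeholders; --max-missing 0.8 keeps sites genotyped in at least 80% of samples):

        vcftools --gzvcf chr1_50Mb_part.vcf.gz --max-missing 0.8 \
            --recode --recode-INFO-all --out chr1_50Mb_part.cr80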

  • SkyWarrior (Turkey; Member ✭✭✭)

    That is probably the only reason for so many discordant sites.

  • Sheila (Broad Institute; Member, Broadie admin)

    @cigdem
    Hi Cigdem,

    @SkyWarrior may be on to something with the off target sites. Let us know if restricting to the targeted intervals helps.

    -Sheila

  • cigdem (Member)

    Hello,
    @Sheila, @SkyWarrior

    Sorry for my very late reply. Since the targets file I have was probably not specific to the exome capture kit (it only lists positions for coding regions), I first tried restricting variants based on their call rate across the cohort. Selecting only variants with call rate > 80% left 2.1 M variants. More recently, I performed target-site restriction instead of call-rate filtering: I added 100 bp of flanking sequence to the positions in my BED file and selected the variants within those regions (roughly as sketched below), which left 1.6 M variants. I am going to use this data set in my subsequent analyses.

    Thank you for your help.

    Cigdem
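
    One way to do the padding and restriction described above, using bedtools and vcftools (file names and the chromosome-sizes file are placeholders; my actual commands may have differed):

        # Pad each target interval by 100 bp on both sides
        # (genome.txt lists chromosome sizes, one "<chrom><TAB><length>" per line)
        bedtools slop -i targets.bed -g genome.txt -b 100 > targets_pad100.bed

        # Keep only variants inside the padded targets
        vcftools --gzvcf joint_calls.vcf.gz --bed targets_pad100.bed \
            --recode --recode-INFO-all --out on_target_pad100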
