
Reasons MarkDuplicates might not remove all duplicates?

I was using MarkDuplicates to remove duplicates from a BAM file with extremely high coverage of the gene GAPDH. Using Ensembl BioMart, I calculated the maximum possible transcribed length to be 2,877 bases (by summing the longest version of each exon across all isoforms). From this I would expect at most that many unique reads: if reads slide along the gene one base at a time, the number of distinct start positions can't exceed the number of bases. Yet after MarkDuplicates the file still contains nearly 8,000 reads. Everything online says MarkDuplicates identifies duplicates by 5' position and strand, not by sequence, so slight sequence variation shouldn't matter; if reads map to the same position, they should be marked as duplicates, right? I'm not sure whether I'm using it wrong or whether there's a concept I'm not understanding, so any help would be greatly appreciated.

This is my code:

    # Note: MarkDuplicates also requires an O= (output BAM) argument;
    # "${MARKED_BAM_FILE}" below is a placeholder name.
    java -jar picard.jar MarkDuplicates \
       I="${SORTED_BAM_FILE}" \
       O="${MARKED_BAM_FILE}" \
       M="${HASH_DIRECTORY}/marked_dup_metrics.txt" \
       PROGRAM_RECORD_ID=null \
       ASSUME_SORT_ORDER=coordinate   # File is sorted by coordinate earlier in the pipeline
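For single-end reads, the duplicate definition described above (5' position and strand, never sequence content) can be illustrated with a minimal sketch. This is not Picard's actual implementation; the read tuples and function name are made up for illustration:

```python
# Minimal sketch of single-end duplicate keying: reads are grouped by
# (unclipped 5' position, strand). Sequence content is never consulted,
# so reads with small sequence differences at the same position still
# collapse into one group.
from collections import defaultdict

def mark_duplicates(reads):
    """reads: list of (name, five_prime_pos, strand, sequence).
    Returns the set of read names kept (one representative per key)."""
    groups = defaultdict(list)
    for name, pos, strand, seq in reads:
        groups[(pos, strand)].append(name)  # seq is deliberately ignored
    return {names[0] for names in groups.values()}

reads = [
    ("r1", 100, "+", "ACGT"),
    ("r2", 100, "+", "ACGA"),  # different sequence, same key -> duplicate of r1
    ("r3", 101, "+", "ACGT"),  # start shifted by one base -> distinct
]
print(sorted(mark_duplicates(reads)))  # → ['r1', 'r3']
```

Note that r3 survives even though its sequence is identical to r1's: a one-base shift in the 5' coordinate is enough to make a read non-duplicate under this definition.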

Best Answer


  • shlee · Cambridge · Member, Broadie ✭✭✭✭✭

    Do you have paired-end reads? If so, MarkDuplicates defines duplicates at the _insert_ level, i.e. using the 5' coordinates and orientations of both mates rather than of each read alone.
  • Hi, thanks for your answer :) Could you expand a bit? I'm not entirely sure I understand what you mean by that. Is it that, because the whole insert is considered, two reads are less likely to count as the same?
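The insert-level definition in the answer above can be sketched the same way: for paired-end data the duplicate key covers both ends of the insert, so two pairs that share one mate's position but differ in the other are not duplicates. Again, this is a hedged illustration with made-up tuples, not Picard's actual code:

```python
# Sketch of paired-end duplicate keying: the key is the 5' position and
# strand of BOTH mates. Sharing one end is not enough to be a duplicate.
from collections import defaultdict

def mark_duplicate_pairs(pairs):
    """pairs: list of (name, r1_pos, r1_strand, r2_pos, r2_strand).
    Returns the set of pair names kept (one representative per key)."""
    groups = defaultdict(list)
    for name, p1, s1, p2, s2 in pairs:
        groups[(p1, s1, p2, s2)].append(name)
    return {names[0] for names in groups.values()}

pairs = [
    ("p1", 100, "+", 350, "-"),
    ("p2", 100, "+", 350, "-"),  # both ends match p1 -> duplicate
    ("p3", 100, "+", 420, "-"),  # same read-1 end, different mate -> kept
]
print(sorted(mark_duplicate_pairs(pairs)))  # → ['p1', 'p3']
```

This is why high-coverage paired-end data can legitimately retain far more reads than the number of distinct start positions on one strand: every distinct mate position creates a new, non-duplicate insert.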
