
Reasons MarkDuplicates might not remove all duplicates?

I was using MarkDuplicates to remove duplicates from a BAM file with extremely high coverage of the gene GAPDH. Using Ensembl BioMart, I calculated the maximum possible number of bases to be 2,877 (by summing the longest possible length of each exon across all isoforms). From this I would expect to see at most that many reads: if reads slide along the gene in a one-base window, the number of unique reads should be no more than the number of bases. But when I use MarkDuplicates it doesn't come close; nearly 8,000 reads remain.

Everything I've read says MarkDuplicates identifies duplicates based on 5' position and strand, not sequence, so it shouldn't be affected by slight variation between reads; if they're mapped to the same position, it should be marking them as duplicates, right? I'm not sure whether I'm using it wrong or there's a concept I'm not understanding, so any help would be greatly appreciated.
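The counting argument above can be sketched in a few lines. This is a toy model with made-up read positions, not real data: it just shows that if duplicates are keyed on (5' position, strand) alone, the number of surviving single-end reads is capped by the region length per strand (so about twice the region length for both strands together).

```python
import random

# Toy model of the reasoning in the question (illustrative numbers only):
# single-end reads over a 2,877 bp region can start at no more than
# 2,877 distinct 5' positions per strand.
region_length = 2877

# Simulate ~8,000 reads with random 5' start positions and strands.
reads = [(random.randrange(region_length), random.choice("+-"))
         for _ in range(8000)]

# Under a (5' position, strand) duplicate definition, each unique key
# survives duplicate marking exactly once.
unique_keys = {(pos, strand) for pos, strand in reads}

# One strand caps at region_length; both strands cap at 2x that.
print(len(unique_keys) <= 2 * region_length)  # True
```

This cap only holds for single-end reads; as the answer below notes, paired-end data is keyed differently.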

This is my code:

    java -jar picard.jar MarkDuplicates \
       I="${SORTED_BAM_FILE}" \
       O="${HASH_DIRECTORY}/marked_dup.bam" \
       M="${HASH_DIRECTORY}/marked_dup_metrics.txt" \
       PROGRAM_RECORD_ID=null \
       ASSUME_SORT_ORDER=coordinate   # File is sorted by coordinate earlier in the pipeline

Best Answer


  • shlee (Cambridge; Member, Broadie)

    Do you have paired end reads? If so, MarkDuplicates defines duplicates at the _insert_ level. 
  • Hi, thanks for your answer :) Could you expand a bit? I'm not entirely sure I understand what you mean by that. Is it that, because it's considering the insert, the pairs are less likely to be the same?
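To illustrate the insert-level idea in the accepted answer: a rough sketch of the two duplicate definitions, with hypothetical key functions and made-up coordinates (these are not Picard's actual internals). For paired-end data, both mates' 5' positions and strands go into the key, so two pairs whose first mates start at the same base are still distinct if their mates end differently.

```python
# Hypothetical sketch of single-end vs paired-end duplicate keys.
# Function names, dict fields, and coordinates are illustrative only.

def single_end_key(read):
    # Single-end: duplicates share reference, unclipped 5' position, strand.
    return (read["ref"], read["pos5"], read["strand"])

def paired_end_key(read1, read2):
    # Paired-end: the insert (fragment) defines the duplicate, so both
    # mates' 5' positions and strands must match.
    return (single_end_key(read1), single_end_key(read2))

# Two read pairs with identical first mates but different second mates.
a1 = {"ref": "chr12", "pos5": 6534517, "strand": "+"}
a2 = {"ref": "chr12", "pos5": 6534800, "strand": "-"}
b1 = {"ref": "chr12", "pos5": 6534517, "strand": "+"}
b2 = {"ref": "chr12", "pos5": 6534950, "strand": "-"}

print(single_end_key(a1) == single_end_key(b1))           # True: same 5' end
print(paired_end_key(a1, a2) == paired_end_key(b1, b2))   # False: inserts differ
```

Under this definition, deep RNA-seq coverage of a single gene can legitimately retain far more pairs than the gene has bases, since each distinct combination of start position and insert length is its own non-duplicate fragment.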
