(How to) Mark duplicates with MarkDuplicates or MarkDuplicatesWithMateCigar

shlee Cambridge Member, Broadie, Moderator Posts: 595 admin
edited May 2016 in Tutorials


[screenshot] This tutorial updates Tutorial#2799.

Here we discuss two tools, MarkDuplicates and MarkDuplicatesWithMateCigar, that flag duplicates. We provide example data and example commands for you to follow along with (section 1), including tips on estimating library complexity for PCR-free samples and patterned flow cell technologies. In section 2, we point out special memory considerations for these tools. In section 3, we highlight the similarities and differences between the two tools. Finally, we get into details that may be of interest to some readers, including comments on the metrics file (section 4).

To mark duplicates in RNA-Seq data, use MarkDuplicates. Reasons are explained in section 2 and section 3. And if you are considering using MarkDuplicatesWithMateCigar for your DNA data, be sure insert lengths are short and you have a low percentage of split or multi-mapping records.

Obviously, expect more duplicates for samples prepared with PCR than for PCR-free preparations. Duplicates arise from various sources, including within the sequencing run. As such, even PCR-free data can give rise to duplicates, albeit at low rates, as illustrated here with our example data.

Which tool should I use, MarkDuplicates or MarkDuplicatesWithMateCigar? new section 5/25/2016

The Best Practices currently recommend MarkDuplicates. However, as always, consider your research goals.

If your research uses paired end reads and pre-processing that generates missing mates, for example by application of an intervals list or by removal of reference contigs after the initial alignment, and you wish to flag duplicates for these remaining singletons, then MarkDuplicatesWithMateCigar will flag these for you at the insert level using the mate cigar (MC) tag. MarkDuplicates skips these singletons from consideration.

If the criteria by which the representative insert in a duplicate set is selected are important to your analyses, note that MarkDuplicatesWithMateCigar is limited to prioritizing by the total mapped length of a pair, while MarkDuplicates can use this OR the default, the sum of base qualities of a pair.

If you are still unsure which tool is appropriate, then consider maximizing comparability to previous analyses. The Broad Genomics Platform has used only MarkDuplicates in their production pipelines. MarkDuplicatesWithMateCigar is a newer tool that has yet to gain traction.

This tutorial compares the two tools to dispel the circulating notion that the outcomes from the two tools are equivalent and to provide details helpful to researchers in optimizing their analyses.

We welcome feedback. Share your suggestions in the Comment section at the bottom of this page.


Jump to a section

  1. Commands for MarkDuplicates and MarkDuplicatesWithMateCigar
  2. Slow or out of memory error? Special memory considerations for duplicate marking tools
  3. Conceptual overview of duplicate flagging
  4. Details of interest to some

Tools involved

  • MarkDuplicates
  • MarkDuplicatesWithMateCigar

Prerequisites

  • Installed Picard tools
  • Coordinate-sorted and indexed BAM alignment data. Secondary/supplementary alignments are flagged appropriately (256 and 2048 flags) and additionally with the mate unmapped (8) flag. See the MergeBamAlignment section (3C) of Tutorial#6483 for a description of how MergeBamAlignment ensures such flagging. **Revision as of 5/17/2016:** I wrote this tutorial at a time when the input could only be an indexed and coordinate-sorted BAM. Recently, the tools added a feature to accept queryname-sorted inputs that in turn activates additional features. The additional features that providing a queryname-sorted BAM activates will give DIFFERENT duplicate flagging results. So for the tutorial's observations to apply, use coordinate-sorted data.
  • For MarkDuplicatesWithMateCigar, pre-computed Mate CIGAR (MC) tags. Data produced according to Tutorial#6483 will have the MC tags added by MergeBamAlignment. Alternatively, see tools RevertOriginalBaseQualitiesAndAddMateCigar and FixMateInformation; sketches of MC tagging and read group assignment follow this list.
  • Appropriately assigned Read Group (RG) information. Read Group library (RGLB) information is factored for molecular duplicate detection. Optical duplicates are limited to those from the same RGID.
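
For example, if your data lack MC tags or read group information, minimal sketches using FixMateInformation (named above) and AddOrReplaceReadGroups; all file names and read group values here are placeholders:

    # Add MC tags to an existing BAM. ADD_MATE_CIGAR defaults to true
    # and is spelled out only for clarity.
    java -jar picard.jar FixMateInformation \
        INPUT=input.bam \
        OUTPUT=input_mc.bam \
        ADD_MATE_CIGAR=true

    # Assign read group fields, including the RGLB and RGID fields that
    # duplicate marking depends on. The values shown are made up.
    java -jar picard.jar AddOrReplaceReadGroups \
        INPUT=input.bam \
        OUTPUT=input_rg.bam \
        RGID=H0164.2 \
        RGLB=library1 \
        RGPL=illumina \
        RGPU=H0164ALXX140820.2 \
        RGSM=NA12878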

Download example data

  • Use the advanced tutorial bundle's human_g1k_v37_decoy.fasta as reference. This same reference is available to load in IGV.
  • tutorial_6747.tar.gz data contain human paired 2x150 whole genome sequence reads originally aligning at ~30x depth of coverage. The sample is a PCR-free preparation of the NA12878 individual run on the HiSeq X platform. This machine type, along with HiSeq 4000, has the newer patterned flow cell that differs from the typical non-patterned flow cell. I took the reads aligning to a one Mbp genomic interval (10:96,000,000-97,000,000) and sanitized and realigned the reads (BWA-MEM -M) to the entire genome according to the workflow presented in Tutorial#6483 to produce snippet.bam. The data has (i) no supplementary records; (ii) secondary records flagged with the 256 flag and the mate-unmapped (8) flag; and (iii) unmapped records (4 flag) with mapped mates (mates have 8 flag), zero MAPQ (column 5) and asterisks for CIGAR (column 6). The notation allows read pairs where one mate maps and the other does not to sort and remain together when we apply genomic intervals such as in the generation of the snippet.

Related resources


1. Commands for MarkDuplicates and MarkDuplicatesWithMateCigar

The following commands take a coordinate-sorted and indexed BAM and return (i) a BAM with the same records in coordinate order and with duplicates marked by the 1024 flag, (ii) a duplication metrics file, and (iii) an optional matching BAI index.

For a given file with all MC (mate CIGAR) tags accounted for:

  • and where all mates are accounted for, each tool--MarkDuplicates and MarkDuplicatesWithMateCigar--examines the same duplicate sets but prioritizes which inserts get marked duplicate differently. This situation is represented by our snippet example data.
  • but containing records with missing mates, MarkDuplicates ignores those records, while MarkDuplicatesWithMateCigar still considers them for duplicate marking, using the MC tag for mate information. Again, the duplicate scoring methods differ between the tools.

Use the following commands to flag duplicates for 6747_snippet.bam. These commands produce qualitatively different data.

Score duplicate sets based on the sum of base qualities using MarkDuplicates:

# OPTICAL_DUPLICATE_PIXEL_DISTANCE is changed from the default of 100.
# CREATE_INDEX is optional. Specify INPUT multiple times to merge inputs.
java -Xmx32G -jar picard.jar MarkDuplicates \
INPUT=6747_snippet.bam \
OUTPUT=6747_snippet_markduplicates.bam \
METRICS_FILE=6747_snippet_markduplicates_metrics.txt \
OPTICAL_DUPLICATE_PIXEL_DISTANCE=2500 \
CREATE_INDEX=true \
TMP_DIR=/tmp

Score duplicate sets based on total mapped reference length using MarkDuplicatesWithMateCigar:

# OPTICAL_DUPLICATE_PIXEL_DISTANCE is changed from the default of 100.
# CREATE_INDEX is optional. Specify INPUT multiple times to merge inputs.
java -Xmx32G -jar picard.jar MarkDuplicatesWithMateCigar \
INPUT=6747_snippet.bam \
OUTPUT=6747_snippet_markduplicateswithmatecigar.bam \
METRICS_FILE=6747_snippet_markduplicateswithmatecigar_metrics.txt \
OPTICAL_DUPLICATE_PIXEL_DISTANCE=2500 \
CREATE_INDEX=true \
TMP_DIR=/tmp

Comments on select parameters

  • **Revision as of 5/17/2016:** The example input 6747_snippet.bam is coordinate-sorted and indexed. The tools recently added a feature to accept queryname-sorted inputs, which in turn activates additional features by default and gives DIFFERENT duplicate flagging results than outlined in this tutorial. Namely, if you provide MarkDuplicates a queryname-sorted BAM and a primary alignment is marked as duplicate, the tool will also flag its (i) unmapped mate, (ii) secondary, and/or (iii) supplementary alignment record(s) as duplicate.
  • Each tool has a distinct default DUPLICATE_SCORING_STRATEGY. For MarkDuplicatesWithMateCigar it is TOTAL_MAPPED_REFERENCE_LENGTH and this is the only scoring strategy available. For MarkDuplicates you can switch the DUPLICATE_SCORING_STRATEGY between the default SUM_OF_BASE_QUALITIES and TOTAL_MAPPED_REFERENCE_LENGTH. Both scoring strategies use information pertaining to both mates in a pair, but in the case of MarkDuplicatesWithMateCigar the information for the mate comes from the read's MC tag and not from the actual mate.
  • To merge multiple files into a single output, e.g. when aggregating a sample from across lanes, specify the INPUT parameter for each file; see the sketch after this list. The tools merge the read records from the multiple files into the single output file. The tools mark duplicates for the entire library (RGLB) and account for optical duplicates per RGID. INPUT files must be coordinate-sorted and indexed.
  • The Broad's production workflow increases OPTICAL_DUPLICATE_PIXEL_DISTANCE to 2500 to better estimate library complexity; the default is 100. Changing this parameter does not alter duplicate marking. It only changes the count of optical duplicates and the library complexity estimate in the metrics file, in that whatever is counted as an optical duplicate does not factor towards library complexity. The increase reflects the fact that our example data were sequenced on the patterned flow cell of a HiSeq X machine. Both HiSeq X and HiSeq 4000 technologies decrease pixel unit area by 10-fold, so the equivalent pixel distance in non-patterned flow cells is 250. You may ask why we still count optical duplicates for patterned flow cells, which by design should have none. We are hijacking this feature of the tools to account for other types of duplicates arising from the sequencer. Sequencer duplicates are not limited to optical duplicates and should be differentiated from PCR duplicates for more accurate library complexity estimates.
  • By default the tools flag duplicates and retain them in the output file. To remove the duplicate records from the resulting file, set the REMOVE_DUPLICATES parameter to true. However, given that you can set GATK tools to include duplicates in analyses by adding -drf DuplicateRead to commands, the more storage-efficient option is to retain the resulting marked file in place of the input file.
  • To optionally create a .bai index, add and set the CREATE_INDEX parameter to true.
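
For instance, to aggregate a sample across two lanes in a single pass, a sketch (the per-lane file names here are hypothetical):

    java -Xmx32G -jar picard.jar MarkDuplicates \
        INPUT=6747_snippet_lane1.bam \
        INPUT=6747_snippet_lane2.bam \
        OUTPUT=6747_sample_markduplicates.bam \
        METRICS_FILE=6747_sample_markduplicates_metrics.txt \
        OPTICAL_DUPLICATE_PIXEL_DISTANCE=2500 \
        TMP_DIR=/tmp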

For snippet, the duplication metrics are identical whether marked by MarkDuplicates or MarkDuplicatesWithMateCigar. We have 13.4008% duplication, with 255 unpaired read duplicates and 18,254 paired read duplicates. However, as the screenshot at the top of this page illustrates, and as section 4 explains, the data qualitatively differ.

back to top


2. Slow or out of memory error? Special memory considerations for duplicate marking tools

The seemingly simple task of marking duplicates is one of the most memory-hungry processes, especially for paired end reads. Both tools are compute-intensive and require more memory than other processes.

MarkDuplicatesWithMateCigar is a single-pass tool: it streams the duplicate marking routine in a manner that allows for piping. The trade-off is that, for a given file, its memory requirements can be greater than those of MarkDuplicates. Because of these memory constraints, we recommend MarkDuplicates for alignments that have large reference skips, e.g. spliced RNA alignments.

For large files, (1) use the Java -Xmx setting and (2) set the TMP_DIR parameter to a temporary directory. These options allow the tool to run without slowing down and without causing an out-of-memory error. For the purposes of this tutorial, commands are given as if the example data were a large file, which we know it is not.

    java -Xmx32G -jar picard.jar MarkDuplicates \
    ... \
    TMP_DIR=/tmp 

These options can be omitted for small files such as the example data and the equivalent command is as follows.

    java -jar picard.jar MarkDuplicates ...   

Set the Java maximum heap size, specified by the -Xmx#G option, to the maximum your system allows.

The high memory cost, especially for MarkDuplicatesWithMateCigar, stems in part from the tool systematically traversing genomic coordinate intervals for the inserts in question. For every read it marks as a duplicate, it must keep track of the mate, which may or may not map nearby, so that reads are marked as pairs and each record is emitted in its coordinate turn. In the meantime, this information is held in memory, the first choice for faster processing, until the memory limit is reached, at which point memory spills to disk. We set this limit high to minimize instances of memory spilling to disk.

In the example command, the -Xmx32G Java option caps the maximum heap size, or memory usage, to 32 gigabytes, which is the limit on the server I use. This is in contrast to the 8G setting I use for other processes on the same sample data--a 75G BAM file. To find a system's default maximum heap size, type java -XX:+PrintFlagsFinal -version, and look for MaxHeapSize.
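
For convenience, a one-liner that isolates that value (this assumes a POSIX shell; the version banner goes to stderr, hence the redirect):

    java -XX:+PrintFlagsFinal -version 2>/dev/null | grep MaxHeapSize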

Set an additional temporary directory with the TMP_DIR parameter for memory spillage.

When the tool hits the memory limit, memory spills to disk. Disk I/O is much slower than memory access, so this slows the process down. The spill location is the directory you specify with the TMP_DIR parameter. If you work on a server separate from the storage where you read and write files, setting TMP_DIR to the server's local temporary directory (typically /tmp) can reduce processing time compared to setting it to the storage disk, because the tool then avoids traversing the network file system when spilling memory. Be sure the TMP_DIR location you specify provides enough storage space. Use df -h to see how much is available.
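
For example, if TMP_DIR points to /tmp:

    df -h /tmp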

back to top


3. Conceptual overview of duplicate flagging

The aim of duplicate marking is to flag all but one of a duplicate set as duplicates and to use duplicate metrics to estimate library complexity. Duplicates have a higher probability of being non-independent measurements from the exact same template DNA. Duplicate inserts are marked by the 0x400 bit (1024 flag) in the second column of a SAM record, for each mate of a pair. This allows downstream GATK tools to exclude duplicates from analyses (most do this by default). Certain duplicates, i.e. PCR and sequencer duplicates, violate assumptions of variant calling and also potentially amplify errors. Removing these, even at the cost of removing serendipitous biological duplicates, allows us to be conservative in calculating the confidence of variants.
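
To spot-check the flagging, you can count the records carrying the 1024 flag with samtools (assuming samtools is installed and using the tutorial's output file name):

    samtools view -c -f 1024 6747_snippet_markduplicates.bam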

GATK tools allow you to disable the duplicate read filter with -drf DuplicateRead so you can include duplicates in analyses.
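
As a sketch, with GATK 3.x command-line syntax (the tool choice and file names here are illustrative, not prescribed by this tutorial):

    java -jar GenomeAnalysisTK.jar -T HaplotypeCaller \
        -R human_g1k_v37_decoy.fasta \
        -I 6747_snippet_markduplicates.bam \
        -drf DuplicateRead \
        -o 6747_snippet.vcf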

For a whole genome DNA sample, duplicates arise from three sources: (i) in DNA shearing from distinct molecular templates identical in insert mapping, (ii) from PCR amplification of a template (PCR duplicates), and (iii) from sequencing, e.g. optical duplicates. The tools cannot distinguish between these types of duplicates with the exception of optical duplicates. In estimating library complexity, the latter two types of duplicates are undesirable and should each factor differently.

When should we not care about duplicates? Given duplication metrics, we can make some judgement calls on the quality of our sample preparation and sequencer run. Of course, we may not expect a complex library if our samples are targeted amplicons. Also, we may expect minimal duplicates if our samples are PCR-free. Or it may be that because of the variation inherent in expression level data, e.g. RNA-Seq, duplicate marking becomes ritualistic. Unless you are certain of your edge case (amplicon sequencing, RNA-Seq allele-specific expression analysis, etc.) where duplicate marking adds minimal value, you should go ahead and mark duplicates. You may find yourself staring at an IGV session trying to visually calculate the strength of the evidence for a variant. We can pat ourselves on the back for having the forethought to systematically mark duplicates and turn on the IGV duplicate filter.

The Broad's Genomics Platform uses MarkDuplicates twice for multiplexed samples. Duplicates are flagged first per sample per lane to estimate lane-level library complexity, and second to aggregate data per sample while marking all library duplicates. In the second pass, duplicate marking tools again assess all reads for duplicates and overwrite any prior flags.

Our two duplicate flagging tools share common features but differ at the core. As the name implies, MarkDuplicatesWithMateCigar uses the MC (mate CIGAR) tag for mate alignment information. Unlike MarkDuplicates, it is a single-pass tool that requires pre-computed MC tags.

  • For RNA-Seq data mapped against the genome, use MarkDuplicates. Specifically, MarkDuplicatesWithMateCigar will refuse to process data with the large reference skips frequent in spliced RNA alignments, where the gaps are denoted with an N in the CIGAR string.
  • Both tools only consider primary mappings, even if mapped to different contigs, and ignore secondary/supplementary alignments (256 flag and 2048 flag) altogether. Because of this, before flagging duplicates, be sure to mark primary alignments according to a strategy most suited to your experimental aims. See MergeBamAlignment's PRIMARY_ALIGNMENT_STRATEGY parameter for strategies the tool considers for changing primary markings made by an aligner.
  • Both tools identify duplicate sets identically with the exception that MarkDuplicatesWithMateCigar additionally considers reads with missing mates. Missing mates occur for example when aligned reads are filtered using an interval list of genomic regions. This creates divorced reads whose mates aligned outside the targeted intervals.
  • Both tools identify duplicates as sets of read pairs that have the same unclipped alignment start and unclipped alignment end. The tools intelligently factor for discordant pair orientations given these start and end coordinates. Within a duplicate set, with the exception of optical duplicates, read pairs may have either pair orientation--F1R2 or F2R1. For optical duplicates, pairs in the set must have the same orientation. Why this is so is explained in section 4.
  • Both tools take into account clipped and gapped alignments and singly mapping reads (mate unmapped and not secondary/supplementary).
  • Each tool flags duplicates according to different priorities. MarkDuplicatesWithMateCigar prioritizes which pair to leave as the representative nondup based on the total mapped length of a pair while MarkDuplicates can prioritize based on the sum of base qualities of a pair (default) or the total mapped length of a pair. Duplicate inserts are marked at both ends.

back to top


4. Details of interest to some

To reach a high target coverage depth, some fraction of sequenced reads will, by chance alone, be duplicate reads.

Let us hope the truth of a variant never comes down to so few reads that duplicates should matter so. Keep in mind the better evidence for a variant is the presence of overlapping reads that contain the variant. Also, take estimated library complexity at face value--an estimate.

Don't be duped by identical numbers. Data from the two tools qualitatively differ.

First, let me reiterate that secondary and supplementary alignment records are skipped and never flagged as duplicate.

Given a file with no missing mates, each tool identifies the same duplicate sets from primary alignments only and therefore the same number of duplicates. To reiterate, the number of identical loci or duplicate sets and the records within each set are the same for each tool. However, each tool differs in how it decides which insert(s) within a set get flagged and thus which insert remains the representative nondup. Also, if there are ties, the tools may break them differently in that tie-breaking can depend on the sort order of the records in memory.

  • MarkDuplicates by default prioritizes the sum of base qualities for both mates of a pair. The pair with the highest sum of base qualities remains as the nondup.
  • As a consequence of using the mate's CIGAR string (provided by the MC tag), MarkDuplicatesWithMateCigar can only prioritize the total mapped reference length, as provided by the CIGAR string, in scoring duplicates in a set. The pair with the longest mapping length remains as the nondup.
  • If there are ties after applying each scoring strategy, both tools break the ties down a chain of deterministic factors starting with read name.

Duplicate metrics in brief

We can break down the metrics file into two parts: (1) a table of metrics that counts various categories of duplicates and gives the library complexity estimate, and (2) histogram values in two columns.

See DuplicationMetrics for descriptions of each metric. For paired reads, duplicates are considered for the insert. For single end reads, duplicates are considered singly for the read, increasing the likelihood of being identified as a duplicate. Given the lack of insert-level information for these singly mapping reads, the insert metrics calculations exclude these.

The library complexity estimate only considers the duplicates that remain after subtracting out optical duplicates. For the math to derive estimated library size, see formula (1.2) in Mathematical Notes on SAMtools Algorithms.
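
In essence (my paraphrase of the relation; consult the notes for the full derivation), the tool numerically solves for the estimated library size X in

    C = X * (1 - e^(-N/X))

where N is the number of read pairs examined and C is the number of distinct read pairs, i.e. pairs examined minus non-optical duplicate pairs.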

The histogram values extrapolate the calculated library complexity to a saturation curve plotting the gains in complexity if you sequence additional aliquots of the same library. The first bin's value represents the current complexity.
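
To pull the table out of the metrics file programmatically, one approach (assuming the standard Picard metrics layout, where the table immediately follows the '## METRICS CLASS' line):

    # Print the metrics class line, the header row, and the data row.
    grep -A 2 '^## METRICS CLASS' 6747_snippet_markduplicates_metrics.txt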

Pair orientation F1R2 is distinct from F2R1 for optical duplicates

Here we refer you to a five minute video illustrating what happens at the molecular level in a typical sequencing by synthesis run.

What I would like to highlight is that each strand of an insert has a chance to seed a different cluster. I will also point out, due to sequencing chemistry, F1 and R1 reads typically have better base qualities than F2 and R2 reads.

Optical duplicate designation requires the same pair orientation.

Let us work out the implications of this for a paired end, unstranded DNA library. During sequencing, within the flow cell, for a particular insert produced by sample preparation, the strands of the insert are separated and each strand has a chance to seed a different cluster: say ClusterA and ClusterB for InsertAB, and ClusterC and ClusterD for InsertCD. InsertAB and InsertCD are identical in sequence and length and map to the same loci. It is possible InsertAB and InsertCD are PCR duplicates, and also possible they represent original inserts. Each strand is then sequenced in the forward and reverse directions to give four pieces of information in total for the given insert, e.g. ReadPairA and ReadPairB for InsertAB. The pair orientations of these two read pairs are reversed--one cluster will give F1R2 and the other will give F2R1 pair orientation. Both read pairs map to exactly the same loci. Our duplicate marking tools consider ReadPairA and ReadPairB in the same duplicate set for regular duplicates but not for optical duplicates. Optical duplicates require identical pair orientation.

back to top



Comments

  • Longinotto Freiburg Member Posts: 36

    Thank you very much for this article Soo - really detailed!
    In Section 3 it is written that MarkDuplicates is run twice for multiplexed samples--once per lane-barcode, and again for the merged BAM. It also says that this first run of MarkDuplicates is not strictly necessary, since no duplicates would be marked in the first run that are not also marked in the second, merged BAM--but you do it anyway to better estimate library complexity.

    How do you estimate library complexity at the Broad using this first MarkDuplicates pass? Just with .metrics output file? What should we be looking for? :)

    Thank you, and all the best!

    • John
  • shlee Cambridge Member, Broadie, Moderator Posts: 595 admin
    edited February 2016

    Hi @Longinotto,

    The aim of the lane-level marking is different than the library-level duplicate marking. The lane-level marking is done earlier in the pipeline, before aggregation, to control for lane-specific variation. The Broad's Genomics Platform does this step if it's possible and I'm told they may consider removing this step in the future. I gather any sequencing facility would have an interest in lane-level duplication rates, for noting duplicates that arise during sequencing, for optimization purposes. For those of us on the data receiving end, we may or may not be interested in lane-level information depending on the quality assured by our sequencing provider.

    The library-level duplicate marking accomplishes two things. One, it merges all the BAMs. Two, it marks all the duplicates that we want to discount from our analyses and hide from IGV view.

    We use the library complexity estimate + the histogram values (both provided in the metrics file) in assessing the quality of our libraries IN CONJUNCTION with coverage information. For example, the Genomics Platform uses the coverage information to assess whether they have reached promised coverage (80%) at the promised depths (30x). If the coverage target is not met, then the histogram values (derived from the complexity estimate) tell us how much more of the library should be sequenced to reach the target.

    I'm glad you find the article useful. And I hope I have answered your questions. Let me know if otherwise.

    -Soo Hee

  • Longinotto Freiburg Member Posts: 36
    edited March 2016

    Sorry it took me so long to reply - I decided to come back to this after I had done library duplicate marking (which I did at the very end before calling duplicates). I didn't actually know that you could call duplicates and merge in the same step! Next time that will speed things up :)

    So I think I see now that for data analysis, duplicate marking at the lane level is interesting for some, but for others the total duplicates are all that matters. I think, in that case, the Best Practices should really only talk about marking duplicates on the whole library (which as you said will probably happen soon anyway), but maybe show a method for counting duplicates per RGID tag (to get the lane-level duplicate information). I have actually almost finished a QC tool which could do that if I wrote a stat module for it. I will let you know if that ever happens :)

    Also, I had no idea that other useful information was in the .metrics file other than the histogram data! Particularly the estimated library size! That's awesome! I think I will build into my QC tool a module for getting all the data out of a metrics file and into my user's faces so I make sure they look at it! :smiley:

    Thank you as always Soo Hee! :)

  • Nebetbastet France Member Posts: 6
    edited May 2016

    Thank you for this tutorial. Even though it is very clear, I ran into some problems. Could you help me?
    I tried to use MarkDuplicates, following your tutorial, to estimate the number of duplicates due to the sequencer. I worked with an RNA-seq 50 bp single-end dataset, sequenced on the HiSeq 4000. This dataset has lots of duplicates.

    Here are the parameters I used:

    OPTICAL_DUPLICATE_PIXEL_DISTANCE=2500
    MAX_SEQUENCES_FOR_DISK_READ_ENDS_MAP=50000
    MAX_FILE_HANDLES_FOR_READ_ENDS_MAP=8000
    SORTING_COLLECTION_SIZE_RATIO=0.25
    REMOVE_SEQUENCING_DUPLICATES=false
    TAGGING_POLICY=DontTag
    REMOVE_DUPLICATES=false
    ASSUME_SORTED=false
    DUPLICATE_SCORING_STRATEGY=SUM_OF_BASE_QUALITIES
    PROGRAM_RECORD_ID=MarkDuplicates
    PROGRAM_GROUP_NAME=MarkDuplicates
    READ_NAME_REGEX="optimized capture of last three ':' separated fields as numeric values" (default)
    VERBOSITY=INFO
    QUIET=false
    VALIDATION_STRINGENCY=STRICT
    COMPRESSION_LEVEL=5
    MAX_RECORDS_IN_RAM=500000
    CREATE_INDEX=false
    CREATE_MD5_FILE=false
    GA4GH_CLIENT_SECRETS=client_secrets.json

    It always tells me "Found 0 optical duplicate clusters." However, I could check manually that I have some sequencer duplicates. For example, I have two reads which are:

    K00201:25:H7HKGBBXX:2:2124:14387:26353 256 chr1 267802 3 24M1I25M * 0 0 CTTGGATGTTCGGGAAAGGGGGTTCTTTATCTAGGATCCTTGAAGCACCC AAFFFJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJ CC:Z:chr17 MD:Z:24A24 PG:Z:MarkDuplicates XG:i:1 NH:i:2 HI:i:0 NM:i:2 XM:i:1 XN:i:0 XO:i:1 CP:i:83229346 AS:i:-14 XS:A:- YT:Z:UU

    K00201:25:H7HKGBBXX:2:2124:14976:27232 256 chr1 267802 3 24M1I25M * 0 0 CTTGGATGTTCGGGAAAGGGGGTTCTTTATCTAGGATCCTTGAAGCACCC AAFFFJJJFJJJJJJJJJJJJJ<JJJJJJJJJJJJJJJJJJJJJJJJJJJ CC:Z:chr17 MD:Z:24A24 PG:Z:MarkDuplicates XG:i:1 NH:i:2 HI:i:0 NM:i:2 XM:i:1 XN:i:0 XO:i:1 CP:i:83229346 AS:i:-14 XS:A:- YT:Z:UU

    Both are on the same tile (2124) and their coordinates are x=14387, y=26353 for the first one and x=14976, y=27232 for the second one. They are duplicates and they are 1058 px apart (<2500), so why are they not marked as optical duplicates? Where did I make a mistake?

    Thank you in advance for your help :smile:

    Issue · Github #890, opened by Sheila, closed by sooheelee.
  • yfarjoun Broad Institute Dev Posts: 63 ✭✭

    These are secondary alignments. Can you show the primary alignments instead?

  • shlee Cambridge Member, Broadie, Moderator Posts: 595 admin

    Hi @Nebetbastet,

    The tutorial as presented uses a coordinate sorted input BAM. MarkDuplicates, given this coordinate sorted input, will ignore supplementary/secondary alignments. That is, it does not mark them. Given the alignment records you're showing us are marked with the 256 SAM flag as @yfarjoun points out, indicating they are secondary alignments, they are not considered by the tool. That you're getting a Found 0 optical duplicate clusters count may at first be puzzling but can be rationalized. Do the primary alignments of these reads align to the same locus? If they do, and have matching start sites, are they then marked as duplicate? Remember that multimapping reads can be distributed by the aligner to different loci.

    On a side note, if you care about marking duplicates for secondary alignments, Picard recently added a new feature to MarkDuplicates that allows for this. I'll update the tutorial details to reflect this. Basically, if you provide MarkDuplicates a queryname sorted BAM (which is something SortSam can do for you), then if a primary alignment is marked as duplicate (whether optical or other type of duplicate), then its (i) unmapped mate, (ii) secondary and/or (iii) supplementary alignment record will also get flagged as duplicate.

    I hope this helps. Let us know what you find.
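
    As a sketch, the queryname sort looks like this (file names are placeholders):

    java -jar picard.jar SortSam \
        INPUT=input.bam \
        OUTPUT=input_querysorted.bam \
        SORT_ORDER=queryname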

  • Nebetbastet France Member Posts: 6

    Thank you very much for your answer and for the information you gave me about secondary alignments.

    Actually, I think I understood what my problem was. I used single-end data (most of the projects in my team are single-end) and I just noticed MarkDuplicates needs paired-end data. I read the documentation too quickly and simply assumed MarkDuplicates could detect optical duplicates in both single-end and paired-end data. So my problem was quite trivial...

    I just used it in paired-end data and I could detect "optical" duplicates :)

  • shlee Cambridge Member, Broadie, Moderator Posts: 595 admin
    edited May 2016

    @Nebetbastet

    I question your conclusion that the single-ended nature of your data is the cause of any discrepancies. As far as I know, MarkDuplicates will take either single-end or paired-end data and should flag both optical and molecular duplicates for either. You yourself have said that you get many duplicates marked with your single-end RNA-Seq data.

    There are implications for duplicate flagging for these types of data that you should keep in mind and I'm going to mention them here since you bring it up.

    For paired-end data, the insert is considered for duplicate marking (both first and second reads are marked duplicate if the insert is considered duplicate). For single-end data, the reads alone are considered for duplicate marking since the data do not provide insert information. So for single-end data you'll end up with more reads flagged as duplicate than if the same inserts had been paired. This artificially inflates the number of duplicates as defined in the conventional sense, i.e. insert duplicates. For example, if you had two inserts sized 200 bp and 300 bp and the reads from each mapped identically, then one of these reads would be marked as duplicate for the single-end data. If on the other hand data is paired, then neither insert would be marked as duplicate because it is obvious that the inserts are different and thus cannot be duplicates. I think this will impact single-end data's estimated library complexity metric by artificially lowering it unless the tool has a different formula in calculating this to accommodate single-ended data. I'd have to check on this last bit.

  • Nebetbastet France Member Posts: 6

    Actually, you are right. I realized that just after I wrote it. Indeed, with my single-end data, I did find duplicates. But MarkDuplicates never found any optical duplicates! I tried many samples in many datasets (and tested several values of OPTICAL_DUPLICATE_PIXEL_DISTANCE). Honestly, I cannot believe there are 0 optical duplicates in all these samples.
    In addition, with the only two paired-end samples I tested, I found 12% and 6% of optical duplicates.

    I wonder if there is not a problem with the software. I think I will contact Picard to ask them.

    Thank you for your information about single-end reads, it's good to have that in mind.

  • yfarjoun Broad Institute Dev Posts: 63 ✭✭

    Actually, MarkDuplicates only gathers information for optical duplicates on PAIRS. Note that the metric is called READ_PAIR_OPTICAL_DUPLICATES and the comments specifically state that this is only for pairs. While this could be changed, it would involve changing the metrics, and a strong case would need to be made for that.

  • shlee Cambridge Member, Broadie, Moderator Posts: 595 admin
    edited May 2016

    Thanks @yfarjoun. That is good to know. Just to clarify further @Nebetbastet , just because optical duplicates are not counted, does not mean they are not flagged. MarkDuplicates flags all duplicates that fit the criteria then takes this pool for optical duplicate consideration. So we've learned that for single-end data, we do not segment out optical duplicates from the pool of marked duplicates for counting. For single-end data, the supposition is that we cannot apply the pair-orientation criteria that we apply to paired-end optical duplicates, and are thus doubly uncertain (uncertain of common insert, uncertain of common pair orientation), despite proximity, that a duplicate may be optical. So although they are marked as duplicate alongside other types of duplicates, they are not counted in the stdout nor in the metrics file.

    I'd like to add that in our typical conservative approach to variant discovery, we don't distinguish the duplicate types and discount all duplicates. I'm curious, @Nebetbastet, how are the optical duplicates important to your research?

  • Nebetbastet France Member Posts: 6
    edited May 2016

    Thank you @yfarjoun and @shlee. I did not know that MarkDuplicates deliberately does not count optical duplicates for single-end data. I thought there was a problem.

    I needed it because my team has acquired a new sequencer (HiSeq 4000). Previously, we worked with the HiSeq 2500. With paired-end data, I found that the proportion of "optical" duplicates is 10-500 times higher with the HiSeq 4000. That is information we wanted to know.

    In addition, as I analyze several NGS projects for researchers, I need to control data quality. I am interested in routinely using a tool to track the % of duplicates due to sequencing. This information can be useful if, for example, I run into a problem with the data; it can help figure out what happened. Even if the estimate is uncertain, it could be useful for comparisons between projects/datasets.

    In any case, thank you very much for your insightful answers.

  • shlee Cambridge Member, Broadie, Moderator Posts: 595 admin

    I'm glad we could be of help. Perhaps @yfarjoun has comments on your high duplicate rates.

  • cooperjam NIH Member Posts: 27

    Hello, I have noticed that when I use MarkDuplicates with single-end reads, I do not get the ROI table that I get when using paired-end data. Is this a limitation of SE reads or a bug? The metrics file contains the standard line about # redundant reads, % duplication, etc., but the histogram data isn't present.

    I found this thread on biostars suggesting that it might have to do with the read group. https://www.biostars.org/p/115044/

    However I have tried with and without the read group info added and still no histogram.

  • lhogstrom Cambridge, MA Member, Broadie Posts: 1

    Cooperjam, the return-on-investment projection is based on the READ_PAIRS_EXAMINED and READ_PAIR_DUPLICATES entries reported in the metrics file. The tool does not currently support library complexity predictions based on single-end reads.

  • neva 7CC Member Posts: 1

    Hi Soo Hee,

    What kind of error are you trying to capture with a larger distance, e.g. ExAmp cluster errors? Do you know of any way to definitively verify (via the sequence string, for example, instead of mapping position) that a duplicate is a sequencer duplicate and not a PCR duplicate? And one final question--have you considered counting the optical/sequencer duplicates across tiles? Couldn't a read be a sequencer duplicate of one located just across the tile boundary on an adjacent tile?

    Thanks for any insight!

  • yfarjoun Broad Institute Dev Posts: 63 ✭✭
    edited July 2016

    The larger distance is for two things: 1. The pixel definitions on the HiSeq X (and possibly the 4000) have changed, requiring us to increase the distance. 2. An effect dubbed "pad-hopping" that happens during ExAmp seems to cause templates to appear in nearby wells. The 2500 distance seems to capture most of the resulting effect.

    I am not aware of a way to "definitively" distinguish between the two classes, but our analysis shows that this is good enough for the purpose of library-size estimation (especially given that the model for library-size estimation is approximate as well).

    We have considered looking at neighboring tiles. The problem is that each sequencer might have a different geometry--not only what constitutes a neighboring tile, but also the size of each tile--and both are needed to do this properly. That complication, together with the fact that it doesn't seem to affect the result (estimated library size) and the fact that (as I alluded to before) the model for library-size estimation is approximate, made us conclude that it isn't worth the effort.

    I hope this helps.

  • xhe764 xhe764@gmail.com Member Posts: 7

    Hi Soo Hee,
    Thank you for this tutorial and your answers to various questions. I'm having trouble running MarkDuplicates on my RNA-seq data. The problem is that MarkDuplicates identifies some extremely large duplicate sets, ranging from 3 million to more than 7 million reads, and the output shows the program stuck logging "OpticalDuplicateFinder compared" messages 1,000 reads at a time, not finishing after 48 hours. Do you have any suggestions for dealing with this problem? Any comments will be highly appreciated. Xiao

    Issue · Github #1163, opened by Sheila, closed by sooheelee.
  • shlee Cambridge Member, Broadie, Moderator Posts: 595 admin

    Hi Xiao (@xhe764),

    You can skip optical duplicate finding by setting READ_NAME_REGEX=null. Let me know if this does not work for you.
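
    For instance (a sketch; file names are placeholders):

    java -jar picard.jar MarkDuplicates \
        INPUT=rnaseq.bam \
        OUTPUT=rnaseq_markduplicates.bam \
        METRICS_FILE=rnaseq_markduplicates_metrics.txt \
        READ_NAME_REGEX=null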

  • yfarjoun Broad Institute Dev Posts: 63 ✭✭

    should be READ_NAME_REGEX=null

  • yfarjoun Broad Institute Dev Posts: 63 ✭✭

    Please note that without optical duplicate finding all the duplicates will be assumed to be non-optical, which, in turn, will cause the library size estimation to be less accurate.

  • jinkeanlim Scandinavia Member Posts: 5

    I have encountered an error (below) when using MarkDuplicates. No results were produced.
    /appl/bio/picard/picard-2.6.0/bin/picard: line 5: 27651 Killed /usr/lib/jvm/java-1.8.0-oracle.x86_64/bin/java -Xmx32g -jar /appl/bio/picard/picard-2.6.0/picard/build/libs/picard.jar $@

    Could some please help?

  • shlee Cambridge Member, Broadie, Moderator Posts: 595 admin

    Hi @jinkeanlim,

    Can you please post the command that produced the error? Thanks.

  • jinkeanlim Scandinavia Member Posts: 5

    @shlee said:
    Hi @jinkeanlim,

    Can you please post the command that produced the error? Thanks.

    Dear @shlee

    The problem was solved. It was due to server restriction instead of potential bug of Picard tools.

    Thanks for your time.

  • mariel UK Member Posts: 8

    Hello,

    I am working with whole genome resequencing data for population genomic analyses.
    I am marking and removing duplicates with MarkDuplicates.
    I was wondering whether I should remove optical duplicates or keep them using READ_NAME_REGEX. Also if I am removing them, should I change the optical duplicate pixel distance as data were generated with the HiSeq X-ten?

    Thank you,

    Marie

  • shlee Cambridge Member, Broadie, Moderator Posts: 595 admin

    Hi @mariel, MarkDuplicates marks all duplicates whether optical or not.

  • Sumudu Sri Lanka Member Posts: 24

    Hi,

    I get an out-of-memory error when I try to run MarkDuplicates on my machine. I'm working on a stand-alone computer with ~3 GB RAM and ~900 GB disk space. Platform: x86_64-pc-linux-gnu (64-bit), running under Ubuntu 16.04 LTS. My initial R1 and R2 FASTQ data sets are each 1.4 GB in size. I mapped to the reference with BWA-MEM.

    When I check with "java -XX:+PrintFlagsFinal -version", MaxHeapSize shows ~1 015 021 568. Does this mean that I can use -Xmx up to 1015G?? I used -Xmx35G previously when I ran MarkDuplicates.

    And kindly explain how to set TMP_DIR in my case, since my machine is not connected to any server. Is it the disk with 900GB?

    Appreciate if you can give some advice.

    Thank you.
    Sumudu

  • Geraldine_VdAuwera Cambridge, MA Member, Administrator, Broadie Posts: 11,734 admin

    Hi there, the -Xmx setting applies to the amount of RAM memory, so you're allocating far more than what your machine can provide. Simply put, you will not be able to run full-scale work on your laptop; you need a server or a cloud platform to do this work.

    Geraldine Van der Auwera, PhD

  • mariel UK Member Posts: 8

    Hi @shlee,

    Thanks for your answer. When I run MarkDuplicates and merge several bam files (previously sorted), the output bam file should be sorted too, right?

    Thanks,

    Marie

  • Sheila Broad Institute Member, Broadie, Moderator Posts: 4,855 admin

    @mariel
    Hi Marie,

    Yes, the output BAM file should be sorted.

    -Sheila

  • fabiodp Padova, Italy Member Posts: 5

    Hi everyone,
    Do you have any information about known issues, or critical parameters to set in bowtie2, in order for MarkDuplicatesWithMateCigar to properly find duplicates?
    Hope this is the right forum... :smile:
    Thank you,
    Fabio

  • shlee Cambridge Member, Broadie, Moderator Posts: 595 admin

    Sorry @fabiodp, bowtie2 is not a tool that we use in our workflows so we have no comments for you.

  • shlee Cambridge Member, Broadie, Moderator Posts: 595 admin

    I feel bad about my curt reply @fabiodp. It's a result of my efforts to stay within scope as I have a tendency to go down rabbit holes. Let me just follow-up and say that you should not use MarkDuplicatesWithMateCigar with RNA-Seq data because the memory requirements are restrictive. Rather, we have a new tool, UmiAwareMarkDuplicatesWithMateCigar that should be better for RNA-Seq data. However, we have yet to suss out our recommendations for the tool parameters. I'm sorry I cannot be more helpful.

  • fabiodp Padova, Italy Member Posts: 5

    Don't worry @shlee, no problem at all, and I really appreciate your kindness! Actually mine are ChIP-seq data, not RNA-seq data; I will try the tool you suggested. Just to update: I saw that, for some reason I still don't understand, MarkDuplicates worked out the duplication level in both my bowtie and bowtie2 alignments, whereas MarkDuplicatesWithMateCigar did not. I guess it could be a problem with the CIGAR in bowtie SAM outputs...

  • beadoro Edinburgh, UK Member Posts: 2

    Hello,

    I have a puzzling problem with formatting the command line for MarkDuplicates such that multiple input BAM files get merged. I want to execute MarkDuplicates inside a loop over samples, and the number of BAM files per sample is not constant. So, for each sample I construct a string $INFILES that I place in the MarkDuplicates command line, for example:

    echo $INFILES
    INPUT=SK2665_2_reordered.bam INPUT=SK2665_1_reordered.bam
    

    This follows the instructions above, but generates the following error:

    [Thu Mar 09 04:21:04 CET 2017] picard.sam.markduplicates.MarkDuplicates INPUT=[SK2665_2_reordered.bam INPUT=SK2665_1_reordered.bam] <etc>
    ...
    Cannot read non-existent file: /path/to/scratch/SK2665_2_reordered.bam INPUT=SK2665_1_reordered.bam
    

    The files are definitely there (I checked). I've tried various permutations (using 'INPUT=' only once, followed by a space/comma/comma-space separated list of files): same result. So, to move forward somehow, I decided to use MergeSamFiles first and then input the merged file into MarkDuplicates. MergeSamFiles: same $INFILES spec, same error.

    But when I place the command in the script without string variable and exactly as specified in the documentation:

    java -jar picard.jar MergeSamFiles \
          I=SK2665_1_reordered.bam \
          I=SK2665_2_reordered.bam \
          O=SK2665_merged.bam
    

    it works. I am new to shell scripting and at a loss to understand what is going on here. It would be really useful if there were a way to construct MarkDuplicates commands inside the loop such that the number of input files per sample could vary.

  • yfarjoun Broad Institute Dev Posts: 63 ✭✭

    Can you do this:

    java -jar picard.jar MarkDuplicates $(echo $INFILES) OUTPUT=out....
    
  • Sheila Broad Institute Member, Broadie, Moderator Posts: 4,855 admin

    @fabiodp
    Hi Fabio,

    We don't have any experience with ChIP-seq data, but there are some other users who have been working with ChIP-seq data and GATK. You may find this thread/user helpful.

    -Sheila

  • beadoro Edinburgh, UK Member Posts: 2

    @yfarjoun

    I just tried this

    INFILES="INPUT=SK2665_2_reordered.bam INPUT=SK2665_1_reordered.bam" 
    java -jar picard.jar MarkDuplicates \
          $(echo $INFILES) \
          OUTPUT=dedupped.bam \
          <etc>
    

    and it worked. Marvellous. Many thanks for your advice.
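
    A note for readers scripting this: a bash array is another way to pass a variable number of INPUT arguments, without relying on word splitting (a sketch, assuming the same file names as above):

    INFILES=(INPUT=SK2665_1_reordered.bam INPUT=SK2665_2_reordered.bam)
    java -jar picard.jar MarkDuplicates "${INFILES[@]}" OUTPUT=dedupped.bam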

  • prasundutta87 Edinburgh Member Posts: 7

    Hi,

    How important is it to remove duplicates when looking for allele-specific expression in RNA-seq data? Different papers write differently about this step. In my view, not removing duplicates is the way to go, because highly expressed genes will get saturated with reads. Will removing duplicates actually affect variant calling for ASE analysis? And if we do use MarkDuplicates, is it OK to use REMOVE_SEQUENCING_DUPLICATES=true, to remove only PCR duplicates rather than all kinds of duplicates?

  • shlee Cambridge Member, Broadie, Moderator Posts: 595 admin

    Hi @prasundutta87 ,

    For RNA-Seq data, yes, we recommend disabling downstream tools' duplicate read filter with -drf DuplicateRead (drf=disable read filter). We do NOT recommend actually removing duplicate reads from your data, as this is a loss of information.

  • prasundutta87 Edinburgh Member Posts: 7

    So technically, I can skip MarkDuplicates altogether, right? Or should the process be to just mark duplicates (and not remove any kind of duplicates)? (The Best Practices for RNA-seq say so.) But for ASE, can this step be avoided completely?

  • shlee Cambridge Member, Broadie, Moderator Posts: 595 admin

    @prasundutta87,

    Allele-specific expression (ASE) is not a GATK workflow. It's up to you to determine how best to use our tools for your aims.

    Yes, for any process that requires comparing counts of reads as representing RNA expression, removing any reads from the analysis would be detrimental and counter to the aim.

    I'm not sure if this is still the case--@Geraldine_VdAuwera will know for sure--but certain GATK tools reject alignment files for which duplicates are unmarked.

  • Geraldine_VdAuwera Cambridge, MA Member, Administrator, Broadie Posts: 11,734 admin

    Right, for ASE people typically don't discount duplicates -- either by not marking them in the first place or by setting the tool to ignore the marking. See this paper for more discussion.

    Geraldine Van der Auwera, PhD

  • shlee Cambridge Member, Broadie, Moderator Posts: 595 admin

    Thanks for this 2015 paper @Geraldine_VdAuwera. I'm going to have to read it. Skimming, it looks like the relevant passage is:

    However, we observe consistent albeit infrequent signs of PCR artifacts in the Geuvadis AE data, especially affecting lowly covered sites — where duplicates are mostly true PCR duplicates, since saturation is unlikely. Removing duplicate reads reduces technical sources of AE at these sites, while having a minimal effect on highly covered, read-saturated SNPs (Figure S4e in Additional file 4). Thus, we suggest that removing duplicate reads is a good default approach for AE analysis, and it is implemented as a default in the GATK tool. However, it is important that the retained read is either chosen randomly or by base quality, and not by mapping score, so as not to bias towards the reference allele.

    My thoughts--this approach is definitely relevant to samples with lower library complexity and less relevant for samples with high library complexity, as would be obtained from PCR-free preps.

  • prasundutta87 Edinburgh Member Posts: 7

    Thank you for your thoughts. I had gone through this paper and stumbled on that paragraph as well. I may need to check the relevance of this approach based on the library prep. Thank you.

  • prasundutta87 Edinburgh Member Posts: 7

    Is there a way to get an email whenever there is an update in any GATK community discussion like this one? (I log in through my Gmail account.) I just revisited this discussion and found a new reply that I only noticed after 8 days.

  • Geraldine_VdAuwera Cambridge, MA Member, Administrator, Broadie Posts: 11,734 admin
    In the FAQs there is an article with instructions for setting notifications.

    Geraldine Van der Auwera, PhD

  • prasundutta87 Edinburgh Member Posts: 7
    Thanks. I changed my notification preferences.
  • qtian Changsha, China Member Posts: 2

    Hello Dear,
    This tutorial is of great help to me, and I have a question about this part: if I mark duplicates following the instructions for the MarkDuplicates tool and set the REMOVE_DUPLICATES parameter to false, the OUTPUT=6747_snippet_markduplicates.bam file keeps the duplicates. Can I remove the marked duplicates later, directly, without re-marking duplicates in 6747_snippet_markduplicates.bam? And how? Would
    "java -Xmx32G -jar picard.jar MarkDuplicates \
    INPUT=6747_snippet_markduplicateswithmatecigar.bam \
    OUTPUT=6747_snippet_removed_duplicateswithmatecigar.bam \
    REMOVE_DUPLICATES=true"
    work?

  • yfarjoun Broad Institute Dev Posts: 63 ✭✭

    Your method would work, but would be computationally expensive, since it would re-mark duplicates.

    If you have marked duplicates already, a faster approach would be to use

    samtools view -F 1024 ......

    see the samtools man page (http://www.htslib.org/doc/samtools.html) and the explain sam flags page (https://broadinstitute.github.io/picard/explain-flags.html)

  • qtian Changsha, China Member Posts: 2

    That's helpful, thank you very much!
