
MarkDuplicates does not finish within 48 hours

xhe764 (xhe764@gmail.com) Member

I've been trying to run MarkDuplicates on a BAM file produced by MergeBamAlignment from a paired-end RNA-seq dataset. However, the program did not finish even after running for 48 hours on the Stampede supercomputer system. The BAM file is 6.5 GB. Below is the command I used:

$WORK/tools/jre1.8.0_91/bin/java -Xmx128G -jar $WORK/tools/picard-tools-2.4.1/picard.jar MarkDuplicates \
INPUT=$WORK/GATK/XHD1/XHD1-MergeBamAlignment.bam OUTPUT=$WORK/GATK/XHD1/XHD1_markduplicates.bam \
METRICS_FILE=$WORK/GATK/XHD1/XHD1_markduplicates_metrics.txt OPTICAL_DUPLICATE_PIXEL_DISTANCE=250 \
CREATE_INDEX=true TMP_DIR=$SCRATCH/XHD1-MD
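One workaround I'm aware of, though I haven't verified that it helps here, is that MarkDuplicates can skip optical-duplicate detection entirely by setting READ_NAME_REGEX=null (the optical duplicate metrics are then simply not computed). The same command with that option would look like:

$WORK/tools/jre1.8.0_91/bin/java -Xmx128G -jar $WORK/tools/picard-tools-2.4.1/picard.jar MarkDuplicates \
INPUT=$WORK/GATK/XHD1/XHD1-MergeBamAlignment.bam OUTPUT=$WORK/GATK/XHD1/XHD1_markduplicates.bam \
METRICS_FILE=$WORK/GATK/XHD1/XHD1_markduplicates_metrics.txt READ_NAME_REGEX=null \
CREATE_INDEX=true TMP_DIR=$SCRATCH/XHD1-MD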

After running the program on each chromosome's BAM file separately, I found that the reads mapped to chr 17 bog down the program. The output shows messages like the following:
INFO 2016-07-18 17:10:24 OpticalDuplicateFinder compared 37,000 ReadEnds to others. Elapsed time: 00:26:52s. Time for last 1,000: 42s. Last read position: 0:7,276
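For reference, this is roughly how the per-chromosome BAM files can be produced (a sketch using samtools; the paths are placeholders, and use "chr17" instead of "17" if the reference uses chr-prefixed contig names):

# Index the merged BAM, then extract a single chromosome into its own BAM
samtools index $WORK/GATK/XHD1/XHD1-MergeBamAlignment.bam
samtools view -b $WORK/GATK/XHD1/XHD1-MergeBamAlignment.bam 17 > $WORK/GATK/XHD1/XHD1-chr17.bam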

It appears there are several extremely large sets of duplicates mapped to chr 17.
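To get a rough sense of these pile-ups, one can count the records overlapping a single position with samtools (a sketch; the coordinate below is only a placeholder, not taken from the log above):

# Count reads overlapping one locus in the chr17 subset
samtools index $WORK/GATK/XHD1/XHD1-chr17.bam
samtools view -c $WORK/GATK/XHD1/XHD1-chr17.bam 17:1000000-1000000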

Are there any solutions to this problem? Any help would be highly appreciated.
