
GATK2 ReduceReads gets stuck on large (100 GB) BAM files after a few hours

sina Member
edited October 2012 in Ask the GATK team

Hi Team,

I have been running GATK2 ReduceReads on a large (100 GB) BAM file. Although it starts very smoothly and predicts about a week to finish the task, after a few hours it gets completely stuck. We first suspected a garbage collection (or Java memory allocation) issue, but the logs show that garbage collection is working fine.

The command is (the behavior is similar for smaller -Xms and -Xmx values):

java -Xmx30g -Xms30g -XX:+PrintGCTimeStamps -XX:+UseParallelOldGC -XX:+PrintGCDetails -Xloggc:gc.log -verbose:gc -jar $path $ref -T ReduceReads -I input.bam -o output.bam

The first few lines of the log file are

INFO 01:12:21,541 TraversalEngine - chr1:1094599 5.89e+05 9.9 m 16.8 m 0.0% 19.4 d 19.4 d
INFO 01:13:21,628 TraversalEngine - chr1:2112411 9.44e+05 10.9 m 11.6 m 0.1% 11.2 d 11.2 d
INFO 01:14:22,065 TraversalEngine - chr1:3051535 1.29e+06 11.9 m 9.3 m 0.1% 8.5 d 8.5 d
INFO 01:15:22,297 TraversalEngine - chr1:4084547 1.59e+06 12.9 m 8.1 m 0.1% 6.9 d 6.9 d
INFO 01:16:24,130 TraversalEngine - chr1:4719991 1.82e+06 13.9 m 7.7 m 0.2% 6.4 d 6.4 d

but after a short while it gets completely stuck: at position 121485073 of chromosome 1 there is almost no progress at all, and the estimated finish time has risen past 11 weeks and is still increasing.

Any idea what could be causing this, and how we can solve the problem? The same command runs successfully on small (less than 5 GB) BAM files, though.

Thanks in advance.

Post edited by Geraldine_VdAuwera

Best Answer


  • Geraldine_VdAuwera Cambridge, MA Member, Administrator, Broadie admin

    Have you looked at the pileup of reads in the area where it gets stuck? How does the coverage & quality look?

  • sina Member

    I should clarify that it is not the case that it gets stuck on one particular read or base pair. The time spent on each base pair increases gradually (and looks exponential). The base pair at position 121485068 takes less than a minute to analyze, this rises to 5 minutes at 121485077, and then increases gradually until it reaches about 8 hours (and still running) at 121485336. The whole dataset has about 40x coverage, but the coverage is much higher around this particular location: at 121485336 it is about 900 reads.
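    The "looks exponential" observation can be sanity-checked with a quick back-of-the-envelope calculation using the timings quoted above (a sketch only; the 1-minute and 8-hour figures are approximate measurements from the thread, not exact):

    ```python
    # Timings reported in the thread (approximate):
    # ~1 minute per base pair at position 121485068,
    # ~8 hours (480 minutes) per base pair at position 121485336.
    pos_start, t_start = 121485068, 1.0    # minutes per base pair
    pos_end, t_end = 121485336, 480.0      # ~8 hours in minutes
    span = pos_end - pos_start             # base pairs covered

    # If the slowdown were exponential, the implied per-base growth factor is:
    rate = (t_end / t_start) ** (1.0 / span)
    print(f"span: {span} bp, implied per-base growth factor: {rate:.4f}")
    # prints: span: 268 bp, implied per-base growth factor: 1.0233
    ```

    So even a modest ~2% increase per base pair, compounded over a few hundred positions, is enough to turn a sub-minute step into an 8-hour one.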

  • Geraldine_VdAuwera Cambridge, MA Member, Administrator, Broadie admin
    Accepted Answer

    That sounds like a known problem that affects regions with very high coverage. We have a fix for it, but it won't be publicly available until the next minor version release (2.2).

  • sina Member

    Is there any estimate regarding the release time?

  • ebanks Broad Institute Member, Broadie, Dev ✭✭✭✭

    Actually, I don't know whether the change Geraldine refers to will make it into the 2.2 release; it's possible that you will have to wait until 2.3. We don't have exact time frames for releases, but hopefully 2.2 will come out soon, with 2.3 following 6-8 weeks after that.
