PrintReads keeps running out of memory (GC overhead limit exceeded)

I am running base recalibration on human whole-genome data and keep running out of memory. The job runs on a cluster with plenty of memory, but no matter how much I give PrintReads, it uses it all. After trying 1 and 2 cores with a 16GB Java heap (-Xmx), in the most recent attempt I requested 5 cores, each with 20GB of memory, and executed the following command:

java -Xmx18g -jar GenomeAnalysisTK-3.4-46.jar \
    -T PrintReads \
    -nct 5 \
    -R /gatk-resource/human_g1k_v37_decoy.fasta \
    -I 2-18-1.BWA_aln_umi.rmdupumi.mdup.sorted.realigned.bam \
    -BQSR 2-18-1.BWA_aln_umi.rmdupumi.mdup.sorted.realigned.recal.table \
    -o 2-18-1.BWA_aln_umi.rmdupumi.mdup.sorted.realigned.recalibrated.bam

As before, I received the error:
java.lang.OutOfMemoryError: GC overhead limit exceeded

The cluster scheduler (LSF) reports:
Max Memory: 19267 MB
so the requested 20GB is clearly being used.
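
For reference, the job is submitted roughly like this; run_printreads.sh is just a hypothetical wrapper name for the java command above, and whether rusage[mem] is interpreted per core or per job depends on our cluster's LSF configuration, so treat the values as placeholders:

bsub -n 5 -R "span[hosts=1] rusage[mem=20000]" -o printreads.%J.log ./run_printreads.sh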

For what it's worth, the program output seems to show that PrintReads is getting stuck on a relatively innocuous region of chromosome 11 with about 40X coverage. I can "samtools view" the suspect coordinate (+/- 100 bases) without issue.

INFO 13:13:42,792 ProgressMeter - 11:98220738 3.81160926E8 5.9 h 55.0 s 61.0% 9.6 h 3.8 h
INFO 13:14:12,793 ProgressMeter - 11:100977123 3.81760933E8 5.9 h 55.0 s 61.1% 9.6 h 3.7 h
INFO 13:14:50,040 ProgressMeter - 11:101826859 3.81860934E8 5.9 h 55.0 s 61.1% 9.6 h 3.8 h
INFO 13:15:31,039 ProgressMeter - 11:101826859 3.81860934E8 5.9 h 55.0 s 61.1% 9.7 h 3.8 h
INFO 13:16:34,404 ProgressMeter - 11:101826859 3.81860934E8 5.9 h 55.0 s 61.1% 9.7 h 3.8 h
INFO 13:17:07,201 ProgressMeter - 11:101826859 3.81860934E8 5.9 h 55.0 s 61.1% 9.7 h 3.8 h
INFO 13:17:39,947 ProgressMeter - 11:101826859 3.81860934E8 5.9 h 56.0 s 61.1% 9.7 h 3.8 h
INFO 13:18:39,404 ProgressMeter - 11:101826859 3.81860934E8 6.0 h 56.0 s 61.1% 9.8 h 3.8 h
INFO 13:19:42,125 ProgressMeter - 11:101826859 3.81860934E8 6.0 h 56.0 s 61.1% 9.8 h 3.8 h
INFO 13:20:15,785 ProgressMeter - 11:101826859 3.81860934E8 6.0 h 56.0 s 61.1% 9.8 h 3.8 h
INFO 13:22:12,643 ProgressMeter - 11:101826859 3.81860934E8 6.0 h 56.0 s 61.1% 9.8 h 3.8 h
INFO 13:23:15,204 ProgressMeter - 11:101826859 3.81860934E8 6.0 h 56.0 s 61.1% 9.9 h 3.8 h
[then it dies]

I am now switching back to a single thread and requesting 80GB of memory on a single node. Any other ideas?
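
For what it's worth, the single-threaded variant I plan to try looks roughly like this; the heap size is a guess on my part, chosen to leave some headroom below the 80GB node request, and the rest matches the command above:

java -Xmx72g -jar GenomeAnalysisTK-3.4-46.jar \
    -T PrintReads \
    -R /gatk-resource/human_g1k_v37_decoy.fasta \
    -I 2-18-1.BWA_aln_umi.rmdupumi.mdup.sorted.realigned.bam \
    -BQSR 2-18-1.BWA_aln_umi.rmdupumi.mdup.sorted.realigned.recal.table \
    -o 2-18-1.BWA_aln_umi.rmdupumi.mdup.sorted.realigned.recalibrated.bam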
