

Picard SamToFastq problem : Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit

emixaMemixaM CanadaMember Posts: 33

Hi GATK team!

I have an issue with Picard SamToFastq.

I ran this command on a 16 GB BAM (on the latest version of Picard; I tried several versions):

java -Djava.io.tmpdir=/scratch -XX:ParallelGCThreads=8 -Dsamjdk.use_async_io=true -Dsamjdk.buffer_size=4194304 -Xmx12G -jar /picard/dist/picard.jar SamToFastq \
  VALIDATION_STRINGENCY=LENIENT \
  INPUT=input.bam \
  FASTQ=output.pair1.fastq.gz \
  SECOND_END_FASTQ=output.pair2.fastq.gz

With this same command line, I have been able to process WES BAMs two or three times larger without any memory complaints at all.

On this occasion, I get the result below (I also tweaked the MAX_RECORDS_IN_RAM argument and switched from a machine with 24 GB of RAM to one with 48 GB):

[Wed Sep 16 12:17:28 EDT 2015] picard.sam.SamToFastq INPUT=input.bam FASTQ=output.pair1.fastq.gz SECOND_END_FASTQ=output.pair2.fastq.gz VALIDATION_STRINGENCY=LENIENT    OUTPUT_PER_RG=false RG_TAG=PU RE_REVERSE=true INTERLEAVE=false INCLUDE_NON_PF_READS=false READ1_TRIM=0 READ2_TRIM=0 INCLUDE_NON_PRIMARY_ALIGNMENTS=false VERBOSITY=INFO QUIET=false COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false CREATE_MD5_FILE=false GA4GH_CLIENT_SECRETS=client_secrets.json
[Wed Sep 16 12:17:28 EDT 2015] Executing as emixaM@glop on Linux 2.6.32-504.23.4.el6.x86_64 amd64; Java HotSpot(TM) 64-Bit Server VM 1.7.0_60-ea-b07; Picard version: 1.139(fd19c75b9a42d82cb57d45b53e1f93f0a3588541_1442367311) JdkDeflater
INFO    2015-09-16 12:17:36 SamToFastq  Processed     1 000 000 records.  Elapsed time: 00:00:07s.  Time for last 1 000 000:    7s.  Last read position: chr1:19 244 878
INFO    2015-09-16 12:17:45 SamToFastq  Processed     2 000 000 records.  Elapsed time: 00:00:17s.  Time for last 1 000 000:    9s.  Last read position: chr1:38 435 197
INFO    2015-09-16 12:17:57 SamToFastq  Processed     3 000 000 records.  Elapsed time: 00:00:29s.  Time for last 1 000 000:   12s.  Last read position: chr1:63 999 267
INFO    2015-09-16 12:18:10 SamToFastq  Processed     4 000 000 records.  Elapsed time: 00:00:42s.  Time for last 1 000 000:   13s.  Last read position: chr1:101 467 007
INFO    2015-09-16 12:18:17 SamToFastq  Processed     5 000 000 records.  Elapsed time: 00:00:48s.  Time for last 1 000 000:    6s.  Last read position: chr1:144 192 193
INFO    2015-09-16 12:18:36 SamToFastq  Processed     6 000 000 records.  Elapsed time: 00:01:07s.  Time for last 1 000 000:   19s.  Last read position: chr1:154 917 552
INFO    2015-09-16 12:18:43 SamToFastq  Processed     7 000 000 records.  Elapsed time: 00:01:14s.  Time for last 1 000 000:    7s.  Last read position: chr1:171 501 716
INFO    2015-09-16 12:19:01 SamToFastq  Processed     8 000 000 records.  Elapsed time: 00:01:33s.  Time for last 1 000 000:   18s.  Last read position: chr1:201 869 422
INFO    2015-09-16 12:19:08 SamToFastq  Processed     9 000 000 records.  Elapsed time: 00:01:40s.  Time for last 1 000 000:    6s.  Last read position: chr1:230 511 835
INFO    2015-09-16 12:19:15 SamToFastq  Processed    10 000 000 records.  Elapsed time: 00:01:46s.  Time for last 1 000 000:    6s.  Last read position: chr2:20 182 092
INFO    2015-09-16 12:19:57 SamToFastq  Processed    11 000 000 records.  Elapsed time: 00:02:28s.  Time for last 1 000 000:   41s.  Last read position: chr2:52 391 334
INFO    2015-09-16 12:20:55 SamToFastq  Processed    12 000 000 records.  Elapsed time: 00:03:26s.  Time for last 1 000 000:   58s.  Last read position: chr2:87 420 712
[Wed Sep 16 12:38:29 EDT 2015] picard.sam.SamToFastq done. Elapsed time: 21,03 minutes.
Runtime.totalMemory()=11453595648
To get help, see http://broadinstitute.github.io/picard/index.html#GettingHelp
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
    at htsjdk.samtools.BinaryTagCodec.readNullTerminatedString(BinaryTagCodec.java:414)
    at htsjdk.samtools.BinaryTagCodec.readSingleValue(BinaryTagCodec.java:318)
    at htsjdk.samtools.BinaryTagCodec.readTags(BinaryTagCodec.java:282)
    at htsjdk.samtools.BAMRecord.decodeAttributes(BAMRecord.java:308)
    at htsjdk.samtools.BAMRecord.getAttribute(BAMRecord.java:288)
    at htsjdk.samtools.SAMRecord.getReadGroup(SAMRecord.java:691)
    at picard.sam.SamToFastq.doWork(SamToFastq.java:166)
    at picard.cmdline.CommandLineProgram.instanceMain(CommandLineProgram.java:206)
    at picard.cmdline.PicardCommandLine.instanceMain(PicardCommandLine.java:95)
    at picard.cmdline.PicardCommandLine.main(PicardCommandLine.java:105)
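From what I have read, "GC overhead limit exceeded" means the JVM spent more than 98% of its time in garbage collection while reclaiming less than 2% of the heap. One thing I could try is disabling that safeguard and raising the heap (a sketch only; -XX:-UseGCOverheadLimit and the larger -Xmx are standard HotSpot options, but whether they actually resolve this is an assumption on my part):

```shell
# Sketch: turn off the GC-overhead safeguard and give the JVM more heap.
# If memory genuinely runs out, this only converts the failure into a
# plain OutOfMemoryError rather than fixing it.
java -XX:-UseGCOverheadLimit -Xmx24G \
  -Djava.io.tmpdir=/scratch \
  -jar /picard/dist/picard.jar SamToFastq \
  VALIDATION_STRINGENCY=LENIENT \
  INPUT=input.bam \
  FASTQ=output.pair1.fastq.gz \
  SECOND_END_FASTQ=output.pair2.fastq.gz
```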

Do you have any idea what I could change to avoid hitting this overhead limit?

Cheers!

Issue · Github (by Sheila) — Issue Number: 172, State: closed, Closed By: chandrans