PairHMM bug in HaplotypeCaller?

After analyzing 20% of a 173 GB BAM file, HaplotypeCaller gave this error: "java.lang.IllegalStateException: PairHMM Log Probability cannot be greater than 0: haplotype (..a long list of numbers..), result: Infinity".
The BAM file was fixmated with Picard tools and base-recalibrated.
I used the dbSNP .vcf file from the bundle, with -rf BadCigar and -nct 8 as settings (all the CPUs on the machine).
GATK 2.7.2 and Java 1.7.0_25.
The program asked me to report this (possible) bug to the forum.
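For context, the invocation was roughly of the following shape. This is a reconstruction from the settings listed above; the reference, BAM, dbSNP, and output file names are placeholders, not the actual paths used:

```shell
# Approximate GATK 2.7 HaplotypeCaller command (file names are placeholders):
java -jar GenomeAnalysisTK.jar \
    -T HaplotypeCaller \
    -R human_g1k_v37.fasta \
    -I sample.fixmate.recal.bam \
    --dbsnp dbsnp_137.b37.vcf \
    -rf BadCigar \
    -nct 8 \
    -o sample.raw.vcf
```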

Best Answer


  • Re-running HaplotypeCaller with the same settings over the chromosomal interval of the original error (2:8478874 was the last variant noted) did not replicate the error, so I cannot send a snippet.
    I have a hunch that the -nct 8 setting on an 8-core machine may be the problem.
    I am now running the script without the -nct setting, but that demands some patience: the projected run-time is 3 days.

  • This time HC produced a VCF file after 60 hrs. 12.3% of the reads were filtered out (HCMappingQualityFilter).

  • I'm in between definite that the -nct thread setting is the problem.
    Running HC on the base-recalibrated and left-aligned BAM file with the -nct 6 setting gave a new error, java.lang.NullPointerException, after running for 58 minutes. Interesting, because I had just updated to Ubuntu Saucy Salamander, after which IGV launched much faster.
    Could there be a Java versus -nct setting problem?

  • Geraldine_VdAuwera (Cambridge, MA; Member, Administrator, Broadie)

    Hi Hans, I'm not sure what "in between definite" means... Is the error reproducible at all?

  • Hi Geraldine, maybe not definite in a scientifically robust way. But each time I ran HC with the -nct 6 option on my 8-core machine, I got an error in a different region of the genome and with a different error type. That's why I blame the threads.

  • Geraldine_VdAuwera (Cambridge, MA; Member, Administrator, Broadie)

    Ah, I see. That's very possible. We don't really see those kinds of errors because we run everything using Queue scatter-gather, which retries any jobs that fail, so as long as the error is not persistent in a given region/job, it all eventually completes successfully. If you have the opportunity to use Queue, I strongly recommend it. This is not to say we're giving up on fixing these Heisenbugs, but it is not a priority, to be honest.
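The "retry any jobs that fail" behavior of Queue's scatter-gather can be approximated with a plain shell wrapper. This is only a sketch of the idea, not Queue itself; the function name, retry count, and the commented-out HC invocation are illustrative assumptions:

```shell
#!/bin/sh
# Run a command, retrying up to a maximum number of attempts.
# Mimics the retry-on-transient-failure idea behind Queue scatter-gather.
run_with_retries() {
    max=$1; shift
    n=0
    until "$@"; do
        n=$((n + 1))
        if [ "$n" -ge "$max" ]; then
            echo "giving up after $max attempts: $*" >&2
            return 1
        fi
        echo "retry $n of $max: $*" >&2
    done
}

# Scatter would then run one such job per interval, e.g. (placeholder paths):
#   run_with_retries 3 java -jar GenomeAnalysisTK.jar -T HaplotypeCaller \
#       -R ref.fasta -I sample.bam -L "$chrom" -o "$chrom.vcf"
```

A transient -nct crash would then only kill one interval's job, which gets re-run, instead of the whole multi-day walk over the genome.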
