
OpenMP multi-threaded AVX-accelerated native PairHMM in HaplotypeCaller not supported

I'm unable to get a multithreaded instance of PairHMM to work in HaplotypeCaller with JDK 1.8 on my local machine (Intel Core i7-4770K, 4 cores / 8 threads) running macOS 10.12.6. I've tried both a pre-built version from the Docker hub and one that I built on my local machine, and in both cases I get the warning:
"NativeLibraryLoader - Unable to find native library: native/libgkl_pairhmm_omp.dylib"

I've tried the "-pairHMM AVX_LOGLESS_CACHING_OMP" option, but I then get:
"A USER ERROR has occurred: Machine does not support OpenMP AVX PairHMM.
PairHMM - OpenMP multi-threaded AVX-accelerated native PairHMM implementation is not supported"

I suspect this might be caused by having a version of clang that doesn't support OpenMP, but I'm not sure. I've tried the Homebrew gcc and g++ compilers as well as an OpenMP-enabled clang (http://openmp.llvm.org), to no avail. Or maybe the i7-4770K simply can't support the OpenMP PairHMM?
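
In case it helps with diagnosis, here's a quick sketch of how I checked whether my compilers actually accept OpenMP (the compiler names are just what Homebrew installed for me; adjust as needed):
# Does the compiler define _OPENMP when given -fopenmp? Apple's stock clang rejects the flag outright.
echo | clang -x c -fopenmp -dM -E - | grep -i _openmp
echo | gcc-7 -x c -fopenmp -dM -E - | grep -i _openmp   # Homebrew gcc; adjust the version suffix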

Here's my command:
gatk --java-options "-Xmx20g -DGATK_STACKTRACE_ON_USER_EXCEPTION=true" HaplotypeCaller \
-R /Volumes/HighSierra/Users/tschappe/Documents/P.nicotianae_assembly/ASM148301v1/GCA_001483015.1_ASM148301v1_genomic.fna \
-I /Volumes/HighSierra/Users/tschappe/Documents/P.nicotianae_assembly/race0_2_sorted.bam \
-O /Volumes/HighSierra/Users/tschappe/Documents/P.nicotianae_assembly/race0.g.vcf.gz \
-pairHMM AVX_LOGLESS_CACHING_OMP

Here's the entire error stack trace:
16:28:17.652 INFO NativeLibraryLoader - Loading libgkl_compression.dylib from jar:file:/Applications/gatk-4.0/gatk/build/libs/gatk-package-4.0.0.0-37-g1316033-SNAPSHOT-local.jar!/com/intel/gkl/native/libgkl_compression.dylib
16:28:17.731 INFO HaplotypeCaller - ------------------------------------------------------------
16:28:17.731 INFO HaplotypeCaller - The Genome Analysis Toolkit (GATK) v4.0.0.0-37-g1316033-SNAPSHOT
16:28:17.731 INFO HaplotypeCaller - For support and documentation go to https://software.broadinstitute.org/gatk/
16:28:17.731 INFO HaplotypeCaller - Executing as [email protected] on Mac OS X v10.12.6 x86_64
16:28:17.731 INFO HaplotypeCaller - Java runtime: Java HotSpot(TM) 64-Bit Server VM v1.8.0_161-b12
16:28:17.731 INFO HaplotypeCaller - Start Date/Time: January 24, 2018 4:28:17 PM EST
16:28:17.731 INFO HaplotypeCaller - ------------------------------------------------------------
16:28:17.731 INFO HaplotypeCaller - ------------------------------------------------------------
16:28:17.732 INFO HaplotypeCaller - HTSJDK Version: 2.14.1
16:28:17.732 INFO HaplotypeCaller - Picard Version: 2.17.2
16:28:17.732 INFO HaplotypeCaller - HTSJDK Defaults.COMPRESSION_LEVEL : 1
16:28:17.732 INFO HaplotypeCaller - HTSJDK Defaults.USE_ASYNC_IO_READ_FOR_SAMTOOLS : false
16:28:17.732 INFO HaplotypeCaller - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_SAMTOOLS : true
16:28:17.732 INFO HaplotypeCaller - HTSJDK Defaults.USE_ASYNC_IO_WRITE_FOR_TRIBBLE : false
16:28:17.732 INFO HaplotypeCaller - Deflater: IntelDeflater
16:28:17.732 INFO HaplotypeCaller - Inflater: IntelInflater
16:28:17.732 INFO HaplotypeCaller - GCS max retries/reopens: 20
16:28:17.732 INFO HaplotypeCaller - Using google-cloud-java patch 6d11bef1c81f885c26b2b56c8616b7a705171e4f from https://github.com/droazen/google-cloud-java/tree/dr_all_nio_fixes
16:28:17.732 INFO HaplotypeCaller - Initializing engine
16:28:18.287 INFO HaplotypeCaller - Done initializing engine
16:28:18.332 INFO HaplotypeCallerEngine - Disabling physical phasing, which is supported only for reference-model confidence output
16:28:18.877 INFO NativeLibraryLoader - Loading libgkl_utils.dylib from jar:file:/Applications/gatk-4.0/gatk/build/libs/gatk-package-4.0.0.0-37-g1316033-SNAPSHOT-local.jar!/com/intel/gkl/native/libgkl_utils.dylib
16:28:18.880 WARN NativeLibraryLoader - Unable to find native library: native/libgkl_pairhmm_omp.dylib
16:28:18.880 INFO HaplotypeCaller - Shutting down engine
[January 24, 2018 4:28:18 PM EST] org.broadinstitute.hellbender.tools.walkers.haplotypecaller.HaplotypeCaller done. Elapsed time: 0.02 minutes.
Runtime.totalMemory()=740294656


A USER ERROR has occurred: Machine does not support OpenMP AVX PairHMM.


org.broadinstitute.hellbender.exceptions.UserException$HardwareFeatureException: Machine does not support OpenMP AVX PairHMM.
at org.broadinstitute.hellbender.utils.pairhmm.VectorLoglessPairHMM.&lt;init&gt;(VectorLoglessPairHMM.java:78)
at org.broadinstitute.hellbender.utils.pairhmm.PairHMM$Implementation.lambda$static$4(PairHMM.java:64)
at org.broadinstitute.hellbender.utils.pairhmm.PairHMM$Implementation.makeNewHMM(PairHMM.java:120)
at org.broadinstitute.hellbender.tools.walkers.haplotypecaller.PairHMMLikelihoodCalculationEngine.&lt;init&gt;(PairHMMLikelihoodCalculationEngine.java:141)
at org.broadinstitute.hellbender.tools.walkers.haplotypecaller.AssemblyBasedCallerUtils.createLikelihoodCalculationEngine(AssemblyBasedCallerUtils.java:169)
at org.broadinstitute.hellbender.tools.walkers.haplotypecaller.HaplotypeCallerEngine.initialize(HaplotypeCallerEngine.java:191)
at org.broadinstitute.hellbender.tools.walkers.haplotypecaller.HaplotypeCallerEngine.&lt;init&gt;(HaplotypeCallerEngine.java:160)
at org.broadinstitute.hellbender.tools.walkers.haplotypecaller.HaplotypeCallerEngine.&lt;init&gt;(HaplotypeCallerEngine.java:151)
at org.broadinstitute.hellbender.tools.walkers.haplotypecaller.HaplotypeCaller.onTraversalStart(HaplotypeCaller.java:197)
at org.broadinstitute.hellbender.engine.GATKTool.doWork(GATKTool.java:891)
at org.broadinstitute.hellbender.cmdline.CommandLineProgram.runTool(CommandLineProgram.java:136)
at org.broadinstitute.hellbender.cmdline.CommandLineProgram.instanceMainPostParseArgs(CommandLineProgram.java:179)
at org.broadinstitute.hellbender.cmdline.CommandLineProgram.instanceMain(CommandLineProgram.java:198)
at org.broadinstitute.hellbender.Main.runCommandLineProgram(Main.java:152)
at org.broadinstitute.hellbender.Main.mainEntry(Main.java:195)
at org.broadinstitute.hellbender.Main.main(Main.java:275)

Issue · Github
by Sheila

Issue Number: 2882
State: closed
Closed By: vdauwera

Comments

  • SkyWarrior (Turkey) Member ✭✭✭

    The OpenMP implementation is not supported on macOS yet; that's what the Intel GKL GitHub page says. You don't need to force it with that parameter. Instead, use the default and it will fall back to the normal AVX-accelerated PairHMM. If it doesn't, then there is a problem.

    On my Mac it falls back to the AVX-accelerated PairHMM with no trouble. OpenMP works flawlessly on my Ubuntu workstation, though.
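
    For reference, something along these lines should quietly fall back to the non-OMP AVX implementation on a Mac (just a sketch based on the command above, with placeholder file names; either leave -pairHMM off so the default picks the fastest supported implementation, or ask for the plain AVX version explicitly):

    gatk --java-options "-Xmx20g" HaplotypeCaller \
    -R reference.fna \
    -I sample_sorted.bam \
    -O sample.g.vcf.gz \
    -pairHMM AVX_LOGLESS_CACHING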

  • A bit disappointing, but I'll be able to work around it with an Ubuntu machine. I was able to get the single-threaded AVX PairHMM going before I tried to force OpenMP on the Mac, but it would take far too long to run on the data I have. Is there a timeline for an OpenMP implementation on Mac?

    Thanks for the quick response!

  • SkyWarrior (Turkey) Member ✭✭✭

    I don't think there is much difference in performance between the two unless you request a high number of threads per native AVX call, and even that, I think, only marginally affects performance.

  • Can you clarify? Are you saying that there would only be a marginal improvement in performance from enabling 8 cores compared to 1 core?
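
    For context, on the Ubuntu machine I was planning to request the threads roughly like this (just a sketch with placeholder file names; I'm assuming --native-pair-hmm-threads is the right knob for the OpenMP PairHMM):

    gatk --java-options "-Xmx20g" HaplotypeCaller \
    -R reference.fna \
    -I sample_sorted.bam \
    -O sample.g.vcf.gz \
    -pairHMM AVX_LOGLESS_CACHING_OMP \
    --native-pair-hmm-threads 8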

  • Sheila (Broad Institute) Member, Broadie, Moderator, admin

    @tlschapp
    Hi,

    I am checking with the team and will get back to you soon with more information.

    -Sheila

  • Geraldine_VdAuwera (Cambridge, MA) Member, Administrator, Broadie, admin

    @tlschapp Have you considered moving to a cloud platform? That will allow you to dramatically reduce wall time across the entire pipeline. We can provide guidance if needed, and Google is currently sponsoring free compute credits for anyone who wants to try out our FireCloud portal (which itself is completely free and has all the best practices pipelines preloaded).
