
Detailed documentation of how GATK tools employ SPARK

psb21 Member

Good afternoon,

It's been a while since GATK4 came out and the Spark tools were introduced (yeyyy :)), but so far I haven't been able to find a good link to read about how exactly GATK employs Spark.

It would be great if you could fill these pages with some content (single core, multi-core, Spark cluster). In particular, I'm interested in how the jobs are managed: if running locally with, for instance, local[40], how does HaplotypeCaller traverse the data? Does the active-region traversal still apply to the Spark tools? What about the concept of Walkers? How many blocks of data does each Spark RDD contain? Have you done any tests to improve performance, or do you mostly rely on the default Spark settings to manage parallelism?
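
To make the question concrete, this is the kind of local-mode invocation I have in mind. It is only a sketch: the inputs are placeholders, and I'm going by the general GATK pattern of passing Spark-specific arguments after a standalone "--" rather than quoting any particular tutorial.

    # Single-node run: Spark manages 40 worker threads on this machine.
    gatk HaplotypeCallerSpark \
        -R reference.fasta \
        -I sample.bam \
        -O sample.vcf.gz \
        -- \
        --spark-runner LOCAL \
        --spark-master 'local[40]'

In a run like this, how is the genome split into shards/partitions, and how does each Spark worker thread traverse its shard?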

Best regards,
Pedro

Answers

  • bhanuGandham Member, Administrator, Broadie, Moderator

    Hi @psb21

    Here is a Spark document that should help with your questions: https://software.broadinstitute.org/gatk/blog?id=23420

    Let me know if you still have questions after.

  • psb21 Member

    Hello,

    Thanks @bhanuGandham for the link to that new post. It certainly helps, but I still find it difficult to understand the core methodological changes from the non-Spark to the Spark tools. What do you mean by "sharding boundary effects"? For instance, in a variant-calling pipeline, the genotype likelihoods for a given interval do not depend on those calculated in another region (e.g. a different chromosome), right? In what sense is this a problem for matching the non-Spark HaplotypeCaller results?

    I'm trying to speed up variant calling using Spark. I have access to a Slurm HPC cluster, so I guess it isn't straightforward to run GATK in a proper distributed master/worker architecture (if there is any tutorial on how to set up Slurm jobs that use the GATK Spark tools on multiple nodes, I would appreciate it a lot).
    Therefore, I run GATK in local mode with a few Spark threads, and speed things up further by parallelising over samples with GNU parallel (see the command sketch at the end of this post). However, I'm having trouble because some samples crash with Spark errors. Perhaps you could forward my logs to the developers? I'm trying to run 8 parallel GATK jobs (8 samples), each using 5 Spark CPUs, on a node with 40 CPUs.

    Best,
    Pedro
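
    P.S. For reference, here is roughly how I'm launching the parallel jobs. It is only a sketch of my setup: the paths, the sample list and the resource numbers are placeholders, and the Spark arguments follow the same local-mode pattern as in my first post.

        #!/bin/bash
        # Up to 8 samples run at once; each GATK job gets a local Spark master
        # with 5 threads, so 8 x 5 = 40 threads on the 40-CPU node.
        parallel --jobs 8 \
          "gatk HaplotypeCallerSpark \
             -R reference.fasta \
             -I bams/{}.bam \
             -O vcfs/{}.vcf.gz \
             -- \
             --spark-runner LOCAL \
             --spark-master 'local[5]'" \
          :::: samples.txt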
