
Google Dataproc - Spark cluster service

GATK_Team
edited January 2018 in Dictionary

Dataproc is Google's managed Spark cluster service, which you can use to run Spark-enabled GATK tools quickly and efficiently. To use it, you need a Google login and a billing account, as well as the gcloud command-line utility, a.k.a. the Google Cloud SDK.
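
For example, after installing the Google Cloud SDK, a typical one-time setup might look something like this (the project ID shown is just a placeholder for your own billing-enabled project):

    # One-time setup after installing the Google Cloud SDK.
    # "my-project-id" is a placeholder; use your own billing-enabled project.
    gcloud init
    gcloud config set project my-project-id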

There are two ways to create and control a Spark cluster on Dataproc: through a form in Google's web-based console, or directly through gcloud. Using the form is easier because it shows you all the possible options for configuring your cluster, but gcloud is faster when you already know what you want, and you can automate it through a script. See this tutorial for details.
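As a rough sketch of the gcloud route, a minimal cluster-creation command might look like the following; the cluster name, region, worker count, and machine types are placeholders you would adjust to your data and budget:

    # Create a small Dataproc cluster; names, region, and sizes are placeholders.
    gcloud dataproc clusters create my-gatk-cluster \
        --region us-central1 \
        --num-workers 4 \
        --worker-machine-type n1-standard-8 \
        --master-machine-type n1-standard-4

    # Tear the cluster down when you're done to stop incurring charges.
    gcloud dataproc clusters delete my-gatk-cluster --region us-central1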

Once you have created your cluster on Dataproc, all you need to do to run GATK Spark tools on it is add a few arguments to your GATK command line to tell GATK where to find your cluster. Note that your data must be located in Google Cloud Storage (GCS), and your file paths must be prefixed with gs:// accordingly. And be sure to look into jar caching to speed up the process!
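
As an illustration (the tool, bucket, and cluster names here are placeholders), a Spark-enabled GATK command pointed at a Dataproc cluster looks roughly like this, with the Spark arguments given after the -- separator:

    # Run a Spark-enabled GATK tool on an existing Dataproc cluster.
    # Inputs and outputs must live in GCS; bucket and cluster names are placeholders.
    gatk MarkDuplicatesSpark \
        -I gs://my-bucket/input.bam \
        -O gs://my-bucket/marked_duplicates.bam \
        -- \
        --spark-runner GCS \
        --cluster my-gatk-cluster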
