Data pre-processing pipeline keeps copying uBAM and reference files for each task

minimax Member
edited May 15 in Ask the GATK team

Someone asked this question before, but the suggested solution does not work for me, so I am making a new post.

I'm running the data pre-processing for variant discovery pipeline with GATK4 on a Linux server. For each task, any uBAM, reference, or upstream output files it needs are copied into the "inputs" folder inside that task's execution directory, which consumes a lot of disk space. Someone suggested changing the localization strategy in the Cromwell config to soft-link, so I tried that, but it still does not work.

Could someone help me with this? Thank you!

Below are the contents of the .conf that I used for cromwell-39.jar:

include required(classpath("application"))

call-caching {
  enabled = true
}

backend {
  default = "Local"
  providers {
    Local {
      config {
        filesystems {
          local {
            localization: [
              "soft-link", "hard-link", "copy"
            ]
            caching {
              duplication-strategy: [
                "soft-link", "hard-link", "copy"
              ]
            }
          }
        }
      }
    }
  }
}

  • minimax Member

    Forgot to mention, I'm running the pipeline without using Docker.

    By the way, it would be nice to have a version of the official pipeline that does not use Docker (although it's not difficult to modify the current pipeline). Running Docker requires root privileges, and a researcher working on a university or company server (which probably applies to a large portion of GATK users) normally does not have root access.

  • AdelaideR Member, admin

    Hi minimax -

    Try passing the config file to the cromwell jar.

    java -Dconfig.file=/path/to/your.conf -jar cromwell-[VERSION].jar server

    Here is a tutorial that walks you through the steps of setting up the config file for your local cromwell instance.

  • minimax Member

    Hi @AdelaideR, thanks for your reply! I tried that, but it did not work.

    Below is how I pass the conf (defined in my original post) to the cromwell.jar:

    java -Dconfig.file=my.conf -jar cromwell-39.jar \
        run my.wdl \
        -i my.inputs.json

    Note that I did not use server mode, because I'm running the job on a remote server and server mode is inconvenient there.
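    One caveat about run mode that may matter here: by default Cromwell keeps its metadata and call-caching database in memory, so cached results are discarded when the JVM exits, and each separate `run` invocation starts fresh. A sketch of a file-backed HSQLDB section that could be added to my.conf to persist it (the `cromwell-db` path is just an example location):

    database {
      profile = "slick.jdbc.HsqldbProfile$"
      db {
        driver = "org.hsqldb.jdbc.JDBCDriver"
        url = "jdbc:hsqldb:file:cromwell-db/db;shutdown=false;hsqldb.tx=mvcc"
        connectionTimeout = 120000
      }
    }

    This does not affect localization, but without it the `call-caching { enabled = true }` setting has no effect across separate run-mode invocations.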
