
Could not find suitable filesystem among Default to parse

wrosan (Fargo; Member)
edited September 2017 in Ask the WDL team

Hi,

I was trying to use WDL/Cromwell to run HaplotypeCaller. I ran into a problem using my own files, so I decided to try the tutorial files instead. But I still get this error message:

[2017-09-28 12:01:29,01] [info] Slf4jLogger started
[2017-09-28 12:01:29,14] [info] RUN sub-command
[2017-09-28 12:01:29,14] [info] WDL file: /home/bobsgenefinder/share_8TB1/Roshan_exome_anlaysis/Rpr/jointCallingGenotypes/jointCallingGenotypes/jointCallingGenotypes.wdl
[2017-09-28 12:01:29,14] [info] Inputs: /home/bobsgenefinder/share_8TB1/Roshan_exome_anlaysis/Rpr/jointCallingGenotypes/jointCallingGenotypes/jointCallingGenotypes_inputs.json
[2017-09-28 12:01:29,21] [info] SingleWorkflowRunnerActor: Submitting workflow
[2017-09-28 12:01:29,45] [info] Workflow 337ae93d-9013-4d2c-a573-32f6b861771a submitted.
[2017-09-28 12:01:29,45] [info] SingleWorkflowRunnerActor: Workflow submitted 337ae93d-9013-4d2c-a573-32f6b861771a
[2017-09-28 12:01:30,42] [info] Running with database db.url = jdbc:hsqldb:mem:c000543e-8eb8-494e-b3f7-2f2db6172aad;shutdown=false;hsqldb.tx=mvcc
[2017-09-28 12:01:37,49] [info] Running migration RenameWorkflowOptionsInMetadata with a read batch size of 100000 and a write batch size of 100000
[2017-09-28 12:01:37,50] [info] [RenameWorkflowOptionsInMetadata] 100%
[2017-09-28 12:01:37,57] [info] Metadata summary refreshing every 2 seconds.
[2017-09-28 12:01:38,31] [info] 1 new workflows fetched
[2017-09-28 12:01:38,31] [info] WorkflowManagerActor Starting workflow 337ae93d-9013-4d2c-a573-32f6b861771a
[2017-09-28 12:01:38,32] [info] WorkflowManagerActor Successfully started WorkflowActor-337ae93d-9013-4d2c-a573-32f6b861771a
[2017-09-28 12:01:38,32] [info] Retrieved 1 workflows from the WorkflowStoreActor
[2017-09-28 12:01:38,73] [info] MaterializeWorkflowDescriptorActor [337ae93d]: Call-to-Backend assignments: jointCallingGenotypes.GenotypeGVCFs -> Local, jointCallingGenotypes.HaplotypeCallerERC -> Local
[2017-09-28 12:01:38,82] [error] WorkflowManagerActor Workflow 337ae93d-9013-4d2c-a573-32f6b861771a failed (during MaterializingWorkflowDescriptorState): Workflow input processing failed:
Workflow has invalid declarations: Could not evaluate workflow declarations:
jointCallingGenotypes.inputSamples:
java.lang.IllegalArgumentException: Could not find suitable filesystem among Default to parse X:/Ubuntu_8TB1/Roshan_exome_anlaysis/Rpr/jointCallingGenotypes/jointCallingGenotypes/inputs/inputsTSV.txt.
Could not find suitable filesystem among Default to parse X:/Ubuntu_8TB1/Roshan_exome_anlaysis/Rpr/jointCallingGenotypes/jointCallingGenotypes/inputs/inputsTSV.txt.
cromwell.engine.workflow.lifecycle.MaterializeWorkflowDescriptorActor$$anonfun$3$$anon$1: Workflow input processing failed:
Workflow has invalid declarations: Could not evaluate workflow declarations:
jointCallingGenotypes.inputSamples:
java.lang.IllegalArgumentException: Could not find suitable filesystem among Default to parse X:/Ubuntu_8TB1/Roshan_exome_anlaysis/Rpr/jointCallingGenotypes/jointCallingGenotypes/inputs/inputsTSV.txt.
Could not find suitable filesystem among Default to parse X:/Ubuntu_8TB1/Roshan_exome_anlaysis/Rpr/jointCallingGenotypes/jointCallingGenotypes/inputs/inputsTSV.txt.
at cromwell.engine.workflow.lifecycle.MaterializeWorkflowDescriptorActor$$anonfun$3.applyOrElse(MaterializeWorkflowDescriptorActor.scala:137)
at cromwell.engine.workflow.lifecycle.MaterializeWorkflowDescriptorActor$$anonfun$3.applyOrElse(MaterializeWorkflowDescriptorActor.scala:129)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
at akka.actor.FSM$class.processEvent(FSM.scala:663)
at cromwell.engine.workflow.lifecycle.MaterializeWorkflowDescriptorActor.akka$actor$LoggingFSM$$super$processEvent(MaterializeWorkflowDescriptorActor.scala:116)
at akka.actor.LoggingFSM$class.processEvent(FSM.scala:799)
at cromwell.engine.workflow.lifecycle.MaterializeWorkflowDescriptorActor.processEvent(MaterializeWorkflowDescriptorActor.scala:116)
at akka.actor.FSM$class.akka$actor$FSM$$processMsg(FSM.scala:657)
at akka.actor.FSM$$anonfun$receive$1.applyOrElse(FSM.scala:651)
at akka.actor.Actor$class.aroundReceive(Actor.scala:496)
at cromwell.engine.workflow.lifecycle.MaterializeWorkflowDescriptorActor.aroundReceive(MaterializeWorkflowDescriptorActor.scala:116)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
at akka.actor.ActorCell.invoke(ActorCell.scala:495)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
at akka.dispatch.Mailbox.run(Mailbox.scala:224)
at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

[2017-09-28 12:01:38,83] [error] Failed to delete workflow log
java.nio.file.FileSystemException: /home/bobsgenefinder/share_8TB1/Roshan_exome_anlaysis/Rpr/jointCallingGenotypes/jointCallingGenotypes/cromwell-workflow-logs/workflow.337ae93d-9013-4d2c-a573-32f6b861771a.log: Text file busy
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:244)
at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
at java.nio.file.Files.delete(Files.java:1126)
at better.files.File.delete(File.scala:618)
at cromwell.core.path.BetterFileMethods$class.delete(BetterFileMethods.scala:413)
at cromwell.core.path.DefaultPath.delete(DefaultPathBuilder.scala:53)
at cromwell.core.logging.WorkflowLogger$$anonfun$deleteLogFile$1$$anonfun$apply$mcV$sp$1.apply(WorkflowLogger.scala:111)
at cromwell.core.logging.WorkflowLogger$$anonfun$deleteLogFile$1$$anonfun$apply$mcV$sp$1.apply(WorkflowLogger.scala:111)
at scala.Option.foreach(Option.scala:257)
at cromwell.core.logging.WorkflowLogger$$anonfun$deleteLogFile$1.apply$mcV$sp(WorkflowLogger.scala:111)
at cromwell.core.logging.WorkflowLogger$$anonfun$deleteLogFile$1.apply(WorkflowLogger.scala:111)
at cromwell.core.logging.WorkflowLogger$$anonfun$deleteLogFile$1.apply(WorkflowLogger.scala:111)
at scala.util.Try$.apply(Try.scala:192)
at cromwell.core.logging.WorkflowLogger.deleteLogFile(WorkflowLogger.scala:111)
at cromwell.engine.workflow.WorkflowActor$$anonfun$8.applyOrElse(WorkflowActor.scala:309)
at cromwell.engine.workflow.WorkflowActor$$anonfun$8.applyOrElse(WorkflowActor.scala:278)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
at akka.actor.FSM$$anonfun$handleTransition$1.apply(FSM.scala:606)
at akka.actor.FSM$$anonfun$handleTransition$1.apply(FSM.scala:606)
at scala.collection.immutable.List.foreach(List.scala:381)
at akka.actor.FSM$class.handleTransition(FSM.scala:606)
at akka.actor.FSM$class.makeTransition(FSM.scala:688)
at cromwell.engine.workflow.WorkflowActor.makeTransition(WorkflowActor.scala:157)
at akka.actor.FSM$class.applyState(FSM.scala:673)
at cromwell.engine.workflow.WorkflowActor.applyState(WorkflowActor.scala:157)
at akka.actor.FSM$class.processEvent(FSM.scala:668)
at cromwell.engine.workflow.WorkflowActor.akka$actor$LoggingFSM$$super$processEvent(WorkflowActor.scala:157)
at akka.actor.LoggingFSM$class.processEvent(FSM.scala:799)
at cromwell.engine.workflow.WorkflowActor.processEvent(WorkflowActor.scala:157)
at akka.actor.FSM$class.akka$actor$FSM$$processMsg(FSM.scala:657)
at akka.actor.FSM$$anonfun$receive$1.applyOrElse(FSM.scala:651)
at akka.actor.Actor$class.aroundReceive(Actor.scala:496)
at cromwell.engine.workflow.WorkflowActor.aroundReceive(WorkflowActor.scala:157)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
at akka.actor.ActorCell.invoke(ActorCell.scala:495)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
at akka.dispatch.Mailbox.run(Mailbox.scala:224)
at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[2017-09-28 12:01:38,83] [info] WorkflowManagerActor WorkflowActor-337ae93d-9013-4d2c-a573-32f6b861771a is in a terminal state: WorkflowFailedState
[2017-09-28 12:01:38,84] [info] Message [cromwell.subworkflowstore.SubWorkflowStoreActor$SubWorkflowStoreCompleteSuccess] from Actor[akka://cromwell-system/user/SingleWorkflowRunnerActor/$c#-771610513] to Actor[akka://cromwell-system/user/SingleWorkflowRunnerActor/WorkflowManagerActor/WorkflowActor-337ae93d-9013-4d2c-a573-32f6b861771a#-152341426] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[2017-09-28 12:01:45,31] [info] SingleWorkflowRunnerActor workflow finished with status 'Failed'.
Workflow 337ae93d-9013-4d2c-a573-32f6b861771a transitioned to state Failed

I tried Cromwell 26 and 29 and got the same error.



Answers

  • wrosan (Fargo; Member)

    Thank you so much for pointing out my file path. I use VirtualBox and share a folder between the host and the guest, so I was actually giving a Windows path, while Linux uses a different path to access the shared files.
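
    For anyone else hitting this: the fix is just to point the inputs JSON at the Linux-side mount path instead of the Windows path (X:/Ubuntu_8TB1/...). A sketch of the corrected entry, assuming the tutorial's input key name and that the share is mounted at /home/bobsgenefinder/share_8TB1 as in the log above:

    {
      "jointCallingGenotypes.inputSamplesFile": "/home/bobsgenefinder/share_8TB1/Roshan_exome_anlaysis/Rpr/jointCallingGenotypes/jointCallingGenotypes/inputs/inputsTSV.txt"
    }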

    However, after fixing the path I got the following error:

    [2017-09-28 14:11:15,57] [warn] Localization via hard link has failed: /home/bobsgenefinder/bcftools-1.3/Desktop/jointCallingGenotypes/cromwell-executions/jointCallingGenotypes/d0182845-2940-4879-b969-1fec6eb66145/call-HaplotypeCallerERC/shard-2/inputs/home/bobsgenefinder/share_8TB1/Roshan_exome_anlaysis/Rpr/jointCallingGenotypes/jointCallingGenotypes/ref/human_g1k_b37_20.fasta -> /home/bobsgenefinder/share_8TB1/Roshan_exome_anlaysis/Rpr/jointCallingGenotypes/jointCallingGenotypes/ref/human_g1k_b37_20.fasta: Invalid cross-device link

    I was able to make it work by moving the folder into the Linux filesystem and accessing it from there. Though that worked for the tutorial, I won't be able to do the same for my BAM files because they are big and I don't have enough space to move them into the Linux virtual machine. Is there a way to solve this localization problem so that I can run Cromwell using files in my shared folder?

  • kshakir (Broadie, Dev)

    For a portable pipeline that may be shared with others running on different systems, I do NOT recommend any of this, but:

    There are some steps you can take to avoid Cromwell copying inputs. To refer to an existing file input without localizing it at all, change the task input from a File to a String.

    Example:

    task SizeOfBam {
      # As File inputs, these would be localized (copied/linked) by cromwell:
      # File bam = "/path/to/bam"
      # File bai = "/path/to/bai"
      # As String inputs, the paths are passed through to the command untouched:
      String bam = "/path/to/bam"
      String bai = "/path/to/bai"
      command {
        ls -l ${bam} > bam_size.txt
      }
      output {
        File bam_size = "bam_size.txt"
      }
      runtime {
        # this first example assumes there is no docker
        # docker: "myimage"
      }
    }
    

    If you do happen to be using Docker with Cromwell and you disable localization, then you'll also need to make sure that the command Cromwell uses to launch Docker is overridden in your customized config, so that Docker runs with your volume mounted using -v.
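
    For example, with the stock Local backend you can override the submit-docker command in your config so the shared folder is mounted into the container. A minimal sketch (the extra -v mount is the assumption here, borrowed from the paths in this thread; the exact substitution variables such as ${script} vary between Cromwell versions, so check the reference.conf for your release):

    backend.providers.Local.config {
      submit-docker = """
        docker run --rm -i \
          -v ${cwd}:${docker_cwd} \
          -v /home/bobsgenefinder/share_8TB1:/home/bobsgenefinder/share_8TB1 \
          ${docker} /bin/bash ${script}
      """
    }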

    NOTE: This sort of setup should be considered experimental and may change in the future. We usually encourage WDL to be as portable as possible, so that it can be tested locally on small data and then run on larger compute infrastructure when ready to scale up.
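
    Another option, if the goal is only to avoid the copy fallback after the failed hard link: the shared-filesystem backends accept a list of localization strategies. A sketch, assuming the stock Local backend (defaults vary by Cromwell version, and symlinked inputs may not resolve inside docker containers):

    backend.providers.Local.config.filesystems.local {
      # Hard links cannot cross filesystems (hence "Invalid cross-device
      # link"), so try a symlink before falling back to a full copy.
      localization = ["soft-link", "copy"]
    }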

  • shlee (Cambridge; Member, Broadie, Moderator)
    edited February 7

    Hi,

    I'm getting the same error as above and as in this earlier post. Has there been any progress on solving this? I'd appreciate it if you could share the solution.

    I am running Cromwell v30.2 on a Google Cloud VM, using WDL scripts from the broadinstitute/gatk repository, specifically the Somatic CNV WDL scripts for GATK v4.0.1.1.

    I have changed File types to String types for the BAM and BAI inputs that refer to files in my gs:// bucket. I can access these gs:// files without issue using GATK's NIO feature, independently of Cromwell:

    shlee@lettuce:~/snail/cnv_180207$ gatk CollectFragmentCounts \
    > -L /home/shlee/Documents/cnv_180207/intervals/targets_C.interval_list \
    > --input gs://shlee-dev/1kg/exome_GRCh38DH/bam_bifem/HG00133.alt_bwamem_GRCh38DH.20150826.GBR.exome.bam \
    > --reference /home/shlee/Documents/ref/hg38/GRCh38_full_analysis_set_plus_decoy_hla.fa \
    > --interval-merging-rule OVERLAPPING_ONLY \
    > --output HG00133.test.csv
    Using GATK jar /home/shlee/gatk-4.0.0.0/gatk-package-4.0.0.0-local.jar
    Running:
        java -Dsamjdk.use_async_io_read_samtools=false -Dsamjdk.use_async_io_write_samtools=true -Dsamjdk.use_async_io_write_tribble=false -Dsamjdk.compression_level=1 -jar /home/shlee/gatk-4.0.0.0/gatk-package-4.0.0.0-local.jar CollectFragmentCounts -L /home/shlee/Documents/cnv_180207/intervals/targets_C.interval_list --input gs://shlee-dev/1kg/exome_GRCh38DH/bam_bifem/HG00133.alt_bwamem_GRCh38DH.20150826.GBR.exome.bam --reference /home/shlee/Documents/ref/hg38/GRCh38_full_analysis_set_plus_decoy_hla.fa --interval-merging-rule OVERLAPPING_ONLY --output HG00133.test.csv
    21:52:20.611 INFO  NativeLibraryLoader - Loading libgkl_compression.so from jar:file:/home/shlee/gatk-4.0.0.0/gatk-package-4.0.0.0-local.jar!/com/intel/gkl/native/libgkl_compression.so
    21:52:20.934 INFO  CollectFragmentCounts - ------------------------------------------------------------
    21:52:20.934 INFO  CollectFragmentCounts - The Genome Analysis Toolkit (GATK) v4.0.0.0
    21:52:20.934 INFO  CollectFragmentCounts - For support and documentation go to https://software.broadinstitute.org/gatk/
    21:52:20.935 INFO  CollectFragmentCounts - Executing as shlee@lettuce on Linux v4.13.0-32-generic amd64
    21:52:20.935 INFO  CollectFragmentCounts - Java runtime: OpenJDK 64-Bit Server VM v1.8.0_151-8u151-b12-0ubuntu0.16.04.2-b12
    21:52:20.935 INFO  CollectFragmentCounts - Start Date/Time: February 7, 2018 9:52:20 PM UTC
    21:52:20.935 INFO  CollectFragmentCounts - ------------------------------------------------------------
    21:52:20.935 INFO  CollectFragmentCounts - ------------------------------------------------------------
    21:52:20.935 INFO  CollectFragmentCounts - HTSJDK Version: 2.13.2
    21:52:20.935 INFO  CollectFragmentCounts - Picard Version: 2.17.2
    ...
    21:58:23.363 INFO  ProgressMeter -       chrX:156006075              6.0              29232067        4890909.1
    21:58:23.364 INFO  ProgressMeter - Traversal complete. Processed 29232067 total reads in 6.0 minutes.
    21:58:23.364 INFO  CollectFragmentCounts - Writing fragment counts to HG00133.test.csv
    log4j:WARN No appenders could be found for logger (org.broadinstitute.hdf5.HDF5Library).
    log4j:WARN Please initialize the log4j system properly.
    log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
    21:58:24.234 INFO  CollectFragmentCounts - Shutting down engine
    [February 7, 2018 9:58:24 PM UTC] org.broadinstitute.hellbender.tools.copynumber.CollectFragmentCounts done. Elapsed time: 6.06 minutes.
    Runtime.totalMemory()=3406823424
    Tool returned:
    SUCCESS
    

    However, when I run the WDL script, whether or not I employ Docker (either the original script pointing to an installed Docker image, or a modified script that removes any mention of Docker from the WDL and JSON, as suggested above), I get the following error:

    Call input and runtime attributes evaluation failed for CollectCounts:
    :
    java.lang.IllegalArgumentException: Could not find suitable filesystem among Default to parse 'gs://shlee-dev/1kg/exome_GRCh38DH/bam_bifem/HG01927.alt_bwamem_GRCh38DH.20150826.PEL.exome.bam'.
        Could not find suitable filesystem among Default to parse 'gs://shlee-dev/1kg/exome_GRCh38DH/bam_bifem/HG01927.alt_bwamem_GRCh38DH.20150826.PEL.exome.bam'.
    cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor$$anonfun$1$$anon$1: Call input and runtime attributes evaluation failed for CollectCounts:
    :
    java.lang.IllegalArgumentException: Could not find suitable filesystem among Default to parse 'gs://shlee-dev/1kg/exome_GRCh38DH/bam_bifem/HG01927.alt_bwamem_GRCh38DH.20150826.PEL.exome.bam'.
        Could not find suitable filesystem among Default to parse 'gs://shlee-dev/1kg/exome_GRCh38DH/bam_bifem/HG01927.alt_bwamem_GRCh38DH.20150826.PEL.exome.bam'.
        at cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor$$anonfun$1.applyOrElse(JobPreparationActor.scala:62)
        at cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor$$anonfun$1.applyOrElse(JobPreparationActor.scala:58)
        at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:34)
        at akka.actor.FSM.processEvent(FSM.scala:665)
        at akka.actor.FSM.processEvent$(FSM.scala:662)
        at cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor.processEvent(JobPreparationActor.scala:37)
        at akka.actor.FSM.akka$actor$FSM$$processMsg(FSM.scala:659)
        at akka.actor.FSM$$anonfun$receive$1.applyOrElse(FSM.scala:653)
        at akka.actor.Actor.aroundReceive(Actor.scala:514)
        at akka.actor.Actor.aroundReceive$(Actor.scala:512)
        at cromwell.engine.workflow.lifecycle.execution.job.preparation.JobPreparationActor.aroundReceive(JobPreparationActor.scala:37)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:527)
        at akka.actor.ActorCell.invoke(ActorCell.scala:496)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
        at akka.dispatch.Mailbox.run(Mailbox.scala:224)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
        at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
    
    [2018-02-07 21:49:49,85] [info] WorkflowManagerActor WorkflowActor-1edb06f4-39ae-49d1-acdd-edb70f135e95 is in a terminal state: WorkflowFailedState
    [2018-02-07 21:49:58,93] [info] SingleWorkflowRunnerActor workflow finished with status 'Failed'.
    [2018-02-07 21:49:58,97] [info] Message [cromwell.core.actor.StreamActorHelper$StreamFailed] without sender to Actor[akka://cromwell-system/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
    [2018-02-07 21:49:58,97] [info] Message [cromwell.core.actor.StreamActorHelper$StreamFailed] without sender to Actor[akka://cromwell-system/deadLetters] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
    Workflow 1edb06f4-39ae-49d1-acdd-edb70f135e95 transitioned to state Failed
    [2018-02-07 21:49:58,99] [info] Automatic shutdown of the async connection
    [2018-02-07 21:49:58,99] [info] Gracefully shutdown sentry threads.
    [2018-02-07 21:49:58,99] [info] Shutdown finished.
    

    Thanks.

  • shlee (Cambridge; Member, Broadie, Moderator)

    Alright, I took the advice and followed the instructions at http://cromwell.readthedocs.io/en/develop/backends/HPC/#additional-filesystems to add the following stanza to a my.conf file:

    backend.providers.MyHPCBackend {
      filesystems {
        gcs {
          # A reference to a potentially different auth for manipulating files via engine functions.
          auth = "application-default"
        }  
      }
    }
    

    such that the conf file contains:

    # This line is required. It pulls in default overrides from the embedded cromwell `application.conf` needed for proper
    # performance of cromwell.
    include required(classpath("application"))
    
    backend.providers.MyHPCBackend {
      filesystems {
        gcs {
          # A reference to a potentially different auth for manipulating files via engine functions.
          auth = "application-default"
        }
      }
    }
    

    I then pass this config to my Cromwell run like so:

    java -Dconfig.file=my.conf -jar /home/shlee/cromwell-30.2.jar run cnv_somatic_panel_workflow.wdl --inputs cnv_somatic_panel_workflow_ponC.json.txt 
    

    But now I get a different error:

    [2018-02-08 00:43:02,66] [info] Running with database db.url = jdbc:hsqldb:mem:777c2fed-798b-4043-80e2-1cf542885871;shutdown=false;hsqldb.tx=mvcc
    [2018-02-08 00:43:07,84] [info] Running migration RenameWorkflowOptionsInMetadata with a read batch size of 100000 and a write batch size of 100000
    [2018-02-08 00:43:07,85] [info] [RenameWorkflowOptionsInMetadata] 100%
    [2018-02-08 00:43:07,96] [info] Running with database db.url = jdbc:hsqldb:mem:669fa147-7609-499f-90a7-3e4e61e09521;shutdown=false;hsqldb.tx=mvcc
    Exception in thread "main" java.lang.ExceptionInInitializerError
        at cromwell.server.CromwellSystem.$init$(CromwellSystem.scala:48)
        at cromwell.CromwellEntryPoint$$anon$2.<init>(CromwellEntryPoint.scala:63)
        at cromwell.CromwellEntryPoint$.$anonfun$buildCromwellSystem$1(CromwellEntryPoint.scala:63)
        at scala.util.Try$.apply(Try.scala:209)
        at cromwell.CromwellEntryPoint$.buildCromwellSystem(CromwellEntryPoint.scala:63)
        at cromwell.CromwellEntryPoint$.runSingle(CromwellEntryPoint.scala:47)
        at cromwell.CommandLineParser$.runCromwell(CommandLineParser.scala:95)
        at cromwell.CommandLineParser$.delayedEndpoint$cromwell$CommandLineParser$1(CommandLineParser.scala:105)
        at cromwell.CommandLineParser$delayedInit$body.apply(CommandLineParser.scala:8)
        at scala.Function0.apply$mcV$sp(Function0.scala:34)
        at scala.Function0.apply$mcV$sp$(Function0.scala:34)
        at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
        at scala.App.$anonfun$main$1$adapted(App.scala:76)
        at scala.collection.immutable.List.foreach(List.scala:389)
        at scala.App.main(App.scala:76)
        at scala.App.main$(App.scala:74)
        at cromwell.CommandLineParser$.main(CommandLineParser.scala:8)
        at cromwell.CommandLineParser.main(CommandLineParser.scala)
    Caused by: com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'actor-factory'
        at com.typesafe.config.impl.SimpleConfig.findKeyOrNull(SimpleConfig.java:152)
        at com.typesafe.config.impl.SimpleConfig.findOrNull(SimpleConfig.java:170)
        at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:184)
        at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:189)
        at com.typesafe.config.impl.SimpleConfig.getString(SimpleConfig.java:246)
        at cromwell.engine.backend.BackendConfiguration$.$anonfun$AllBackendEntries$1(BackendConfiguration.scala:30)
        at scala.collection.immutable.List.map(List.scala:283)
        at cromwell.engine.backend.BackendConfiguration$.<init>(BackendConfiguration.scala:26)
        at cromwell.engine.backend.BackendConfiguration$.<clinit>(BackendConfiguration.scala)
        ... 18 more
    

    Any clue how I should fix this? Thanks.

  • shlee (Cambridge; Member, Broadie, Moderator)

    When I change backend.providers.MyHPCBackend to backend.providers.Local, the original error I posted returns:

    java.lang.IllegalArgumentException: Could not find suitable filesystem among Default to parse
    
  • kshakir (Broadie, Dev)

    The backend.providers.MyHPCBackend in the docs is just an example name. For those using the Local backend, MyHPCBackend should be Local. Defining a brand-new provider name creates a new backend entry that is missing required settings such as actor-factory, which is why Cromwell failed to start above. Note also that in the working example below the filesystems block sits under the provider's config key (backend.providers.Local.config.filesystems), not directly under the provider.

    Assuming your config file is picked up by Cromwell, this should allow the Local backend to also access GCS. NOTE: If you don't have application default credentials (ADC) enabled via the gcloud command, you may receive another error with further instructions for how to set up ADC. If you are already running on a GCE instance, this extra configuration should not be necessary.
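
    For reference, ADC can usually be enabled once per machine with the Cloud SDK:

    # one-time setup when running outside GCE, e.g. on a laptop
    gcloud auth application-default login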

    Here are two example runs from my laptop that use the same WDL to access GCS using ADC. The first uses a conf file with an override; the second sets the same property directly via Java command-line options.

    gs_example.wdl

    workflow w { call t }
    task t {
      File gs_file = "gs://gatk-best-practices/somatic-hg38/hcc1143_N_clean.bai"
      command { du -h ${gs_file} }
      output { String out = read_string(stdout()) }
    }
    

    local_with_gcs.conf

    include required(classpath("application"))
    backend.providers.Local.config.filesystems.gcs.auth = "application-default"
    

    example run using the conf file

    docker \
      run \
      --rm \
      -w $PWD \
      -v $HOME:$HOME \
      -e JAVA_OPTS='-Dconfig.file=local_with_gcs.conf' \
      -e GOOGLE_APPLICATION_CREDENTIALS="$HOME/.config/gcloud/application_default_credentials.json" \
      broadinstitute/cromwell:30-16f3632 \
      run gs_example.wdl
    

    example run using the gcs filesystem as a command line property

    docker \
      run \
      --rm \
      -w $PWD \
      -v $HOME:$HOME \
      -e JAVA_OPTS='-Dbackend.providers.Local.config.filesystems.gcs.auth=application-default' \
      -e GOOGLE_APPLICATION_CREDENTIALS="$HOME/.config/gcloud/application_default_credentials.json" \
      broadinstitute/cromwell:30-16f3632 \
      run gs_example.wdl
    
  • shlee (Cambridge; Member, Broadie, Moderator)

    The following command now runs without any of the above errors. Thank you!

    java -Dbackend.providers.Local.config.filesystems.gcs.auth=application-default \
      -jar /home/shlee/cromwell-30.2.jar \
      run cnv_somatic_panel_workflow.wdl --inputs cnv_somatic_panel_workflow_ponC.json.txt
