Issues submitting with the 'submit' subcommand of the Cromwell jar file


I tried submitting a workflow like this:

java -jar cromwell-36.jar submit bwa-memWorkflow.cwl -i bwa-memWorkflowInputs.yml -p . -h http://localhost:8000

But I got an error saying that '.' is a directory. However, the help (produced by java -jar cromwell-36.jar) says that the -p flag to the submit subcommand is "A directory or zipfile to search for workflow imports." Can either the help message or the functionality be changed so the two are consistent?

Different problem, here is another submit command:

java -jar 3-step-workflow/cromwell-36.jar submit workflows/wdl/hello_world_workflow.wdl -i inputs/hello_world_inputs.json -h http://fieldroast:8000

This actually submitted the job successfully, as I could see by looking at the workflow status/metadata in the Swagger UI using the workflow ID the command returned.
However, the command also produced a bunch of other output that looks like errors, and it exited with code 130 instead of 0. All of this somewhat obscured the fact that the workflow submission succeeded.
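An aside on the 130: POSIX shells conventionally report 128 plus the signal number when a process is killed by a signal, so 130 corresponds to SIGINT (signal 2), i.e. the client process being interrupted rather than a submission failure per se. A minimal, Cromwell-independent sketch of that convention:

```python
import subprocess

# Sketch of the exit-code convention (not Cromwell-specific): a process killed
# by SIGINT is reported by its parent shell as 128 + 2 = 130. Here an inner
# shell signals itself and an outer shell echoes the resulting status.
result = subprocess.run(
    ["sh", "-c", "sh -c 'kill -INT $$'; echo exit=$?"],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # expected: exit=130
```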

Here is the output. Is this reporting an actual error that I as the end user need to know about?

[2018-12-21 11:26:06,81] [info] Slf4jLogger started
[2018-12-21 11:26:08,87] [info] Workflow 374f1084-3041-460e-9385-817f894ee33b submitted to http://fieldroast:8000
[ERROR] [12/21/2018 11:26:08.934] [] [akka://SubmitSystem/system/pool-master] connection pool for PoolGateway(hcps = HostConnectionPoolSetup(fieldroast,8000,ConnectionPoolSetup(ConnectionPoolSettings(4,0,5,32,1,30 seconds,ClientConnectionSettings(Some(User-Agent: akka-http/10.1.5),40 seconds,1 minute,512,None,WebSocketSettings(<function0>,ping,Duration.Inf,akka.http.impl.settings.WebSocketSettingsImpl$$$Lambda$444/[email protected]),List(),ParserSettings(2048,16,64,64,8192,64,8388608,8388608,256,1048576,Strict,RFC6265,true,Set(),Full,Error,Map(If-Range -> 0, If-Modified-Since -> 0, If-Unmodified-Since -> 0, default -> 12, Content-MD5 -> 0, Date -> 0, If-Match -> 0, If-None-Match -> 0, User-Agent -> 32),false,true,akka.util.ConstantFun$$$Lambda$297/[email protected],akka.util.ConstantFun$$$Lambda$297/[email protected],akka.util.ConstantFun$$$Lambda$298/[email protected]),None,TCPTransport),New,1 second),[email protected],[email protected]))) has shut down unexpectedly


  • Ruchi (Member, Broadie, Moderator, Dev, Admin)

    Hey @dtenenba

    That's a good point, the instructions need to be clarified. I believe the -p argument really wants a zip, otherwise Cromwell defaults to using the PWD to import dependencies.
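    If it helps, here is an untested sketch of that workaround: build a zip of the import files yourself and hand that zip to -p instead of a directory. All file and directory names below are placeholders, not taken from the original report.

```python
import pathlib
import zipfile

# Untested workaround sketch: bundle the import files into a zip and pass
# that zip to -p instead of a directory. "imports"/"tool.cwl" are placeholder
# names for illustration only.
imports_dir = pathlib.Path("imports")
imports_dir.mkdir(exist_ok=True)
(imports_dir / "tool.cwl").write_text("cwlVersion: v1.0\n")  # stand-in import file

with zipfile.ZipFile("imports.zip", "w") as zf:
    for path in sorted(imports_dir.rglob("*")):
        zf.write(path, path.relative_to(imports_dir))

# The submit command would then point -p at the zip rather than ".":
cmd = ("java -jar cromwell-36.jar submit bwa-memWorkflow.cwl "
       "-i bwa-memWorkflowInputs.yml -p imports.zip -h http://localhost:8000")
print(cmd)
```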

    I don't believe I've ever seen an error like that, and I suspect it has something to do with your environment/setup. Would you mind describing where/how you're running Cromwell? Is it being run in server mode?

  • dtenenba (Member)

    Hi Ruchi,

    I am submitting the job using Cromwell's submit mode to another Cromwell server running on a different machine. That server is running with the following config file (I've edited it to remove account numbers and bucket names):


    include required(classpath("application"))

    call-caching {
      enabled = true
      invalidate-bad-cache-results = true
    }

    database {
      # Store metadata in a file on disk that can grow much larger than RAM limits.
      profile = "slick.jdbc.HsqldbProfile$"
      db {
        driver = "org.hsqldb.jdbcDriver"
        url = "jdbc:hsqldb:file:aws-database;shutdown=false;hsqldb.tx=mvcc"
        connectionTimeout = 3000
      }
    }

    aws {
      application-name = "cromwell"
      auths = [
        {
          name = "default"
          scheme = "default"
        },
        {
          name = "assume-role-based-on-another"
          scheme = "assume_role"
          base-auth = "default"
          role-arn = "arn:aws:iam::XXXXXXXXX:role/bucket-name"
        }
      ]
    }

    engine {
      filesystems {
        s3 {
          auth = "assume-role-based-on-another"
        }
      }
    }

    backend {
      default = "AWSBATCH"
      providers {
        AWSBATCH {
          actor-factory = ""
          config {
            // Base bucket for workflow executions
            root = "s3://bucket-name"
            // A reference to an auth defined in the `aws` stanza at the top. This auth is used to create
            // Jobs and manipulate auth JSONs.
            auth = "default"
            numSubmitAttempts = 1
            numCreateDefinitionAttempts = 1
            default-runtime-attributes {
              queueArn: "arn:aws:batch:us-west-2:XXXX:job-queue/GenomicsHighPriorityQue-XXX"
            }
            filesystems {
              s3 {
                // A reference to a potentially different auth for manipulating files via engine functions.
                auth = "default"
              }
            }
          }
        }
      }
    }