S3 Example fails

I'm trying to follow the example at https://docs.opendata.aws/genomics-workflows/cromwell/cromwell-examples/#using-data-on-s3
I have copied the meats.txt file to my bucket (specified in aws.conf; see below).
The error I get is:

cat: /cromwell_root/aws-cromwell-test-us-east-1/meats.txt: No such file or directory

I wondered whether this is some kind of region issue. Originally my region was set to us-west-2, because pretty much all of our AWS infrastructure is in us-west-2. However, I commented out the line that sets the region to us-west-2 (falling back to the default, us-east-1) and got the same result.
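
For context, the region I'm referring to is the one in ~/.aws/config written by aws configure, i.e. something along these lines:

[default]
region = us-west-2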

Any idea why this failed? So far I haven't been able to use S3 with Cromwell on AWS Batch, so getting past this would be a major step.

Thanks.

Contents of aws.conf:

include required(classpath("application"))

aws {

  application-name = "cromwell"

  auths = [
    {
      name = "default"
      scheme = "default"
    }
  ]

  // diff 1: region commented out, so Cromwell uses the region from
  // ~/.aws/config (set by the aws configure command), or us-east-1 by default
  // region = "us-west-2"
}

engine {
  filesystems {
    s3 {
      auth = "default"
    }
  }
}

backend {
  default = "AWSBATCH"
  providers {
    AWSBATCH {
      actor-factory = "cromwell.backend.impl.aws.AwsBatchBackendLifecycleActorFactory"
      config {
        // Base bucket for workflow executions
        root = "s3://fh-div-adm-scicomp-cromwell-tests/"

        // A reference to an auth defined in the `aws` stanza at the top.  This auth is used to create
        // Jobs and manipulate auth JSONs.
        auth = "default"

        // diff 2:
        numSubmitAttempts = 1
        // diff 3:
        numCreateDefinitionAttempts = 1

        default-runtime-attributes {
          queueArn: "arn:aws:batch:us-west-2:064561331775:job-queue/GenomicsHighPriorityQue-b5b20a668266f48"
        }

        filesystems {
          s3 {
            // A reference to a potentially different auth for manipulating files via engine functions.
            auth = "default"
          }
        }
      }
    }
  }
}
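
For reference, I'm launching Cromwell roughly like this (cromwell.jar stands in for the versioned jar I downloaded, and s3.wdl is a placeholder for the example's WDL file; s3inputs.json is the inputs file from the example):

java -Dconfig.file=aws.conf -jar cromwell.jar run s3.wdl --inputs s3inputs.json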

Best Answer

  • dtenenba
    Accepted Answer

    Boy am I dumb. I had not edited s3inputs.json to use my bucket name, so it was still using s3://aws-cromwell-test-us-east-1/meats.txt. I edited it and everything works fine. It would probably be good for the example to mention explicitly that you need to edit this JSON file.
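
    For anyone else who hits this, the fix is just pointing the inputs JSON at your own bucket (here, the one from aws.conf). The exact input key comes from the example's WDL, so the key name below is only illustrative:

    {
      "s3.cat_file.input_file": "s3://fh-div-adm-scicomp-cromwell-tests/meats.txt"
    }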
