We've moved!
For WDL questions, see the WDL specification and the WDL docs.
For Cromwell questions, see the Cromwell docs, and please post any issues on GitHub.

How are files and auto-scaling supposed to work on AWS?

I have been experimenting with AWS Batch, submitting some standard, simple genomics workflows. For any real workflow, I get disk-full errors. It appears that the `/cromwell_root` directory lives on the root volume, which is only 8 GB and is not tracked for auto-scaling purposes. However, `/scratch` is tracked: when I manually write large files there, auto-scaling kicks in correctly. If I `rm /cromwell_root` and then `ln -s /scratch /cromwell_root`, I seem to get the correct behavior, in that files written under `/cromwell_root` then trigger the `/scratch` auto-scaling.
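To make the workaround above concrete, here is a minimal sketch of the relink step as a shell function. This is an assumption-laden illustration, not an official fix: the paths `/cromwell_root` and `/scratch` are taken from the post, the function name `relink_cromwell_root` is hypothetical, and it adds an empty-directory guard so it does not delete task data.

```shell
#!/bin/sh
# Hypothetical workaround: point the Cromwell working directory at the
# scratch volume that the AMI's auto-scaling tracks, so task writes count
# toward scaling. Arguments are explicit so the sketch is reusable.
relink_cromwell_root() {
  root="$1"      # e.g. /cromwell_root (on the small, untracked root volume)
  scratch="$2"   # e.g. /scratch (on the auto-scaling volume)

  # Remove the stub directory only if it exists and is empty,
  # to avoid destroying any task data already written there.
  if [ -d "$root" ] && [ -z "$(ls -A "$root")" ]; then
    rmdir "$root"
  fi

  # Create (or replace) the symlink so writes to $root land on $scratch.
  ln -sfn "$scratch" "$root"
}
```

This would need to run on the instance before Cromwell starts any tasks, e.g. `relink_cromwell_root /cromwell_root /scratch` in the instance's startup script.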

In short, it seems that the auto-scaling storage configured in the custom AMI does not match where Cromwell actually writes. That said, it is quite possible I am missing some configuration or detail in the setup.
