This section of the forum is now closed; we are working on a new support model for WDL that we will share here shortly. For Cromwell-specific issues, see the Cromwell docs and post questions on GitHub.
How are files and auto-scaling supposed to work on AWS?
I have been experimenting with AWS Batch, submitting some standard and simple genomics workflows. For any real workflow, I get disk-full errors. The /cromwell_root directory appears to be mounted on the root volume, which is only 8 GB and is not tracked for auto-scaling purposes. However, /scratch is tracked, and when I manually write large files there, auto-scaling occurs correctly. If I run "rm /cromwell_root" and then "ln -s /scratch /cromwell_root", I seem to get the correct behavior: files written under /cromwell_root then trigger the /scratch auto-scaling.
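The symlink workaround described above can be sketched safely against temporary directories (the mktemp paths below are illustrative stand-ins, not the actual AMI layout):

```shell
# Hypothetical demonstration of redirecting a working directory onto a
# tracked volume via a symlink; mktemp stands in for /scratch and for
# the parent of /cromwell_root so nothing real is touched.
scratch=$(mktemp -d)   # stands in for the auto-scaled /scratch volume
parent=$(mktemp -d)    # stands in for / (the parent of cromwell_root)

# Analogous to: rm /cromwell_root && ln -s /scratch /cromwell_root
rmdir "$parent/cromwell_root" 2>/dev/null || true
ln -s "$scratch" "$parent/cromwell_root"

# Writes through the symlink land on the scratch volume, so its disk
# usage (and therefore the auto-scaling that watches it) reflects them.
echo "large-file-contents" > "$parent/cromwell_root/output.txt"
cat "$scratch/output.txt"   # → large-file-contents
```

On the real instance the symlink would have to be created before Cromwell starts the task, since Cromwell creates and populates /cromwell_root itself.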
In short, it seems that the auto-scaling storage configured in the custom AMI does not match where Cromwell actually writes its data. But it is quite possible I am missing some configuration or setup detail.