Thanks for getting back to us. Since it has been a while since you originally encountered the error, and we have had a number of updates to the system since then, would you try running the API call again? If you're still hitting the same error, it sounds like we will need to file a bug report.
This message appears when there is a slowdown in the FC database, but it should go away after refreshing the page in 5 to 10 minutes. Are you still having this problem? If so, which workspace are you accessing, and would it be possible to share it with [email protected]?
Thanks for letting us know! Glad to hear it is a non-issue now.
Geraldine has just pushed the 3.8-1 Docker image. Sorry for the delay.
I think this thread has some helpful tips.
Thanks for the explanation; I understand better what you're looking for and why. Cumulative data on CPU and IO usage may well be something that cloud providers store and report on, as a way to let users get deeper insight into resource usage. I can bring this up with our Google contacts to see whether this is something they'd be interested in providing, or whether it's something Cromwell can help gather.
Thanks for coming to office hours! I'll record the resolution of our discussion here for forum users.
FireCloud only allows outputs to be written to the same entity that the method is run on. Input expressions may traverse to a linked entity, so this.case_sample.gatk4cnv_target_bed_capture is a valid input expression, but the output expression this.case_sample.gatk4cnv_target_tsv_capture is not.
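To illustrate with a sketch (cnv_workflow and its variable names below are hypothetical), a method configuration could pair expressions like these:

    "inputs":  { "cnv_workflow.target_bed": "this.case_sample.gatk4cnv_target_bed_capture" }
    "outputs": { "cnv_workflow.target_tsv": "this.gatk4cnv_target_tsv_capture" }

The input expression may traverse to the linked case_sample, but the output must attach an attribute directly to the entity the method runs on (this.gatk4cnv_target_tsv_capture); writing this.case_sample.gatk4cnv_target_tsv_capture in the outputs section is what produces the error.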
If you'd like to make a feature request to enable this, we have a form here.
I'll file a bug ticket to have the error message explain this situation better.
GATK4 Mutect2 performs a realignment just like HaplotypeCaller does, so there is no need to run the Indel Realignment step separately.
What storage setup do you have currently? Are these gs:// paths, or are they stored on a local file system (e.g., C://) or on a cluster (like gsa)?
If they are gs:// paths, then it is simply a matter of setting up your data model in FireCloud. These sound like samples, so you'd include the gs:// paths in your sample metadata table. You can read here about how to set that up.
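For example, a minimal sample load file is a tab-delimited TSV whose first column header is entity:sample_id (the bucket and attribute names below are hypothetical):

    entity:sample_id    bam_path
    sample_001          gs://my-bucket/data/sample_001.bam
    sample_002          gs://my-bucket/data/sample_002.bam

Once imported, a method configuration can reference the file for each sample with the input expression this.bam_path.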
If they are local or on a cluster, then you will need to upload the data to a Google bucket to get a gs:// path. Then you follow the instructions above to set up your data model. You can read about how to upload your data to a bucket here.
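As a sketch (the bucket name and local paths are hypothetical), the upload itself is a single gsutil command:

    # copy one file
    gsutil cp /local/path/sample_001.bam gs://my-bucket/data/
    # or copy a whole directory of files in parallel
    gsutil -m cp -r /local/path/bams gs://my-bucket/data/

The resulting gs:// paths are what you then put into the load file described above.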
1. Deleting the workflow execution directory right after running the workflow would invalidate call-caching?
Yes, that's correct. Since Cromwell v23, any cache hit that Cromwell fails to copy is invalidated by default; this behavior was introduced as the config option call-caching.invalidate-bad-cache-results, release notes here.
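For reference, here is a minimal sketch of the relevant stanza in the Cromwell configuration file (HOCON):

    call-caching {
      # call caching itself is off by default and must be enabled
      enabled = true
      # invalidate a cache result if copying its outputs fails (the default since v23)
      invalidate-bad-cache-results = true
    }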
I'm looking for a way to delete old executions, for example a daily cron job deleting all executions older than 1 year. If I just go and delete the execution directories, would the MySQL database get corrupted?
A cron job like that sounds like a great solution, and the database would not get corrupted: it adjusts itself eventually, since a cache hit whose outputs are no longer accessible is invalidated when a task tries to copy them.
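A minimal sketch of such a cron entry, assuming the default local-backend layout cromwell-executions/<workflow-name>/<workflow-id>/ and a 1-year cutoff (adjust the root path to your setup):

    # daily at 02:00: remove workflow-id directories not modified in over a year
    0 2 * * * find /path/to/cromwell-executions -mindepth 2 -maxdepth 2 -type d -mtime +365 -exec rm -rf {} +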
Does Cromwell provide any features to help with this? For example, I imagine a DELETE workflow API endpoint would be useful here.
Unfortunately, while this doesn't exist today, it's very much on our roadmap. We have an issue in our backlog, and I urge you to review it, add your own thoughts, and upvote it if you'd like.