What is the limit on the number of jobs that can be launched while analyzing a large data set?
I am analyzing a data set of 78,000 samples, and my method runs five serial tasks on a MAF (i.e., no scattering). At the moment I am running my analysis task in batches, not launching the next batch until the previous one has completed. While this has been working, I am curious how large I can safely make my batches in this scenario.

I recall that a few months ago the reported limit was 60,000 jobs, but I was not sure whether that meant 60,000 jobs launched all at once, or that adding 60,000+ jobs to the queue would exceed FireCloud's limit. My understanding is that the number of jobs actually running at once is limited by my VM quotas, which are well below 60,000. This would mean that, even with many jobs in the queue, the number of tasks running simultaneously would not push FireCloud's limit.

Am I correct in this assumption, or should I ensure that the total number of launched workflows never exceeds 60,000? What batch size would you recommend in this case? Thank you!
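For concreteness, the batch-then-wait pattern described above can be sketched as below. Note that `submit` and `wait_for` are hypothetical placeholders for whatever submission and polling calls are actually in use (e.g. via the FISS client), not real FireCloud API functions:

```python
def run_in_batches(samples, batch_size, submit, wait_for):
    """Submit samples in fixed-size batches, blocking until each batch
    completes before launching the next one.

    submit(sample)  -> returns a handle for the launched workflow (assumed)
    wait_for(list)  -> blocks until all handles in the list finish (assumed)
    """
    handles = []
    for start in range(0, len(samples), batch_size):
        batch = samples[start:start + batch_size]
        launched = [submit(s) for s in batch]  # one workflow per sample
        wait_for(launched)                     # do not start the next batch yet
        handles.extend(launched)
    return handles
```

With 78,000 samples and a batch size of, say, 5,000, this loop would issue 16 sequential batches, so the queue never holds more than 5,000 workflows at a time.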