limit number of files during large workflow

Hi,

I'm trying to run a joint genotyping workflow based on the example production workflows on GitHub. I'm genotyping about 2500 exomes across the whole genome.
During the genotyping I started noticing the workflow stalling. After consulting with the HPC admins, it turns out this is because the maximum number of inodes is reached during the workflow.
My question now is:

  • How can I make Cromwell pick up job failures when GATK fails because of a "No space left on device" error?
  • Are there any config params I can tweak to limit the number of files generated/left over during the workflow? Is it possible, for example, to clear or tar the working directory after a job finishes successfully, keeping only the outputs?

Thanks
M

Answers

  • Ruchi (Member, Broadie, Moderator, Dev)

    Hey @matdmset,

    Have you tried using hard-linking as a localization strategy, instead of making copies? https://github.com/broadinstitute/cromwell/blob/develop/cromwell.examples.conf#L346-L348 This should be feasible if your inputs live on a shared filesystem. However, given the amount of data, it's still possible that even after cutting down on redundant file inputs you'll still take up a lot of space.
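    For reference, the relevant block for the default Local backend looks roughly like this (just a sketch; the provider name and surrounding structure depend on the backend you've configured, so adapt as needed):

        backend {
          providers {
            Local {
              config {
                filesystems {
                  local {
                    # Try hard-linking inputs first, then fall back to symlinks,
                    # and finally to plain copies if linking isn't possible.
                    localization: [
                      "hard-link", "soft-link", "copy"
                    ]
                    # The same idea applies to how call-cache hits are duplicated.
                    caching {
                      duplication-strategy: [
                        "hard-link", "soft-link", "copy"
                      ]
                    }
                  }
                }
              }
            }
          }
        }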

    Do you mean you want Cromwell to restart the workflow from the last point it failed at? If you have call caching enabled, Cromwell keeps track of all jobs that succeeded and can pick up from the jobs that failed.
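    Call caching itself is switched on at the top level of the config, roughly like this (again a sketch; it also needs a persistent database configured so the cache survives Cromwell restarts):

        call-caching {
          # Record every successful call so a re-run of the same workflow
          # can reuse those results instead of re-executing the jobs.
          enabled = true
          # Don't reuse cached results whose underlying files have gone missing or changed.
          invalidate-bad-cache-results = true
        }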

    Hope this helps

  • matdmset (Ghent, Member)

    Hi @Ruchi

    Thanks for the reply! I may have been a bit unclear about the nature of the error. Disk space is no issue, so symlinks, hardlinks or copies make no real difference. It's the sheer number of inodes used by Cromwell that's giving the shared FS a hard time. Our HPC admins have had to bump our inode limit twice so the workflow could complete.

    I do have call caching enabled. But does that mean I can just go ahead and remove the files used/created by previous jobs?

    Thanks again
    M

  • matdmset (Ghent, Member)

    Hi @Ruchi

    Great, I didn't know that hard links share an inode! I'd asked our admins and they said it wouldn't make a difference, so it's good to know it will. I'll keep this in mind for future reference.

    Thanks!
    M
