We've moved!
For WDL questions, see the WDL specification and WDL docs.
For Cromwell questions, see the Cromwell docs and please post any issues on GitHub.

Slurm backend


I am trying to integrate Cromwell with a Slurm HPC cluster and need to provide a different filesystem for saving all the execution scripts, stdout, stderr, etc., since the compute nodes have different NFS mounts. Is there a way to do this? If so, how can I specify the backend storage location on the compute nodes where this information should be saved?

Thanks in advance for your guidance.


  • mfranklin Member
    If I understand correctly, you essentially want to rewrite part of `${cwd}` because the compute nodes have a different mount point. It's not so much a different file system (both are Unix-addressable file systems).

    Most of the HPC configs (Slurm, PBS, etc.) require the use of a SharedFileSystem (SFS). They basically just require paths to be locally addressable wherever you are. Within the submit block, you have access to a bash environment in which you can transform the path.

    For example:

    backend {
      default = slurm

      providers {
        slurm {
          actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
          config {
            submit = """
              newcwd=${cwd} # make some transformation in bash here
              sbatch \
                --wait \
                -J ${job_name} \
                -D $newcwd \
                -o $newcwd/execution/stdout \
                -e $newcwd/execution/stderr \
                --wrap "/bin/bash ${script}"
            """
          }
        }
      }
    }
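    As a concrete sketch of the transformation on the `newcwd=${cwd}` line: suppose the head node (where Cromwell runs) writes under `/mnt/head` while the compute nodes see the same NFS export at `/mnt/compute` (both mount points are hypothetical, substitute your own). A simple prefix swap with bash parameter expansion would then look like:

    ```shell
    #!/bin/bash
    # Hypothetical mount points: Cromwell writes under /mnt/head on the head node,
    # but the compute nodes mount the same NFS export at /mnt/compute.
    cwd="/mnt/head/cromwell-executions/wf/1234/call-foo"  # stands in for Cromwell's ${cwd}

    # Strip the head-node prefix and prepend the compute-node prefix.
    newcwd="/mnt/compute${cwd#/mnt/head}"
    echo "$newcwd"  # /mnt/compute/cromwell-executions/wf/1234/call-foo
    ```

    In the actual config, `${cwd}` is interpolated by Cromwell before the script runs, so the transformation operates on whatever path Cromwell hands the submit script.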