
Slurm backend

Hello,

I am trying to integrate Cromwell with a Slurm HPC cluster and need to provide a different filesystem to save all the execution scripts, stdout, stderr, etc., since the compute nodes have different NFS mounts. Is there a way to do this? If so, how can I specify the backend storage location on the compute nodes to save this info?

Thanks in advance for your guidance.
-J

Comments

  • mfranklin Member
    If I understand correctly, you sort of want to rewrite a part of `${cwd}` as the compute nodes have a different mount point. Not so much a different file system (as they're both unix-addressable file systems).

    Most of the configs for HPC (Slurm, PBS, etc.) require the use of a SharedFileSystem (SFS). They basically just require the paths to be locally addressable wherever you are. During the submit, you have access to a bash environment to transform the path.


    For example:

    ```
    backend {
      default = slurm

      providers {
        slurm {
          actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
          config {
            submit = """
              newcwd=${cwd}  # make some transformation in bash here
              sbatch \
                --wait \
                -J ${job_name} \
                -D $newcwd \
                -o $newcwd/execution/stdout \
                -e $newcwd/execution/stderr \
                --wrap "/bin/bash ${script}"
            """
          }
        }
      }
    }
    ```
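
    If the goal is simply to have Cromwell write all of its execution folders (scripts, stdout, stderr) under a particular shared mount, the SFS backend config also exposes a `root` setting you can point at that mount. A minimal sketch, where `/shared/nfs/cromwell-executions` is just a placeholder for a path that is mounted identically on the Cromwell host and the compute nodes:

    ```
    backend {
      default = slurm

      providers {
        slurm {
          actor-factory = "cromwell.backend.impl.sfs.config.ConfigBackendLifecycleActorFactory"
          config {
            # Root directory for all execution folders; must resolve to the
            # same location on the Cromwell server and on every compute node.
            root = "/shared/nfs/cromwell-executions"
          }
        }
      }
    }
    ```

    Changing `root` moves the whole execution tree, so you only need the bash transformation in `submit` if the mount point genuinely differs between the submit host and the compute nodes.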