Issue with Mutect2

Hi, FireCloud team,

I've run Mutect2 several times recently on my WGS data, but every run has failed. I know some of the early failures were caused by old scripts, but I have since switched to the most up-to-date NIO version; unfortunately, the runs still fail with a general error like:
Job Mutect2.M2:16:1 exited with return code 1 which has not been declared as a valid return code. See 'continueOnReturnCode' runtime attribute for more details.
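
(For context: continueOnReturnCode is a Cromwell runtime attribute that declares which exit codes count as success. Below is a minimal sketch of where it sits in a WDL task, with illustrative names only; widening it would merely mask the non-zero exit code, not fix the underlying error shown further down.)

    task M2 {
      File ref_fasta
      File tumor_bam
      String tumor_name
      command {
        gatk Mutect2 -R ${ref_fasta} -I ${tumor_bam} -tumor ${tumor_name} -O output.vcf
      }
      runtime {
        docker: "broadinstitute/gatk:4.0.4.0"
        # Treat exit codes 0 and 1 as success -- this only silences the
        # failure; it does not repair the storage error reported below.
        continueOnReturnCode: [0, 1]
      }
      output {
        File vcf = "output.vcf"
      }
    }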

The detailed errors seem to be associated with Google Compute Engine or Storage; see the examples below.

[June 1, 2018 6:48:12 AM UTC] org.broadinstitute.hellbender.tools.walkers.mutect.Mutect2 done. Elapsed time: 501.93 minutes.
Runtime.totalMemory()=6155141120
code: 0
message: ComputeEngineCredentials cannot find the metadata server. This is likely because code is not running on Google Compute Engine.
reason: null
location: null
retryable: false
com.google.cloud.storage.StorageException: ComputeEngineCredentials cannot find the metadata server. This is likely because code is not running on Google Compute Engine.
    at com.google.cloud.storage.spi.v1.HttpStorageRpc.translate(HttpStorageRpc.java:189)
    at com.google.cloud.storage.spi.v1.HttpStorageRpc.read(HttpStorageRpc.java:515)
    at com.google.cloud.storage.BlobReadChannel$1.call(BlobReadChannel.java:127)
    at com.google.cloud.storage.BlobReadChannel$1.call(BlobReadChannel.java:124)
    at shaded.cloud_nio.com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:94)
    at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:54)
    at com.google.cloud.storage.BlobReadChannel.read(BlobReadChannel.java:124)
    at com.google.cloud.storage.contrib.nio.CloudStorageReadChannel.read(CloudStorageReadChannel.java:114)
    at htsjdk.samtools.reference.IndexedFastaSequenceFile.readFromPosition(IndexedFastaSequenceFile.java:292)
    at htsjdk.samtools.reference.IndexedFastaSequenceFile.getSubsequenceAt(IndexedFastaSequenceFile.java:244)
    at org.broadinstitute.hellbender.utils.fasta.CachingIndexedFastaSequenceFile.getSubsequenceAt(CachingIndexedFastaSequenceFile.java:309)
    at org.broadinstitute.hellbender.engine.ReferenceFileSource.queryAndPrefetch(ReferenceFileSource.java:80)
    at org.broadinstitute.hellbender.engine.ReferenceDataSource.queryAndPrefetch(ReferenceDataSource.java:64)
    at org.broadinstitute.hellbender.engine.ReferenceContext.getBases(ReferenceContext.java:166)
    at org.broadinstitute.hellbender.engine.ReferenceContext.getBase(ReferenceContext.java:367)
    at org.broadinstitute.hellbender.tools.walkers.mutect.Mutect2Engine.isActive(Mutect2Engine.java:224)
    at org.broadinstitute.hellbender.engine.AssemblyRegionIterator.loadNextAssemblyRegion(AssemblyRegionIterator.java:159)
    at org.broadinstitute.hellbender.engine.AssemblyRegionIterator.next(AssemblyRegionIterator.java:135)
    at org.broadinstitute.hellbender.engine.AssemblyRegionIterator.next(AssemblyRegionIterator.java:34)
    at org.broadinstitute.hellbender.engine.AssemblyRegionWalker.processReadShard(AssemblyRegionWalker.java:290)
    at org.broadinstitute.hellbender.engine.AssemblyRegionWalker.traverse(AssemblyRegionWalker.java:271)
    at org.broadinstitute.hellbender.engine.GATKTool.doWork(GATKTool.java:892)
    at org.broadinstitute.hellbender.cmdline.CommandLineProgram.runTool(CommandLineProgram.java:134)
    at org.broadinstitute.hellbender.cmdline.CommandLineProgram.instanceMainPostParseArgs(CommandLineProgram.java:179)
    at org.broadinstitute.hellbender.cmdline.CommandLineProgram.instanceMain(CommandLineProgram.java:198)
    at org.broadinstitute.hellbender.Main.runCommandLineProgram(Main.java:160)
    at org.broadinstitute.hellbender.Main.mainEntry(Main.java:203)
    at org.broadinstitute.hellbender.Main.main(Main.java:289)
Caused by: java.io.IOException: ComputeEngineCredentials cannot find the metadata server. This is likely because code is not running on Google Compute Engine.
    at shaded.cloud_nio.com.google.auth.oauth2.ComputeEngineCredentials.refreshAccessToken(ComputeEngineCredentials.java:106)
    at shaded.cloud_nio.com.google.auth.oauth2.OAuth2Credentials.refresh(OAuth2Credentials.java:149)
    at shaded.cloud_nio.com.google.auth.oauth2.OAuth2Credentials.getRequestMetadata(OAuth2Credentials.java:135)
    at shaded.cloud_nio.com.google.auth.http.HttpCredentialsAdapter.initialize(HttpCredentialsAdapter.java:96)
    at com.google.cloud.http.HttpTransportOptions$1.initialize(HttpTransportOptions.java:157)
    at shaded.cloud_nio.com.google.api.client.http.HttpRequestFactory.buildRequest(HttpRequestFactory.java:93)
    at shaded.cloud_nio.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.buildHttpRequest(AbstractGoogleClientRequest.java:300)
    at shaded.cloud_nio.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
    at shaded.cloud_nio.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
    at shaded.cloud_nio.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeMedia(AbstractGoogleClientRequest.java:380)
    at shaded.cloud_nio.com.google.api.services.storage.Storage$Objects$Get.executeMedia(Storage.java:6133)
    at com.google.cloud.storage.spi.v1.HttpStorageRpc.read(HttpStorageRpc.java:494)
    ... 26 more
Caused by: java.net.UnknownHostException: metadata
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
    at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
    at sun.net.www.http.HttpClient.New(HttpClient.java:339)
    at sun.net.www.http.HttpClient.New(HttpClient.java:357)
    at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1202)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1138)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1032)
    at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:966)
    at shaded.cloud_nio.com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93)
    at shaded.cloud_nio.com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972)
    at shaded.cloud_nio.com.google.auth.oauth2.ComputeEngineCredentials.refreshAccessToken(ComputeEngineCredentials.java:104)
    ... 37 more

and

java.lang.RuntimeException: java.util.concurrent.ExecutionException: com.google.cloud.storage.StorageException: www.googleapis.com
    at org.broadinstitute.hellbender.utils.nio.SeekableByteChannelPrefetcher.read(SeekableByteChannelPrefetcher.java:309)
    at htsjdk.samtools.seekablestream.SeekablePathStream.read(SeekablePathStream.java:86)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at htsjdk.samtools.seekablestream.SeekableBufferedStream.read(SeekableBufferedStream.java:104)
    at htsjdk.samtools.util.BlockCompressedInputStream.readBytes(BlockCompressedInputStream.java:571)
    at htsjdk.samtools.util.BlockCompressedInputStream.readBytes(BlockCompressedInputStream.java:560)
    at htsjdk.samtools.util.BlockCompressedInputStream.processNextBlock(BlockCompressedInputStream.java:510)
    at htsjdk.samtools.util.BlockCompressedInputStream.nextBlock(BlockCompressedInputStream.java:468)
    at htsjdk.samtools.util.BlockCompressedInputStream.seek(BlockCompressedInputStream.java:380)
    at htsjdk.tribble.readers.TabixReader$IteratorImpl.next(TabixReader.java:427)
    at htsjdk.tribble.readers.TabixIteratorLineReader.readLine(TabixIteratorLineReader.java:46)
    at htsjdk.tribble.TabixFeatureReader$FeatureIterator.readNextRecord(TabixFeatureReader.java:170)
    at htsjdk.tribble.TabixFeatureReader$FeatureIterator.<init>(TabixFeatureReader.java:159)
    at htsjdk.tribble.TabixFeatureReader.query(TabixFeatureReader.java:133)
    at org.broadinstitute.hellbender.engine.FeatureDataSource.refillQueryCache(FeatureDataSource.java:555)
    at org.broadinstitute.hellbender.engine.FeatureDataSource.queryAndPrefetch(FeatureDataSource.java:524)
    at org.broadinstitute.hellbender.engine.FeatureManager.getFeatures(FeatureManager.java:308)
    at org.broadinstitute.hellbender.engine.FeatureContext.getValues(FeatureContext.java:163)
    at org.broadinstitute.hellbender.engine.FeatureContext.getValues(FeatureContext.java:115)
    at org.broadinstitute.hellbender.engine.FeatureContext.getValues(FeatureContext.java:230)
    at org.broadinstitute.hellbender.tools.walkers.mutect.SomaticGenotypingEngine.callMutations(SomaticGenotypingEngine.java:144)
    at org.broadinstitute.hellbender.tools.walkers.mutect.Mutect2Engine.callRegion(Mutect2Engine.java:182)
    at org.broadinstitute.hellbender.tools.walkers.mutect.Mutect2.apply(Mutect2.java:183)
    at org.broadinstitute.hellbender.engine.AssemblyRegionWalker.processReadShard(AssemblyRegionWalker.java:295)
    at org.broadinstitute.hellbender.engine.AssemblyRegionWalker.traverse(AssemblyRegionWalker.java:271)
    at org.broadinstitute.hellbender.engine.GATKTool.doWork(GATKTool.java:892)
    at org.broadinstitute.hellbender.cmdline.CommandLineProgram.runTool(CommandLineProgram.java:134)
    at org.broadinstitute.hellbender.cmdline.CommandLineProgram.instanceMainPostParseArgs(CommandLineProgram.java:179)
    at org.broadinstitute.hellbender.cmdline.CommandLineProgram.instanceMain(CommandLineProgram.java:198)
    at org.broadinstitute.hellbender.Main.runCommandLineProgram(Main.java:160)
    at org.broadinstitute.hellbender.Main.mainEntry(Main.java:203)
    at org.broadinstitute.hellbender.Main.main(Main.java:289)
Caused by: java.util.concurrent.ExecutionException: com.google.cloud.storage.StorageException: www.googleapis.com
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.broadinstitute.hellbender.utils.nio.SeekableByteChannelPrefetcher$WorkUnit.getBuf(SeekableByteChannelPrefetcher.java:136)
    at org.broadinstitute.hellbender.utils.nio.SeekableByteChannelPrefetcher.fetch(SeekableByteChannelPrefetcher.java:255)
    at org.broadinstitute.hellbender.utils.nio.SeekableByteChannelPrefetcher.read(SeekableByteChannelPrefetcher.java:300)
    ... 33 more
Caused by: com.google.cloud.storage.StorageException: www.googleapis.com
    at com.google.cloud.storage.spi.v1.HttpStorageRpc.translate(HttpStorageRpc.java:189)
    at com.google.cloud.storage.spi.v1.HttpStorageRpc.read(HttpStorageRpc.java:515)
    at com.google.cloud.storage.BlobReadChannel$1.call(BlobReadChannel.java:127)
    at com.google.cloud.storage.BlobReadChannel$1.call(BlobReadChannel.java:124)
    at shaded.cloud_nio.com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:94)
    at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:54)
    at com.google.cloud.storage.BlobReadChannel.read(BlobReadChannel.java:124)
    at com.google.cloud.storage.contrib.nio.CloudStorageReadChannel.read(CloudStorageReadChannel.java:114)
    at org.broadinstitute.hellbender.utils.nio.SeekableByteChannelPrefetcher$WorkUnit.call(SeekableByteChannelPrefetcher.java:131)
    at org.broadinstitute.hellbender.utils.nio.SeekableByteChannelPrefetcher$WorkUnit.call(SeekableByteChannelPrefetcher.java:104)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.UnknownHostException: www.googleapis.com
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
    at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:264)
    at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:367)
    at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1138)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1032)
    at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
    at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
    at shaded.cloud_nio.com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93)
    at shaded.cloud_nio.com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972)
    at shaded.cloud_nio.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
    at shaded.cloud_nio.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
    at shaded.cloud_nio.com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeMedia(AbstractGoogleClientRequest.java:380)
    at shaded.cloud_nio.com.google.api.services.storage.Storage$Objects$Get.executeMedia(Storage.java:6133)
    at com.google.cloud.storage.spi.v1.HttpStorageRpc.read(HttpStorageRpc.java:494)
    ... 12 more

I don't know what's happening here. Could you please suggest some solutions?

Thank you very much

Comments

  • zwzhang Member, Broadie
    edited June 2018

    Hi, abaumann

    Thanks very much. I have a follow-up question about call caching:

    My understanding is that, if I keep the script unchanged, call caching should copy whatever finished successfully in the last run into the current run. However, that does not seem to be the case for me.

    I have been running Mutect2, one of the GATK tools, on FireCloud. One of the major steps splits the work into 50 jobs based on intervals and runs them separately. Most of the jobs finish successfully, but a few fail because of transient errors. If I relaunch the workflow, it splits into 50 jobs based on the same intervals, but it still runs all 50 jobs from scratch without recognizing that some of them succeeded last time (and yes, I enabled call caching). So I end up spending more and more money and time on it.

    Could you please advise on this?

    Thanks

  • zwzhang Member, Broadie

    I also realize that once I re-launch the workflow, the early tasks recognize the cached results and copy them from the previous run (Hit!), for example the CalculateContamination task.

    But when it reaches the M2 task, which splits the job by scatter count (in my case 50, so each subjob handles only a fraction of the genome), I think each subjob should be the same across runs because no parameter has changed. Quite unexpectedly, though, the subjobs can't recognize the previous run (Miss); see the sketch below for the scatter pattern I mean.
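
    (A rough sketch of the scatter in WDL; the names are approximations of the method's, not copied from it.)

        workflow Mutect2 {
          call SplitIntervals { input: scatter_count = 50 }
          # One M2 shard per interval file; each shard covers a fraction
          # of the genome, so with identical inputs every shard should be
          # identical between runs.
          scatter (subintervals in SplitIntervals.interval_files) {
            call M2 { input: intervals = subintervals }
          }
        }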

    If I can't reuse whatever finished successfully among the previous subjobs, I have to rerun all 50 subjobs every time, which dramatically increases my cost and time, and there is no guarantee that a new run will finish successfully given those transient errors.

    Maybe there is something I am missing to set up call caching correctly, but as a newbie I can't figure it out myself.

    Thank you very much

  • abaumann Broad DSDE, Member, Broadie ✭✭✭

    Assuming nothing changed, they should hit in scatters as well. Can you paste in the workflow id so I can take a look to see why it might have missed?

  • zwzhang Member, Broadie

    @abaumann said:
    Assuming nothing changed, they should hit in scatters as well. Can you paste in the workflow id so I can take a look to see why it might have missed?

    Hi, abaumann
    Basically, I see this happening for all my samples. I will take one sample as an example:

    1) This is the earlier run, which has one failed subjob in the Mutect2.M2 task. The workflow id is: 6223b053-e548-486f-a7dc-6edc92f3d8e1
    2) This is the later run, launched after the previous one failed; it is running now and I can see those scatters missing the call cache. The workflow id is: 583fbfb6-9d32-49ba-9bc8-a11929576268

    Thank you very much

  • abaumann Broad DSDE, Member, Broadie ✭✭✭

    I checked an endpoint we don't expose yet that compares what is different between two calls when there is a miss, and it is saying that the input for String intervals is different. Are you sure the intervals file isn't changing in some way?

  • zwzhang Member, Broadie

    @abaumann said:
    I checked an endpoint we don't expose yet that compares what is different between two calls when there is a miss, and it is saying that the input for String intervals is different. Are you sure the intervals file isn't changing in some way?

    Hi, abaumann

    I checked those two runs in detail. The interval files are generated by the previous step, Mutect2.SplitIntervals.

    This is what I saw: the interval file names themselves do not change, but every time I launch an analysis the files are written under a new submission folder in the same bucket. For example:

    In the earlier run:
    scatter0: gs://fc-e048a56e-db66-4fc2-8a9c-57f1c67b625b/cb78d442-8f1b-416e-a31e-971a974a966a/Mutect2/6223b053-e548-486f-a7dc-6edc92f3d8e1/call-SplitIntervals/glob-6f4bc12a708659d4f5f3eecd1cdffff7/0000-scattered.intervals
    scatter1: gs://fc-e048a56e-db66-4fc2-8a9c-57f1c67b625b/cb78d442-8f1b-416e-a31e-971a974a966a/Mutect2/6223b053-e548-486f-a7dc-6edc92f3d8e1/call-SplitIntervals/glob-6f4bc12a708659d4f5f3eecd1cdffff7/0001-scattered.intervals

    In the later run:
    scatter0: gs://fc-e048a56e-db66-4fc2-8a9c-57f1c67b625b/846180d8-f2aa-4162-a2b2-df0c6d202e8a/Mutect2/583fbfb6-9d32-49ba-9bc8-a11929576268/call-SplitIntervals/glob-6f4bc12a708659d4f5f3eecd1cdffff7/0000-scattered.intervals
    scatter1: gs://fc-e048a56e-db66-4fc2-8a9c-57f1c67b625b/846180d8-f2aa-4162-a2b2-df0c6d202e8a/Mutect2/583fbfb6-9d32-49ba-9bc8-a11929576268/call-SplitIntervals/glob-6f4bc12a708659d4f5f3eecd1cdffff7/0001-scattered.intervals

    I am not sure if that's the reason, but if so, this is going to be a serious intrinsic problem: every time you repeat a run, those files are written to a new folder even though the file names stay the same, so this step can never cache because its inputs are always at a different path.

    How do I solve this problem?

    Thanks very much!

  • abaumann Broad DSDE, Member, Broadie ✭✭✭

    Each time a call of a task actually runs (i.e., it was not a cache hit), it writes its outputs to a different place. We do this for provenance reasons, and to protect results for call caching in case you delete old runs.

    If the upstream task (Mutect2.SplitIntervals) ran successfully, and the downstream tasks that used those output files succeeded, then those will also cache hit. So your previously succeeded shards should be getting cache hits. I feel like something else must be changing between runs, but I'd need to dig in to understand better.
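
    To illustrate with a minimal sketch (hypothetical tasks A and B):

        # If A's inputs are unchanged, A cache hits and returns the
        # output file from its earlier run. B then sees an input whose
        # content hash matches the earlier run, so B should cache hit
        # as well, even though the copied file lives at a new path.
        call A { input: bam = tumor_bam }
        call B { input: table = A.table }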

    Would you mind sharing this workspace (reader access is good) with [email protected] so I can take a look?

  • zwzhang Member, Broadie

    @abaumann said:
    Each time a call of a task actually runs (i.e., it was not a cache hit), it writes its outputs to a different place. We do this for provenance reasons, and to protect results for call caching in case you delete old runs.

    If the upstream task (Mutect2.SplitIntervals) ran successfully, and the downstream tasks that used those output files succeeded, then those will also cache hit. So your previously succeeded shards should be getting cache hits. I feel like something else must be changing between runs, but I'd need to dig in to understand better.

    Would you mind sharing this workspace (reader access is good) with [email protected] so I can take a look?

    I just gave you access; sorry about my messy workspace (many failed and aborted runs).
    Thank you very much in advance.

  • abaumann Broad DSDE, Member, Broadie ✭✭✭

    Oh, hm, I didn't get a sharing email for this. What is the URL to the workspace? I have access to several hundred workspaces and can't figure out which one it is :smile:

  • zwzhang Member, Broadie

    @abaumann said:
    Oh, hm, I didn't get a sharing email for this. What is the URL to the workspace? I have access to several hundred workspaces and can't figure out which one it is :smile:

    it is this: https://portal.firecloud.org/#workspaces/meyerson-lab/si-net-10x

  • abaumann Broad DSDE, Member, Broadie ✭✭✭

    OK, figured it out: this is a problem with how Mutect2 works with respect to call caching. The issue is that M2 takes in the intervals as a String and streams that data directly from the bucket using Java NIO. Since it's a String, call caching compares the literal value, and that path differs between runs because of how FireCloud is set up for call caching (it makes fresh copies of files).

    There is a fix on the way in Cromwell that will allow Files to be used in these cases but NOT localized, so that you can stream a File via NIO instead of passing a String: https://github.com/broadinstitute/cromwell/pull/3738. Tagging @Geraldine_VdAuwera to make her aware of that fix, and that this limitation came up in FireCloud with the mutect2-gatk4 method config from https://portal.firecloud.org/#workspaces/help-gatk/Somatic-SNVs-Indels-GATK4/method-configs/gatk/mutect2-gatk4
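
    In WDL terms, the difference looks roughly like this (a sketch with assumed declarations, not the exact method WDL):

        task M2 {
          # Streamed via NIO today, so declared as Strings:
          String ref_fasta
          String tumor_bam

          # The problem input: compared as a literal string for call
          # caching, and the gs:// path changes with every submission,
          # so every shard misses.
          String intervals

          # With the Cromwell fix, these could instead be Files that are
          # hashed by content (identical content -> cache hit) while
          # still being streamed over NIO rather than localized.

          command {
            gatk Mutect2 -R ${ref_fasta} -I ${tumor_bam} -L ${intervals} -O output.vcf
          }
        }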

  • zwzhang Member, Broadie

    @abaumann said:
    OK, figured it out: this is a problem with how Mutect2 works with respect to call caching. The issue is that M2 takes in the intervals as a String and streams that data directly from the bucket using Java NIO. Since it's a String, call caching compares the literal value, and that path differs between runs because of how FireCloud is set up for call caching (it makes fresh copies of files).

    There is a fix on the way in Cromwell that will allow Files to be used in these cases but NOT localized, so that you can stream a File via NIO instead of passing a String: https://github.com/broadinstitute/cromwell/pull/3738. Tagging @Geraldine_VdAuwera to make her aware of that fix, and that this limitation came up in FireCloud with the mutect2-gatk4 method config from https://portal.firecloud.org/#workspaces/help-gatk/Somatic-SNVs-Indels-GATK4/method-configs/gatk/mutect2-gatk4

    Hi, @abaumann
    Thank you very much for figuring it out!

    Just to make sure: this is going to be a fix handled by @Geraldine_VdAuwera, and I can't simply edit my WDL script to fix it myself, right? So I probably need to wait for the moment.

  • KateN Cambridge, MA, Member, Broadie, Moderator admin

    Hi @zwzhang, I just wanted to follow up on what @abaumann said earlier. The Cromwell team is implementing a fix for the issue you've been discussing, so you will need to wait on that; the fix will be announced in the FireCloud release notes.

    I believe Alex tagged Geraldine to simply make her aware that this limitation exists. We like to keep track of limitations so we can monitor the status of the fix and have an answer for other folks who may encounter it in the meantime.

  • Geraldine_VdAuwera Cambridge, MA, Member, Administrator, Broadie admin

    Just to confirm: there is indeed some new Cromwell and WDL functionality coming soon that will address this limitation. We (my team) are planning to update the GATK pipelines accordingly once all that is in place. I expect it will take a few weeks to get there.

    In the meantime, it would in fact be possible to edit the WDLs to disable the streaming feature that is breaking call-cache recognition. This would make the pipelines less efficient (mainly more expensive) but would get past the call-caching limitation. However, we are not planning to do this work ourselves, because we need to focus on updating to the new functionality; we don't have the bandwidth to do both.
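
    Concretely, that edit would look roughly like this (a sketch; the real input names in the mutect2-gatk4 WDL may differ): change the streamed String inputs into plain File inputs so Cromwell localizes the data onto the VM's disk.

        task M2 {
          # Before (NIO streaming; currently breaks call caching):
          #   String ref_fasta
          #   String tumor_bam
          #   String intervals
          # After (localized copies; slower and costlier, but the files
          # are hashed by content, so unchanged shards can cache hit):
          File ref_fasta
          File tumor_bam
          File intervals

          command {
            gatk Mutect2 -R ${ref_fasta} -I ${tumor_bam} -L ${intervals} -O output.vcf
          }
        }

    Note that localizing a WGS bam also means the task needs enough local disk, so the disks runtime attribute would have to grow accordingly.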

  • cbao Member, Broadie ✭✭

    Thanks @zwzhang!
    I just got similar ERROR messages on my WGS data:

    ... ComputeEngineCredentials cannot find the metadata server ...

  • davidben Boston, Member, Broadie, Dev ✭✭✭

    @KateN for a short-term fix, would all NIO inputs -- bams, vcfs, and intervals -- have to be disabled, or would it be enough just to localize vcfs and intervals?

  • KateN Cambridge, MA, Member, Broadie, Moderator admin

    All NIO inputs would have to be disabled while we work to fix the issue with file streaming and call caching.
