

# Downsampling with HaplotypeCaller

eflynn90 (Washington DC) · Posts: 56

Hello,

I'm running HaplotypeCaller on 80 BAMs, multithreaded (-nt 14), with either VCFs or BED files as the intervals file. I am trying to downsample to approximately 200 reads (-dcov 200). However, my output VCFs clearly show much higher depth (300 to 500 reads) in some regions. Also, HC slows down drastically whenever it hits a region of very deep pileups (over 1000 reads).

Is there any way to speed up HC on regions of very deep pileups? Is there a way to make downsampling stricter?

Elise


• eflynn90

I am running GATK v2.6-5-gba531bd, compiled 2013/07/18.

Hi Elise,

To clarify, downsampling is applied per sample, so with 80 samples, -dcov 200 allows up to 200 reads per sample at each site. If you want roughly 200 reads in total, you will need to downsample quite a bit more aggressively.

Geraldine Van der Auwera, PhD
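The per-sample arithmetic Geraldine describes can be sketched as follows; the numbers come from this thread, but the helper functions are purely illustrative, not part of GATK:

```python
# Illustrative arithmetic only: -dcov is a per-sample cap, so the total
# number of reads kept at a site scales with the number of samples.

def max_reads_at_site(n_samples: int, dcov_per_sample: int) -> int:
    """Upper bound on the reads HaplotypeCaller may keep at one site."""
    return n_samples * dcov_per_sample

def dcov_for_total(n_samples: int, target_total: int) -> int:
    """Per-sample -dcov that keeps roughly target_total reads overall."""
    return max(1, target_total // n_samples)

print(max_reads_at_site(80, 200))  # 80 samples x -dcov 200 -> up to 16000 reads
print(dcov_for_total(80, 200))     # ~200 reads in total -> -dcov 2 per sample
```

Whether such a low per-sample -dcov is advisable is a separate question; the point is only that the cap multiplies across samples.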

• eflynn90

Sorry, I wasn't clear. I am seeing over 300-500 reads per sample. And the regions of very deep pileups that I mentioned are over 1000 reads per sample.

I see. That shouldn't happen; downsampling can lead to a somewhat smaller number of reads than the targeted coverage in some conditions, but never more. Can you tell me how you're assessing the depth in the VCF? Also, were your bam files compressed with ReduceReads?

Geraldine Van der Auwera, PhD

• eflynn90

I'm assessing the depth by using the DP annotation in the sample columns of the VCF. They seem to match with the AD columns as well. Can you see the screenshot that I uploaded with my original post? There were some examples there.

I thought that the problem might have been that there were multiple bams per sample, and HC was taking 200 reads per bam. However, even my samples with only one bam associated with them have coverage over 500 in some locations.
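Reading per-sample DP from the sample columns (rather than INFO/DP) can be sketched like this; the record and sample names below are hypothetical, chosen to resemble the depths reported in this thread:

```python
# Parse the DP value out of each sample column of one VCF record.
# The FORMAT column (field 9) names the keys; each sample column
# holds the matching values.

def per_sample_dp(record: str, sample_names: list[str]) -> dict[str, int]:
    fields = record.rstrip("\n").split("\t")
    fmt_keys = fields[8].split(":")          # e.g. GT:AD:DP:GQ
    dp_index = fmt_keys.index("DP")
    depths = {}
    for name, col in zip(sample_names, fields[9:]):
        depths[name] = int(col.split(":")[dp_index])
    return depths

# Hypothetical two-sample record with per-sample depths well above 200:
record = ("1\t12345\t.\tA\tG\t99\tPASS\t.\t"
          "GT:AD:DP:GQ\t0/1:260,263:523:99\t0/1:150,160:310:99")
print(per_sample_dp(record, ["sampleA", "sampleB"]))
# {'sampleA': 523, 'sampleB': 310}
```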

Can you also post your command line and console output?

Geraldine Van der Auwera, PhD

• eflynn90

Can you see the photo now? It's a png.

Yes, I see it now. Can you also post your command line and console output?

Geraldine Van der Auwera, PhD

• eflynn90

Here they are. There was an error which we have been intermittently getting in the console output (identical runs will produce it one time but not another), but that's an issue for another day. :-)

OK, you might have a bug here. Could you isolate a snippet of the bam file that reproduces the downsampling issue and upload it to our FTP server? We'll need to take a closer look at this.

Geraldine Van der Auwera, PhD

• eflynn90

Yeah, sure. I'll upload that later today.

• eflynn90

Okay, six .bam snippets, a ped file, a vcf intervals file, and the command line I used are all uploaded in the folder Downsampling_with_HC on the FTP server. I reproduced the errors myself with those six bam files in their entirety, using that VCF as the intervals file, so you should be able to reproduce them as well.

Please let me know if there is anything else that you need, and thanks again for your help.

• eflynn90

Did you find the files on the FTP server? Any progress on the issue?

Ack, my bad, I failed to log these into the tracker. Will do asap. Sorry for the delay.

Geraldine Van der Auwera, PhD

@eflynn90, I found your files, but it seems the bams are invalid. samtools view returns

```
[bam_header_read] EOF marker is absent. The input is probably truncated.
```

The files must have been damaged in transfer. Can you please re-upload these, but this time zip the whole lot of them into an archive?

Geraldine Van der Auwera, PhD

• eflynn90

Hi, sorry, my fault! I sent you unheadered files, just with the reads of interest. I'll upload headered files by the end of the day.

Ah, no worries. I probably won't get to them anymore today, but I'll process them first thing on Monday.

Geraldine Van der Auwera, PhD

• eflynn90

Oops, forgot to do this Friday afternoon. It should all be on the ftp server now though. Downsampling_with_HC.tar.gz

It looks like there's a problem with your new files too. I tried to generate index files for the BAMs since you didn't upload any, but both samtools and Picard flipped out over them. Even Picard ValidateSamFile errors out on the files. Please validate and test your files fully before uploading them to us.

Geraldine Van der Auwera, PhD

• eflynn90

Sorry for wasting your time. I've checked the new files, so they should work, and I've included the index files. Downsampling_with_HC.tar.gz

I think your new upload might not have overwritten the previous one, because the archive named "Downsampling_with_HC.tar.gz" still contains broken bams without index files. Can you please try uploading again with a different file name?

Geraldine Van der Auwera, PhD

• eflynn90

Oh, my. Alright, Downsampling_with_HC_2013-08-27.tar.gz

Hurray, they work

OK, I've been able to replicate your issue, so I'll have the devs look at it more closely. I'll let you know what they find.

Geraldine Van der Auwera, PhD

• eflynn90

Yayy! Thank you!

Hi @eflynn90,

Sorry it's taken a while, but we have finally found the time to confirm that this is indeed a bug in downsampling that specifically affects HaplotypeCaller processing. We'll let you know when we fix the issue.

Geraldine Van der Auwera, PhD

• Member · Posts: 18

I can see in the documentation (https://www.broadinstitute.org/gatk/gatkdocs/org_broadinstitute_gatk_tools_walkers_haplotypecaller_HaplotypeCaller.php) that HaplotypeCaller has a "--maxReadsInRegionPerSample 1000" default setting. However, changing this parameter has no effect on the output. Is this parameter still a work in progress?