HaplotypeScore and nearby variants

pdexheimer Member ✭✭✭✭
edited August 2013 in Ask the GATK team

I've run into a situation with HaplotypeScore that I just don't quite understand. This experiment consists of exome sequencing on 32 subjects with our phenotype of interest (no "normals"). A subset of the subjects have known variants that were identified by Sanger sequencing. One of those SNVs was called by UnifiedGenotyper, but then filtered out by VQSR. Here's the INFO for it:

AC=2;AF=0.031;AN=64;BaseQRankSum=10.532;DP=4793;Dels=0.00;FS=0.000;HaplotypeScore=11.1957;InbreedingCoeff=-0.0323;MLEAC=2;MLEAF=0.031;MQ=58.98;MQ0=0;MQRankSum=1.801;QD=12.47;ReadPosRankSum=0.696;VQSLOD=2.37;culprit=HaplotypeScore

As you can see, it's generally very good except that HaplotypeScore is high (the PASSing tranche in this dataset has a minimum VQSLOD of 3.17, so this is relatively close). When I started investigating in IGV, I noticed that a different subject carries a 3bp deletion 9 bases upstream (that is, nucleotides -11 through -9 relative to this SNV are deleted). My understanding of HaplotypeScore (which was confirmed by ebanks) is that it is calculated on each sample separately, and then averaged into a cohort score - so that deletion in a different sample shouldn't penalize my overall score.

But when I dropped the sample with the deletion and re-called everything, the HaplotypeScore dropped from 11.2 to 3.9. The other metrics didn't change substantially, and it now has a passing LOD. This says to me that the sample-level HaplotypeScore on that dropped sample was very high - but it doesn't look abnormal in IGV (the indel has no HS, of course).
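A back-of-envelope check, assuming the final score is a plain unweighted mean and that the other 31 per-sample scores were unchanged by the re-call (both are assumptions on my part):

```python
n_all, mean_all = 32, 11.2    # cohort size and reported HaplotypeScore
n_rest, mean_rest = 31, 3.9   # after dropping the deletion carrier

# Implied per-sample score of the dropped sample under those assumptions:
implied = n_all * mean_all - n_rest * mean_rest
print(f"{implied:.1f}")  # ~237.5, enormous next to the cohort mean
```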

Have I hit a dead end? I can't think of any other ways to look at this, and the answer may be that this is just an unlucky variant that can't be reliably classified (even with a process as cool as VQSR, there are bound to be a few of those). I'm about to re-call everything with HaplotypeCaller; do you have any other ideas of things I can try?

Answers

  • ebanks Broad Institute Member, Broadie, Dev ✭✭✭✭

    Ah, that's interesting. What I said in the other thread was true, but I should probably give some more details.

    The first step is to construct a list of all possible haplotypes, and that list is generated from the reads for all samples. So even though no other sample contains that deletion, the corresponding haplotype is used when calculating the overall score in step two.

    The second step is to calculate a per-sample score by iterating over all the haplotypes from step 1 with each sample's reads. And then the final score is the average of the various per-sample scores.
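    To make the structure concrete, here's a rough Python sketch of that two-step scheme. Everything in it (the helper names, the toy mismatch-based scoring) is illustrative, not the actual GATK implementation:

    ```python
    def candidate_haplotypes(reads_by_sample):
        # Step 1: candidates are drawn from ALL samples' reads, so one
        # sample's deletion haplotype enters everyone's scoring.
        haps = set()
        for reads in reads_by_sample.values():
            haps.update(reads)  # stand-in for real haplotype construction
        return haps

    def per_sample_score(reads, haps):
        # Step 2 (toy version): penalize a sample's reads by their
        # distance to the closest candidate haplotype.
        def dist(a, b):
            return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))
        return sum(min(dist(r, h) for h in haps) for r in reads)

    def haplotype_score(reads_by_sample):
        # Final score: the average of the per-sample scores.
        haps = candidate_haplotypes(reads_by_sample)
        scores = [per_sample_score(r, haps) for r in reads_by_sample.values()]
        return sum(scores) / len(scores)

    reads = {"s1": ["ACGTACGT"], "s2": ["ACGAACGT"]}
    print(haplotype_score(reads))  # 0.0 here: every read matches a candidate
    ```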

    So somehow the inclusion of the deletion in the global list of haplotypes is inflating the HaplotypeScore. Are you sure there's no evidence for the deletion in the other samples? Perhaps the problem is a bug: maybe the fact that the deletion only partially overlaps the 10-base window used for the HaplotypeScore is making that haplotype look too much like the reference...?

  • pdexheimer Member ✭✭✭✭

    Oh, I see - score is computed per-sample, but the potential haplotypes are determined globally.

    I think this explains what I'm seeing, then. The global list contains three "good" haplotypes - reference, deletion, and SNV - and a whole bunch of "error" haplotypes that we can ignore. When the score is computed at the site of the SNV, the best two candidates from the global list are chosen to score against (since there's only one alt allele). Given the way quality is calculated for the haplotypes, the reference will be the overwhelming favorite and the second choice will be one of the other two "good" candidates. Which means that when we encounter the sample that actually carries the #3 haplotype, it will get a poor score because half of its reads don't match either of the two best candidates. By removing the sole sample with the deletion, I resolved the conflict for the #2 haplotype and ensured that every sample was actually drawn from the haplotypes being scored against. A toy example follows.
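    Here is that failure mode in miniature (all sequences and counts are made up, and mismatch counting is a stand-in for GATK's real read-likelihood model): keep only the two best-supported candidates from the global list, and the sample drawn from the third haplotype pays for it.

    ```python
    from collections import Counter

    REF, SNV, DEL = "ACGTTACGTT", "ACGTTACGTA", "ACTTACGTT"  # hypothetical

    reads = {
        "s1": [REF, SNV, SNV, SNV],  # het SNV carrier (SNV-biased coverage)
        "s2": [REF, REF, REF, REF],  # hom ref
        "s3": [REF, REF, DEL, DEL],  # het deletion carrier
    }

    # Keep the two most common haplotypes across the whole cohort:
    # reference wins easily, and here the SNV outvotes the deletion.
    counts = Counter(r for rs in reads.values() for r in rs)
    kept = [h for h, _ in counts.most_common(2)]

    def mismatch(a, b):
        return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

    for sample, rs in reads.items():
        score = sum(min(mismatch(r, h) for h in kept) for r in rs)
        print(sample, score)  # s1 and s2 score 0; s3 pays for every
                              # deletion read, since neither kept
                              # candidate matches it
    ```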

    In effect, I think this penalizes regions that are variable across a cohort. That probably explains why so few of my calls in the MHC pass the filter (it always seemed like more got filtered there than I expected). The only solution I can come up with is to determine the candidate haplotypes per-sample as well, but that may have unintended consequences I haven't considered.

  • pdexheimer Member ✭✭✭✭

    Indeed, HaplotypeCaller is running now. I'm getting ever closer to making that my default caller - will probably do so after I finally upgrade to Java 7 and get to the latest GATK version. Thanks for your help and explanations.
