
**zevkronenberg**
Posts: **4** · Member ✭

Greetings,

I am trying to incorporate genotype likelihoods into a downstream analysis. I have two questions:

1) Why is the most likely genotype scaled to a Phred score of zero?

2) Is there a way to undo the scaling? I have seen downstream tools undo the scaling, but I don't know how they do it. Is there an equation that will return an estimated genotype likelihood from the scaled genotype likelihoods?

Thank you for your time.

Zev Kronenberg


## Answers

**Eric Banks**
Posts: **690** · Member, Administrator, GATK Dev, Broadie, Moderator, DSDE Dev, GP admin

Hi Zev,

1) This is just a normalization (not a scaling) and does not affect the actual posterior probabilities at all. This isn't the appropriate forum to go over the mathematical rationale, though, so you'll either need to take my word for it or ask for an explanation somewhere like SeqAnswers.

2) There is no need to undo the normalization and I cannot imagine that any downstream tools are actually doing this (again see #1). The likelihoods in the VCFs are not "scaled" or "estimated" and should be taken as accurate representations of the data.
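As a sketch of what this normalization amounts to (illustrative only, not GATK's actual implementation): the raw Phred-scaled likelihoods are simply shifted so that the smallest value, belonging to the most likely genotype, becomes zero.

```python
# Sketch of PL normalization (not GATK's actual code): raw Phred-scaled
# genotype likelihoods are shifted so the most likely genotype gets PL = 0.
def normalize_pls(raw_pls):
    """Subtract the minimum so the best genotype's PL becomes 0."""
    best = min(raw_pls)
    return [pl - best for pl in raw_pls]

raw = [63, 43, 120]        # hypothetical raw PLs for AA, AB, BB
print(normalize_pls(raw))  # [20, 0, 77]
```

Because this is a uniform shift in log-space, the relative likelihoods (and hence any posteriors derived from them) are unchanged.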

Hope that helps!

Eric Banks, PhD -- Director, Data Sciences and Data Engineering, Broad Institute of Harvard and MIT

**zevkronenberg**
Posts: **4** · Member ✭

I am going to try to clarify my question:

I completely trust the genotype calculations, but I am still having trouble incorporating PL into a population genetics measure. My problem is the normalization:

The normalization sets the most likely genotype to a Phred-scaled likelihood of 0, i.e. a likelihood of 1.

> "Normalized, Phred-scaled likelihoods for genotypes as defined in the VCF specification"

> "The most likely genotype (given in the GT field) is scaled so that it's P = 1.0 (0 when Phred-scaled), and the other likelihoods reflect their Phred-scaled likelihoods relative to this most likely genotype."

So in the case of a terrible het call, the Phred-scaled genotype likelihoods will be something like (2, 0, 1) for AA, AB, BB.

The problem is assessing the uncertainty of the het call when it is reported with a likelihood of 1 / a Phred score of zero.

When I integrate over the other genotypes, AA & BB, I am concerned I am introducing a bias.

Maybe I don't need to worry about it. I just noticed that other tools that consume GATK VCFs, like BEAGLE, use a modified PL in which the most likely genotype is not required to have a Phred score of zero.

Thanks.

**zevkronenberg**
Posts: **4** · Member ✭

I think the easiest way around this is:

`phred / sum(phreds)`

That will somewhat undo the normalization.

**Eric Banks**
Posts: **690** · Member, Administrator, GATK Dev, Broadie, Moderator, DSDE Dev, GP admin

Okay, I think I understand the disconnect now. It is critical to understand that likelihoods are different from probabilities. With likelihoods, only the relative values matter, so 20 vs. 10 is the same as 10 vs. 0; that is why we don't lose any information during the normalization process. With that in mind, the GATK does not strictly need to normalize the likelihoods (which is why, e.g., Beagle doesn't require it); we just do it because it's cleaner, and it's the convention. So there is no bias involved in the normalization process.

I do have to say, though, that I'm concerned you aren't quite understanding what Phred-scaled likelihoods are. The "fix" you propose above is not good: the likelihoods are in log-space and need to be converted to real-space before you can create normalized posterior probabilities from them. I don't mean to put you down (I am sure you are very competent), but please make sure you understand what data you have in hand before trying to manipulate it!
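A minimal sketch of the conversion described above, assuming PL = -10·log10(L) and a flat prior (illustrative only, not GATK code):

```python
def pl_to_posteriors(pls):
    """Convert Phred-scaled likelihoods to normalized genotype probabilities.

    PL = -10 * log10(L), so L = 10 ** (-PL / 10). With a flat prior, the
    normalized real-space likelihoods are the genotype posteriors.
    (A sketch under those assumptions, not GATK's implementation.)
    """
    likelihoods = [10 ** (-pl / 10) for pl in pls]
    total = sum(likelihoods)
    return [l / total for l in likelihoods]

# The "terrible het call" from above: PLs (2, 0, 1) for AA, AB, BB.
print(pl_to_posteriors([2, 0, 1]))

# Only relative PLs matter: shifting every value by a constant (i.e. undoing
# or redoing the normalization) leaves the posteriors unchanged.
a = pl_to_posteriors([20, 10, 30])
b = pl_to_posteriors([10, 0, 20])
assert all(abs(x - y) < 1e-12 for x, y in zip(a, b))
```

Note that dividing the Phred values themselves (`phred / sum(phreds)`) mixes log-space quantities as if they were probabilities, which is why it does not recover anything meaningful.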

Good luck!

Eric Banks, PhD -- Director, Data Sciences and Data Engineering, Broad Institute of Harvard and MIT