Pre-processing reads and non-model BQSR

Hi! I’m trying to plan my GATK pipeline and have a few questions. We currently have an assembled genome for a non-model avian species and would like to align reads from a single individual to that genome and then mine the alignment for SNPs. Our end goal is to design a Fluidigm SNP chip. Since we are working with data from only one individual, I’m trying to figure out how best to minimize false-positive SNP calls.

1) Prior to mapping with BWA, is any pre-processing of the reads recommended? The tutorials say to use raw reads… do we not even need to remove adapters?
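
For reference, adapter removal is usually handled by a dedicated trimmer before alignment. Below is a minimal sketch using Trimmomatic on paired-end Illumina data; the file names and the adapter FASTA are placeholders, and the exact options should be checked against your Trimmomatic version:

    # Hypothetical paired-end adapter clipping with Trimmomatic.
    # reads_1/2.fastq.gz and adapters.fa are placeholder names.
    java -jar trimmomatic.jar PE \
        reads_1.fastq.gz reads_2.fastq.gz \
        trimmed_1P.fastq.gz trimmed_1U.fastq.gz \
        trimmed_2P.fastq.gz trimmed_2U.fastq.gz \
        ILLUMINACLIP:adapters.fa:2:30:10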

2) Does GATK have any specific recommendations for BQSR in non-model organisms (i.e., when no reference set of high-quality SNPs is available)? I’ve found a couple of references to this process (http://gatkforums.broadinstitute.org/discussion/3286/quality-score-recalibration-for-non-model-organisms, https://gist.github.com/brantfaircloth/4315737). One user indicated they were using SNPs with quality scores greater than 30 as their reference SNPs for BQSR; another used a cutoff of 90. Does GATK have any other guidance, or is this largely a judgment call?
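
For reference, the approach described in those links is usually called “bootstrapping” BQSR: call variants on the unrecalibrated data, keep only the most confident sites, feed those back in as the known-sites set, and repeat until the recalibration converges. A sketch of one iteration in GATK 3-style syntax (matching this forum’s era) follows; file names and the QUAL cutoff are placeholders, and note that GATK 4 renames these tools and flags (e.g., ApplyBQSR, --known-sites):

    # 1) Initial round of calling on the unrecalibrated BAM (placeholder names).
    java -jar GenomeAnalysisTK.jar -T HaplotypeCaller \
        -R reference.fa -I sample.bam -o raw_variants.vcf

    # 2) Keep only high-confidence sites as a stand-in known-sites set.
    java -jar GenomeAnalysisTK.jar -T SelectVariants \
        -R reference.fa -V raw_variants.vcf \
        -select "QUAL > 30.0" -o confident_sites.vcf

    # 3) Recalibrate base qualities against the bootstrapped known-sites VCF.
    java -jar GenomeAnalysisTK.jar -T BaseRecalibrator \
        -R reference.fa -I sample.bam \
        -knownSites confident_sites.vcf -o recal.table

    # 4) Write the recalibrated BAM, then re-call and repeat until the
    #    recalibration results stabilize between rounds.
    java -jar GenomeAnalysisTK.jar -T PrintReads \
        -R reference.fa -I sample.bam -BQSR recal.table -o sample.recal.bam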

Answers

  • ritamichelle (Indiana, Member)

    Thanks for the info! Just to better my understanding, how would more extensive trimming influence the BQSR process? For example, our lab typically trims poor-quality bases (below Phred 20) from the 5' and 3' ends of Illumina reads (and sometimes does more extensive trimming, depending on the downstream analyses). Would this interfere with base quality score recalibration? In other words, if you alter the initial quality distribution of the data, does that negatively impact the BQSR correction?
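
    For concreteness, the kind of end trimming described above might look like the following Trimmomatic step (hypothetical file names; LEADING and TRAILING drop bases below the given quality from the read ends):

        # Hypothetical 5'/3' quality trimming at Phred 20 (single-end shown).
        java -jar trimmomatic.jar SE \
            reads.fastq.gz reads.trimmed.fastq.gz \
            LEADING:20 TRAILING:20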
