Window size for samples of varying coverage

pjmtele Member

The documentation of the CNVDiscoveryPipeline makes it clear that several run parameters should be adjusted based on sequencing coverage. My samples are of variable coverage but generally fall into three bins: ~25% are ~50x, ~25% are ~25x, and the remainder are ~10x. We have about 400 remaining samples that we intend to sequence to ~10x.

I'm wondering what people's thoughts/experiences are with setting window size for a set of samples of variable coverage. I considered setting parameters based on the average coverage across samples, but that may not be optimal for any particular sample. I also considered using the optimal parameters for the high-coverage samples (smaller windows), which would sacrifice processing time for sensitivity, if I'm reading the documentation correctly. However, it sounds like window sizes that are too small will, in the extreme, also reduce sensitivity. If I set parameters with respect to my high-coverage samples (30-50x), would those parameters be far too small for samples at 10x?

Best Answer


  • bhandsaker Member, Broadie

    You don't say how many samples you have in total, but I would be tempted to run discovery in several batches grouped by sequencing depth with different window sizes, then filter and re-genotype the discovered sites across all batches. This is assuming you have at least 100 samples or so in each batch.

    For the 25x batch, I would use default parameters.

    For the 50x batch, you could try cutting the initial window size in half to 500bp. Any very short calls you get could then be genotyped in the 25x samples, and perhaps even in the 10x samples, with some loss of genotype accuracy. Genotyping with enough accuracy to detect an association, or even to do imputation, is an easier problem than discovery, so projecting like this from higher-depth samples into lower-depth samples isn't a crazy strategy.

    For the 10x batch, you could try the default parameters, but you will probably have to double them (e.g. 2000bp initial windows). If the window size is too small, you will get an overwhelming number of small calls, most of which will be just technical fluctuations; these slow down processing and will just have to be filtered out during QC anyway.

  • pjmtele Member

    Thanks a lot for the info. That is really helpful.

    Unfortunately, I'll fall short of having 100 samples in each batch. I have 99 samples total: 31 are at ~40-50x, 31 are at ~20-25x, and the remaining 37 are at ~10x.
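
    The batch-specific window sizes suggested above can be summarized as a tiny lookup. This is an illustrative sketch, not GenomeSTRiP code: the function name and the coverage cutoffs (40x, 20x) are assumptions chosen to separate the three batches discussed in the thread, not documented values.

    ```python
    def suggested_initial_window_bp(mean_coverage: float) -> int:
        """Return an initial discovery window size (bp) for a batch's mean depth,
        following the rule of thumb above: halve the 1000bp default at high
        depth, keep it at mid depth, double it at low depth."""
        if mean_coverage >= 40:   # high-depth batch (~50x): halve the default
            return 500
        if mean_coverage >= 20:   # mid-depth batch (~25x): keep the default
            return 1000
        return 2000               # low-depth batch (~10x): double the default

    # The three batches from the thread:
    for name, cov in {"50x": 50, "25x": 25, "10x": 10}.items():
        print(name, suggested_initial_window_bp(cov))
    # -> 50x 500 / 25x 1000 / 10x 2000
    ```

    The point of keeping this as a per-batch lookup rather than a continuous inverse-scaling formula is that the advice above is stepwise (halve, default, double), and discovery is run separately per batch anyway.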
