
Window size for samples of varying coverage

The documentation of the CNVDiscoveryPipeline makes it clear that several run parameters should be adjusted based on sequencing coverage. My samples are of variable coverage, but generally fall into three bins: ~25% are ~50x, ~25% are ~25x, and the remainder are ~10x coverage. We have about 400 remaining samples that we intend to sequence to ~10x.

I'm wondering what people's thoughts/experiences are with setting window size for a set of samples with variable coverage. I considered setting parameters based on the average coverage across samples, but this may not be optimal for any particular sample. I also considered using the optimal parameters for the high-coverage samples (smaller windows), which would sacrifice processing time for sensitivity, if I'm reading the documentation correctly. However, it sounds like window sizes that are far too small will also reduce sensitivity. If I set parameters with respect to my high-coverage samples (30-50x), would these parameters be considered far too small for samples at 10x?

Answers

  • bhandsaker Member, Broadie, Moderator

    You don't say how many samples you have in total, but I would be tempted to run discovery in several batches grouped by sequencing depth with different window sizes, then filter and re-genotype the discovered sites across all batches. This is assuming you have at least 100 samples or so in each batch.

    For the 25x batch, I would use default parameters.

    For the 50x batch, you could try cutting the initial window size in half, to 500bp. You may be able to genotype any very short calls you discover there in the 25x samples, and perhaps even in the 10x samples with some loss of genotype accuracy. Genotyping with enough accuracy to detect an association, or even to do imputation, is an easier problem than discovery, so projecting like this from higher-depth samples into lower-depth samples isn't a crazy strategy.

    For the 10x batch, you could try the default parameters, but you will probably have to double them (e.g. 2000bp initial windows). If the window size is too small, you will get an overwhelming number of small calls, most of which are just technical fluctuations; these slow the processing down and will have to be filtered out during QC anyway. A sketch of the per-batch settings follows below.
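
    Putting the three batches above together, here is a minimal sketch of the per-batch window settings. The tilingWindowSize and tilingWindowOverlap argument names are from the CNVDiscoveryPipeline documentation, but the overlap-is-half-the-window convention and the exact values are starting points to tune, not official recommendations:

    ```python
    # Per-batch window settings following the advice above: default 1000bp
    # windows at ~25x, halved at ~50x, doubled at ~10x. Argument names come
    # from the CNVDiscoveryPipeline docs; the values are assumptions to
    # check against a pilot run, not official recommendations.
    BATCHES = {
        "50x": {"tilingWindowSize": 500,  "tilingWindowOverlap": 250},
        "25x": {"tilingWindowSize": 1000, "tilingWindowOverlap": 500},
        "10x": {"tilingWindowSize": 2000, "tilingWindowOverlap": 1000},
    }

    def queue_args(batch: str) -> str:
        """Render one batch's window arguments as a command-line fragment
        to splice into the Queue invocation for that discovery run."""
        return " ".join(f"-{name} {value}" for name, value in BATCHES[batch].items())

    for batch in BATCHES:
        print(batch, "->", queue_args(batch))
    # 50x -> -tilingWindowSize 500 -tilingWindowOverlap 250
    # 25x -> -tilingWindowSize 1000 -tilingWindowOverlap 500
    # 10x -> -tilingWindowSize 2000 -tilingWindowOverlap 1000
    ```

    Each fragment would then go into the corresponding batch's run alongside the rest of your usual arguments.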

  • pjmtele Member

    Thanks a lot for the info. That is really helpful.

    Unfortunately, I'll fall short of having 100 samples in each batch. I have 99 samples total: 31 are at ~40-50x, 31 are at ~20-25x, and the remaining 37 are at ~10x.

  • bhandsaker Member, Broadie, Moderator

    Since you are going to sequence the remaining samples to 10x, you could consider basing all of the calling at 10x resolution. You could also do a small run on the other, deeper samples with a smaller window size to see if you can pick up additional sites to genotype in the lower-coverage samples (see the sketch below). Small batches of 31 samples aren't ideal, but they are OK; you just might want to filter more aggressively during QC.
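
    To make that concrete, here is a hedged sketch of splitting a sample manifest into the two runs: every sample goes into the main 10x-tuned discovery batch, and the deeper samples also go into a supplemental small-window batch whose extra sites can later be genotyped across everyone. The manifest format (sample, tab, mean coverage), the file names, and the 20x cutoff are all hypothetical:

    ```python
    import csv

    # Hypothetical manifest: tab-separated "sample<TAB>mean_coverage" lines.
    DEEP_CUTOFF = 20.0  # assumed threshold separating ~10x samples from deeper ones

    main_batch, deep_batch = [], []
    with open("samples.tsv") as fh:
        for sample, coverage in csv.reader(fh, delimiter="\t"):
            main_batch.append(sample)       # every sample joins the main run
            if float(coverage) >= DEEP_CUTOFF:
                deep_batch.append(sample)   # deeper samples also get the
                                            # small-window supplemental run

    # Write one sample list per discovery run (hypothetical file names).
    for path, names in [("batch_main.list", main_batch),
                        ("batch_deep.list", deep_batch)]:
        with open(path, "w") as out:
            out.write("\n".join(names) + "\n")
    ```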
