Using GATK UnifiedGenotyper on a pooled sample

I have sequencing data from a pool of 80 individuals.
The individuals were not indexed (barcoded), so I have a single FASTQ file and a single BAM file, and I cannot separate any one sample's data from it.
I am trying to run GATK UnifiedGenotyper v3.3 with the --sample_ploidy option.
I have some questions.
Command:
java -Xmx100g -jar /ruby/Tools/GATK/GenomeAnalysisTK-3.3/GenomeAnalysisTK.jar -T UnifiedGenotyper -R ./Ref.fasta -I 1.bam -o 1_unified.vcf --sample_ploidy 160 -minIndelFrac 0.05 --genotype_likelihoods_model BOTH -pnrm EXACT_GENERAL_PLOIDY -nct 4 -nt 10
--sample_ploidy = 160 = 80 individuals in the pool × 2 (diploid)
BAM file size: 3.4 GB
Server spec:
CPU cores: 40
Memory: 256 GB
I started the process 4 days ago.
Progress is at 0.3%, with an estimated 176.9 weeks remaining.
That is far too slow to ever complete.
How long should UnifiedGenotyper take on data like this?
Are there any recommendations to speed up this job?
Best Answers
-
Geraldine_VdAuwera Cambridge, MA admin
Hi Hubert, the genotyping engine is not designed to handle such a large ploidy. Consider the resulting number of genotype combinations; this leads to an astronomical number of calculations. That is why it is taking so long. To be frank it is not realistic to try to genotype such a large pool, at least not with GATK tools. Your only option is to reduce the ploidy, with the understanding that this reduces your ability to discover minor alleles. But at least you will be able to capture majority alleles. What are you trying to study?
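As a rough back-of-the-envelope sketch of why the genotype space explodes with ploidy (this is only the count of unordered allele combinations, not the actual GATK genotyping algorithm, whose per-site work also scales with read depth):

from math import comb

def genotype_count(ploidy, n_alleles):
    # Number of distinct unordered genotypes (multisets of alleles)
    # for a given total sample ploidy and number of segregating alleles.
    return comb(ploidy + n_alleles - 1, n_alleles - 1)

for ploidy in (2, 20, 40, 160):
    print(ploidy, [genotype_count(ploidy, a) for a in (2, 3, 4)])

With ploidy 2 the counts stay in single digits (3, 6, 10), but with --sample_ploidy 160 a tri-allelic site already has 13,041 possible genotypes and a tetra-allelic site 708,561, and the engine has to consider a space like this at every candidate site. Reducing the ploidy (for example to 20 or 40, with the loss of minor-allele sensitivity described above) shrinks that space by orders of magnitude.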
-
Hubert South Korea ✭
@Geraldine_VdAuwera said:
Hi Hubert, the genotyping engine is not designed to handle such a large ploidy. Consider the resulting number of genotype combinations; this leads to an astronomical number of calculations. That is why it is taking so long. To be frank it is not realistic to try to genotype such a large pool, at least not with GATK tools. Your only option is to reduce the ploidy, with the understanding that this reduces your ability to discover minor alleles. But at least you will be able to capture majority alleles. What are you trying to study?

I really appreciate it. Actually, I'm not familiar with NGS, so I'm trying to find a way to use NGS data in my population study. I reduced the ploidy and got a VCF file. (Thank you very much!) However, I can't tell which individuals the alleles come from. I have one more question: in that case, should I have pooled the 80 samples with indexes? (I think that would be reasonable...)
-
Sheila Broad Institute admin
@Hubert
Hi,
Yes, if you want to know exactly which individuals the alleles come from, you will need the individuals to be barcoded. I'm not sure that is possible in your case, however, since you already have the pooled, unbarcoded data.
Have a look at this thread for more information.
-Sheila
Answers
That's so kind of you, @Sheila. I will make barcoded pooled samples next time. The link is also very helpful. As always, many thanks to the GATK team!
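For readers planning a barcoded (indexed) pooling design like the one discussed above, here is a minimal sketch of what demultiplexing a pooled FASTQ into per-sample files can look like. It is only illustrative: the 6 bp inline barcode at the start of each read, the barcodes.tsv mapping file, and the file names are assumptions, not part of the original thread, and in practice a dedicated demultiplexer or the sequencer's own software would normally be used.

import gzip

def load_barcodes(path):
    # barcodes.tsv (assumed format): one "BARCODE<TAB>sample_name" per line
    table = {}
    with open(path) as fh:
        for line in fh:
            barcode, sample = line.split()
            table[barcode] = sample
    return table

def demultiplex(fastq_gz, barcode_table, barcode_len=6):
    # Open one output FASTQ per sample.
    out = {s: open(s + ".fastq", "w") for s in barcode_table.values()}
    with gzip.open(fastq_gz, "rt") as fh:
        while True:
            record = [fh.readline() for _ in range(4)]  # one FASTQ record
            if not record[0]:
                break
            barcode = record[1][:barcode_len]
            sample = barcode_table.get(barcode)
            if sample is not None:  # drop reads with unknown barcodes
                # Trim the barcode off the sequence and quality lines.
                record[1] = record[1][barcode_len:]
                record[3] = record[3][barcode_len:]
                out[sample].writelines(record)
    for handle in out.values():
        handle.close()

demultiplex("pooled_reads.fastq.gz", load_barcodes("barcodes.tsv"))

Each resulting per-sample FASTQ can then be aligned and genotyped individually at the organism's normal ploidy (2 here), avoiding the very large pooled --sample_ploidy discussed above.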