

# Reduce Reads error

Denver • Member

I recently upgraded from GATK 2.5 to the latest stable release, 2.7-4, but ReduceReads throws the following error when I run it on a BAM file that was produced by PrintReads (3 samples merged into one BAM file).

```
MESSAGE: Bad input: Reduce Reads is not meant to be run for more than 1 sample at a time except for the specific case of tumor/normal pairs in cancer analysis
```

```
java -Xmx6g -Djava.awt.headless=true -jar $CLASSPATH/GenomeAnalysisTK.jar \
    -T ReduceReads \
    -R ../GATK_ref/hg19.fasta \
    -I ../GATK/BQSR/all3Samples2.recal.bam \
    -o ../GATK/BQSR/all3Samples.recal.compressed.bam
```

This used to work with the old version of GATK, but it does not work now. What could be wrong?
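For context on where the sample count comes from: GATK determines how many samples a BAM contains from the distinct `SM` tags in its `@RG` header lines. The following is a minimal sketch of that check, operating on plain header text (e.g. the output of `samtools view -H file.bam`); the function name and sample names are illustrative, not part of GATK.

```python
# Sketch: count distinct samples in a BAM header the way GATK does,
# by collecting the SM: values from @RG lines. Assumes the header was
# dumped beforehand, e.g. `samtools view -H file.bam > header.sam`.

def distinct_samples(header_text):
    """Return the set of SM values found in @RG header lines."""
    samples = set()
    for line in header_text.splitlines():
        if line.startswith("@RG"):
            for field in line.split("\t"):
                if field.startswith("SM:"):
                    samples.add(field[3:])
    return samples

# Illustrative header for a merged, 3-sample BAM:
header = (
    "@HD\tVN:1.4\tSO:coordinate\n"
    "@RG\tID:lane1\tSM:sampleA\tPL:ILLUMINA\n"
    "@RG\tID:lane2\tSM:sampleB\tPL:ILLUMINA\n"
    "@RG\tID:lane3\tSM:sampleC\tPL:ILLUMINA\n"
)
print(sorted(distinct_samples(header)))  # -> ['sampleA', 'sampleB', 'sampleC']
```

More than one `SM` value in this set is exactly the condition that trips the ReduceReads error above.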


Hi @rcholic,

The error message tells you why it is not working. It worked before only because we had not yet built in the safety measures that now prevent people from doing what you are trying to do.

• Member

I was wondering if you could confirm where the ReduceReads walker looks to determine if a bam is composed of 2 or more disparate samples. I presume it looks at the SM field for each of the read groups?

I've pushed quite a few BAMs through our analysis workflow, and a couple hundred of them fail with the above error. All of the reads are from the same individual, but it appears the sequencing center that generated the data was not consistent with the SM fields in the read groups (some are full sample names and some are abbreviated). Assuming a user is certain that all reads are from the same individual/sample, would a simple solution be to specify the -cancer-mode flag (rather than re-header)?

Thanks for your help.

• Member, Dev

Yes, all the GATK tools that operate on the sample level use the SM field. I have no idea if the cancer-mode flag will do what you want, but you absolutely need to fix your BAMs. Even if you can hack through this step, you'll hit more problems in variant calling, and you'll face an absolute nightmare in the future if you ever come back to this data and "a couple hundred" BAMs are wrong. It will be well worth your time to fix the BAMs now and have everything just work. I would also keep a record of exactly what I changed in which files, just in case you ever need to troubleshoot.
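In practice, fixing the headers means rewriting every `@RG` `SM` field to one canonical sample name and then applying the corrected header back to the BAM (e.g. with `samtools reheader`). A minimal sketch of the rewrite step itself, on plain header text; the function name and sample names are illustrative:

```python
# Sketch: rewrite every @RG SM: field in a SAM header to one canonical
# sample name. Operates on plain header text, e.g. the output of
# `samtools view -H in.bam`; the rewritten header can then be applied
# back to the BAM with `samtools reheader`.

def normalize_sm(header_text, canonical):
    """Return header_text with every @RG SM: field set to `canonical`."""
    fixed_lines = []
    for line in header_text.splitlines():
        if line.startswith("@RG"):
            fields = [
                "SM:" + canonical if f.startswith("SM:") else f
                for f in line.split("\t")
            ]
            line = "\t".join(fields)
        fixed_lines.append(line)
    return "\n".join(fixed_lines)

# Illustrative header with inconsistent SM values for the same individual:
header = (
    "@HD\tVN:1.4\tSO:coordinate\n"
    "@RG\tID:lane1\tSM:NA12878\tPL:ILLUMINA\n"
    "@RG\tID:lane2\tSM:na12878_wgs\tPL:ILLUMINA\n"
)
print(normalize_sm(header, "NA12878"))
```

As advised above, log the original and rewritten SM values per file so the change is traceable later.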

Hi @jpitt, you should definitely fix things so that the read groups, sample info, etc. are all correct and consistent -- preferably from the start of your pipeline. Using the cancer-mode flag is not the right way to deal with this.