Can compression of VCFs (when specifying output file as .vcf.gz) be improved?

bpowbpow

This is not a bug per se, in that it does not produce incorrect output, but I think it is fairly described as an "unintended consequence": very poorly compressed VCF output files.

GATK allows output VCF files to be written through Picard's BlockCompressedOutputStream when the output file is given the extension .vcf.gz, which I consider very good behavior. However, after doing some minor external manipulation, I noticed that the files produced this way are suboptimally compressed; sometimes they are even larger than the uncompressed VCF files.

Since the problem occurs in GATK-Lite, I was able to look through the source code to see what is going on. From what I can tell, the issue is that VCFWriter calls mWriter.flush() at the end of VCFWriter.add() for each variant. Per the documentation for BlockCompressedOutputStream.flush():

WARNING: flush() affects the output format, because it causes the current contents of uncompressedBuffer to be compressed and written, even if it isn't full.
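To see the effect concretely, here is a rough demonstration (my own sketch, not GATK code; it uses the Picard-era net.sf.samtools.util.BlockCompressedOutputStream, and the record contents and file names are made up) that writes the same repetitive records with and without a per-record flush:

import java.io.File;
import java.nio.charset.StandardCharsets;
import net.sf.samtools.util.BlockCompressedOutputStream;

public class FlushDemo {
    public static void main(String[] args) throws Exception {
        byte[] record = "chr1\t12345\t.\tA\tG\t50\tPASS\tDP=30\n"
                .getBytes(StandardCharsets.UTF_8);

        // Per-record flush: mimics what VCFWriter does today.
        File flushed = new File("flushed.vcf.gz");
        BlockCompressedOutputStream a = new BlockCompressedOutputStream(flushed);
        for (int i = 0; i < 10000; i++) {
            a.write(record);
            a.flush(); // forces the tiny buffer contents into their own BGZF block
        }
        a.close();

        // Default behavior: blocks fill to ~64 KB before being compressed.
        File buffered = new File("buffered.vcf.gz");
        BlockCompressedOutputStream b = new BlockCompressedOutputStream(buffered);
        for (int i = 0; i < 10000; i++) {
            b.write(record);
        }
        b.close();

        System.out.printf("per-record flush: %d bytes%n", flushed.length());
        System.out.printf("default blocking: %d bytes%n", buffered.length());
    }
}

The flushed file should come out dramatically larger, since each ~40-byte record carries its own BGZF block header and deflate stream instead of sharing one with thousands of similar records.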

As a result, instead of the default block size of about 64 KB, the BGZF-formatted .vcf.gz files produced by GATK contain one block per line, which leaves gzip very little repetition to exploit. Since I don't know what issue originally required a flush after every variant, I'm not sure of the best fix, but it may be necessary to wrap BlockCompressedOutputStream, when used by VCFWriter, so that these flushes are intercepted and effective compression restored.
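As a rough sketch of such a wrapper (the class name is hypothetical, and this deliberately ignores whatever requirement motivates VCFWriter's per-variant flush):

import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

/**
 * Hypothetical wrapper: passes all writes through to the underlying
 * BlockCompressedOutputStream but ignores flush(), so BGZF blocks are
 * only emitted once the ~64 KB uncompressedBuffer actually fills
 * (or on close).
 */
public class NonFlushingOutputStream extends FilterOutputStream {
    public NonFlushingOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len); // avoid FilterOutputStream's byte-at-a-time default
    }

    @Override
    public void flush() {
        // Intentionally a no-op: per-variant flushes from VCFWriter are dropped.
    }

    @Override
    public void close() throws IOException {
        out.close(); // still compresses the final block and writes the BGZF EOF marker
    }
}

VCFWriter could then write through new NonFlushingOutputStream(new BlockCompressedOutputStream(outputFile)). The obvious caveat is that anything depending on records actually reaching disk before close() would break.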

Of course, it is possible to simply write the file uncompressed and then compress it in a separate step, but that incurs extra disk I/O that ought to be avoidable.
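For completeness, that separate-step workaround might look like this (again just a sketch, with made-up file names; it is essentially what the bgzip utility does):

import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import net.sf.samtools.util.BlockCompressedOutputStream;

public class Recompress {
    public static void main(String[] args) throws IOException {
        // Copy a plain-text VCF into a BGZF-compressed .vcf.gz,
        // letting blocks fill to ~64 KB for normal compression ratios.
        try (InputStream in = new BufferedInputStream(new FileInputStream("calls.vcf"));
             OutputStream out = new BlockCompressedOutputStream(new File("calls.vcf.gz"))) {
            byte[] buf = new byte[64 * 1024];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
            }
        }
    }
}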
