Snappy vs. Zlib - Pros and Cons of Each Compression Codec in Hive/ORC Files


I had a couple of questions on file compression. We plan on using the ORC format for a data zone that will be heavily accessed by end users via Hive/JDBC.

What is the recommendation when it comes to compressing ORC files?

Do you think Snappy is a better option (over ZLIB) given Snappy's better read performance? (Snappy is more performant in a read-often scenario, which is usually the case for Hive data.) When would you choose Zlib?
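For reference, the codec is chosen per table when it is created, so switching between the two is just a table property. A minimal sketch (table and column names are made up for illustration):

    CREATE TABLE sales_orc (
      id        BIGINT,
      amount    DOUBLE,
      sale_date STRING
    )
    STORED AS ORC
    -- "SNAPPY", "ZLIB" (the default) or "NONE"
    TBLPROPERTIES ("orc.compress" = "SNAPPY");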

As a side note: compression is a double-edged sword. You can also run into performance problems when moving from large files spread across multiple nodes to smaller files that interact badly with the HDFS block size. You can blunt this with a sensible compression strategy.
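The kind of knobs I mean are along these lines; a sketch of session-level settings only (property names as in HiveConf, availability depends on your Hive version):

    -- Default codec for new ORC tables
    SET hive.exec.orc.default.compress=ZLIB;
    -- Bias the writer towards SPEED or COMPRESSION
    SET hive.exec.orc.compression.strategy=SPEED;
    -- Stripe size in bytes (64 MB here), ideally aligned with the HDFS block size
    SET hive.exec.orc.default.stripe.size=67108864;
    -- Pad stripes so they do not straddle HDFS block boundaries
    SET hive.exec.orc.default.block.padding=true;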

1 ACCEPTED SOLUTION

Expert Contributor

David's post is from 2014. Since then we switched away from standard Zlib in ORC.

See the slides from "ORC 2015: Faster, Better, Smaller".

Each column type (string, int, etc.) gets a different Zlib-compatible compression scheme (i.e., a different trade-off between RLE, Huffman, and LZ77).

After the columnar improvements, ORC+Zlib no longer has Zlib's historic weaknesses: it is faster than Snappy to read, smaller than Snappy on disk, and only ~10% slower than Snappy to write.
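If you want to see what the writer actually chose, the ORC file dump utility that ships with Hive prints the file's compression kind and the per-column encodings; the path below is only an example, point it at one of your table's files:

    hive --orcfiledump /apps/hive/warehouse/mydb.db/mytable/000000_0   # e.g. "Compression: ZLIB" plus DIRECT_V2 / DICTIONARY_V2 column encodings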


9 REPLIES

Master Mentor

@Ancil McBarnett Performance! Performance! and performance! 🙂

ORC + Zlib is the way to go.

Here are the details based on a test done in my env.

run 1 vs. run 2

[screenshot: 481-screen-shot-2015-11-16-at-34624-pm.png]
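If anyone wants to reproduce something similar on their own data, the rough shape of the test is below (abc is the source table; the _zlib/_snappy table names are hypothetical, not the ones from my run):

    -- Write the same data twice, once per codec
    CREATE TABLE abc_zlib   STORED AS ORC TBLPROPERTIES ("orc.compress" = "ZLIB")   AS SELECT * FROM abc;
    CREATE TABLE abc_snappy STORED AS ORC TBLPROPERTIES ("orc.compress" = "SNAPPY") AS SELECT * FROM abc;

    -- Compare on-disk size (totalSize under Table Parameters), then time identical queries against each copy
    DESCRIBE FORMATTED abc_zlib;
    DESCRIBE FORMATTED abc_snappy;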


Thanks for sharing! How many datasets were in the Links table? Is the dataset in Links a subset of the ABC dataset?

Master Mentor

ABC and Links were separate tables. @Jonas Straub


ORC+ZLib seems to have the better performance. ZLib is also the default compression option; however, there are definitely valid cases for Snappy.

I like the comment from David (2014, before the ZLib update): "SNAPPY for time based performance, ZLIB for resource performance (Drive Space)." Make sure you check out David's post: https://streever.atlassian.net/wiki/display/HADOOP/Optimizing+ORC+Files+for+Query+Performance

As @gopal pointed out in the comment, we have switched to a new ZLib algorithm, so the combination of ORC + (new) ZLib is the way to go. The performance difference between ZLib and Snappy for disk writes is rather small.
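If a table was already created with the default codec, it can also be switched in place; only files written after the change pick up the new codec (the table name is just an example):

    ALTER TABLE web_logs SET TBLPROPERTIES ("orc.compress" = "SNAPPY");
    -- Existing ORC files keep their original codec; data written from now on uses Snappy.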

Btw, ZLib is not always the better option; when it comes to HBase, Snappy is usually better 🙂


Thanks @gopal. In this case we should definitely use ORC+(new)Zlib. I'll edit my answer 🙂

Explorer

@gopal just to confirm, these improvements would require HDP 2.3.x or later, correct?

Master Guru

Any updates for 2016?

Expert Contributor

ORC is considering adding a faster decompression codec in 2016 - zstd (ZStandard). The enum value for it has already been reserved, but we still need to work through the trade-offs involved in ZStd - more on that sometime later this year.

https://issues.apache.org/jira/browse/ORC-46
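(For readers coming back to this later: ORC releases that eventually shipped zstd expose it through the same table property. This assumes a Hive/ORC build with ZSTD support; the table name is made up.)

    CREATE TABLE metrics_zstd (id BIGINT, v DOUBLE)
    STORED AS ORC
    TBLPROPERTIES ("orc.compress" = "ZSTD");   -- requires an ORC release with zstd support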

But bigger wins are in motion for ORC with LLAP: the in-memory format for LLAP isn't compressed at all, so it performs like ORC without the compression overhead, while the cold data on disk sits around in Zlib.