Support Questions

Data Compression Doesn't work in ORC with SNAPPY Compression

I have a Hive managed partitioned table (4 partitions) holding about 2 TB of data, stored as ORC with no compression specified. I created a duplicate table stored as ORC with SNAPPY compression and inserted the data from the old table into it. The load took longer than usual, which I assume is because compression was enabled. I then checked the file size of the duplicate table with SNAPPY compression, and it came to around 2.6 TB. The row counts of both tables match. Any idea why the SNAPPY-compressed ORC table is larger than the original?
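For reference, a duplicate table like the one described might be set up as follows. The table and column names here are hypothetical; the key detail is that the ORC codec is chosen via the `orc.compress` table property:

```sql
-- Hypothetical schema; "orc.compress" selects the ORC codec for this table.
CREATE TABLE events_snappy (
  id BIGINT,
  payload STRING
)
PARTITIONED BY (dt STRING)
STORED AS ORC
TBLPROPERTIES ("orc.compress" = "SNAPPY");

-- Copy the data across, writing all partitions dynamically.
SET hive.exec.dynamic.partition.mode = nonstrict;
INSERT OVERWRITE TABLE events_snappy PARTITION (dt)
SELECT id, payload, dt FROM events_orig;
```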

1 ACCEPTED SOLUTION

Master Collaborator

Are you sure the ORC tables you created had no compression? By default, hive.exec.orc.default.compress is set to ZLIB, so your original table was probably ZLIB-compressed. ZLIB generally achieves a higher compression ratio than SNAPPY (at higher CPU cost), which would explain why the SNAPPY copy ended up larger.

There are some interesting threads to read:

https://community.hortonworks.com/questions/4067/snappy-vs-zlib-pros-and-cons-for-each-compression.h...

https://community.hortonworks.com/articles/49252/performance-comparison-bw-orc-snappy-and-zlib-in-h....
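To confirm which codec each table is actually using, you can check the session default and the table definition (the table name below is hypothetical):

```sql
-- Print the session default ORC codec (ZLIB unless overridden).
SET hive.exec.orc.default.compress;

-- The table-level setting, if any, appears under TBLPROPERTIES.
SHOW CREATE TABLE events_orig;
```

You can also inspect an ORC file directly with the ORC file dump utility (`hive --orcfiledump <hdfs-path>`), whose output includes a "Compression:" line showing the codec used when the file was written.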


2 REPLIES


Thanks @Deepesh. You are right: the default compression is ZLIB, and that accounts for the difference in size.
