
Data Compression Doesn't work in ORC with SNAPPY Compression


I have a Hive managed partitioned table (4 partitions) holding 2 TB of data, stored as ORC with no compression. I created a duplicate table as ORC with SNAPPY compression and inserted the data from the old table into it. The load took longer than usual, which I believe is due to enabling compression. When I checked the file size of the SNAPPY-compressed duplicate table, it came to around 2.6 TB. I verified the row counts of both tables and they match. Any idea why the size increased even after enabling SNAPPY compression on the ORC table?
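For context, one way to check which codec a table is actually using is to inspect its definition and an ORC file's metadata. A minimal sketch (the table name and HDFS path below are hypothetical; `orc.compress` is the standard ORC table property):

```sql
-- Look for orc.compress in TBLPROPERTIES; if absent, the table falls back
-- to the session default hive.exec.orc.default.compress.
SHOW CREATE TABLE my_orc_table;

-- Hive can also dump ORC file metadata, which reports the codec per file:
-- hive --orcfiledump hdfs:///warehouse/my_orc_table/part=p1/000000_0
```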

1 ACCEPTED SOLUTION


Are you sure the ORC table you created had no compression? By default, hive.exec.orc.default.compress is set to ZLIB, so your original table is probably ZLIB-compressed. ZLIB generally compresses more tightly than SNAPPY (SNAPPY trades compression ratio for speed), which would explain why the SNAPPY copy ended up larger.
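To avoid relying on the session default, the codec can be set explicitly per table. A sketch, with illustrative table and column names:

```sql
-- Pin the codec per table via TBLPROPERTIES (names here are illustrative):
CREATE TABLE events_zlib (id BIGINT, payload STRING)
  STORED AS ORC
  TBLPROPERTIES ("orc.compress"="ZLIB");

CREATE TABLE events_snappy (id BIGINT, payload STRING)
  STORED AS ORC
  TBLPROPERTIES ("orc.compress"="SNAPPY");

-- Or change the session-wide default before creating tables:
SET hive.exec.orc.default.compress=SNAPPY;
```

Comparing the on-disk size of the two variants after loading the same data will show the ratio difference directly.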

There are some interesting threads to read:

https://community.hortonworks.com/questions/4067/snappy-vs-zlib-pros-and-cons-for-each-compression.h...

https://community.hortonworks.com/articles/49252/performance-comparison-bw-orc-snappy-and-zlib-in-h....


2 REPLIES



Thanks @Deepesh. You are right, the default compression is ZLIB, and that is what causes the difference in size.