Data Compression Doesn't work in ORC with SNAPPY Compression
- Labels: Apache Hadoop, Apache Hive
Created ‎03-23-2017 12:58 PM
I have a Hive managed, partitioned table (4 partitions) holding 2 TB of data, stored as ORC with no compression. I created a duplicate table as ORC with SNAPPY compression and inserted the data from the old table into it. Loading took longer than usual, which I believe is because compression was enabled. I then checked the total file size of the duplicate table with SNAPPY compression and it is around 2.6 TB. I verified the row counts of both tables and they match. Any idea why the duplicate is larger even after enabling SNAPPY compression in ORC?
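For reference, the setup described above corresponds roughly to DDL like the following. This is only a sketch with hypothetical table and column names (sales_orc, sales_orc_snappy, id, amount, dt); the actual DDL and partition column were not posted:

```sql
-- Hypothetical duplicate table, explicitly requesting SNAPPY for ORC
CREATE TABLE sales_orc_snappy (
  id     BIGINT,
  amount DOUBLE
)
PARTITIONED BY (dt STRING)
STORED AS ORC
TBLPROPERTIES ("orc.compress"="SNAPPY");

-- Copy the data over, letting Hive create the partitions dynamically
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE sales_orc_snappy PARTITION (dt)
SELECT id, amount, dt FROM sales_orc;
```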
Created ‎03-23-2017 04:06 PM
Are you sure the ORC table you created was really uncompressed? By default, hive.exec.orc.default.compress is set to ZLIB, so your original table may actually be ZLIB-compressed.
There are some interesting threads on this topic that are worth reading.
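If you are unsure what the original table is actually using, a quick check is to look at the session default and the table's own properties, and to declare the codec explicitly when no compression is really intended. A minimal sketch, assuming a hypothetical table name sales_orc and hypothetical columns:

```sql
-- Print the session default ORC codec (ZLIB unless overridden)
SET hive.exec.orc.default.compress;

-- Look for an explicit orc.compress entry under "Table Parameters";
-- if there is none, the default above applies
DESCRIBE FORMATTED sales_orc;

-- An ORC table that should truly be uncompressed has to say so explicitly
CREATE TABLE sales_orc_uncompressed (
  id     BIGINT,
  amount DOUBLE
)
PARTITIONED BY (dt STRING)
STORED AS ORC
TBLPROPERTIES ("orc.compress"="NONE");

-- At the file level, `hive --orcfiledump <hdfs-path-to-orc-file>` run from a
-- shell reports the compression codec actually used in each ORC file.
```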
Created ‎03-23-2017 05:01 PM
Thanks @Deepesh. You are right, the default compression is ZLIB, and that explains the size difference.
