
LZMA compression codec support

Rising Star

Hi experts!

It seems that the LZMA algorithm could be quite suitable for some Hadoop use cases (such as storing historical, immutable data). Does anyone know whether it is possible to implement it somehow, or to reuse an existing library?

Any ideas are very welcome!

Thanks!


5 REPLIES

Mentor
You may want to read Facebook's experience with that algorithm:
https://issues.apache.org/jira/browse/HADOOP-6837?focusedCommentId=13687660&page=com.atlassian.jira....

It looks like you can try https://github.com/yongtang/hadoop-xz (it appears to be pure Java rather than a native extension, but that is not necessarily a bad thing given LZMA's goals).

Rising Star

Thank you for your reply!

It seems promising, but as far as I understand it requires rebuilding your Hadoop distribution package.

What if I just have a CDH package and want to plug this in as an extension (for example, the way LZO is plugged in through parcels)?

Thanks!

Mentor (Accepted Solution)
The hadoop-xz project (on GitHub) does not require you to rebuild CDH.
Just build that project and use the produced jar with the suggested configuration
change (add "io.sensesecure.hadoop.xz.XZCodec" to io.compression.codecs).
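For reference, here is a minimal sketch of what that change could look like in core-site.xml. The first three codecs in the list are illustrative defaults; keep whatever codec list your cluster already has configured and append the XZ codec to it:

<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,io.sensesecure.hadoop.xz.XZCodec</value>
</property>

The built jar also needs to be on the classpath of every process that reads or writes the data (clients as well as MapReduce tasks), for example by placing it in the Hadoop lib directory on each node or shipping it with -libjars.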

Rising Star

Many thanks!

Rising Star

So, I've started to play with this and ran into something interesting. When I process data compressed with LZMA, I read twice as much data as I actually have on HDFS.

For example, the Hadoop client (hadoop fs -du) shows a size of about 100 GB.

Then I run a MapReduce job (such as a select count(1)) over this data, check the MR counters, and find that "HDFS bytes read" is about twice that (around 200 GB).

With the gzip and bzip2 codecs, the Hadoop client file size and the MR counters are similar.
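For anyone trying to reproduce the comparison, a rough sketch of the two measurements (the path and job id below are placeholders, and the counter group/name follows the Hadoop 2 naming, which may differ across versions):

# Size of the data as the HDFS client reports it
hadoop fs -du -s -h /warehouse/my_lzma_table

# After the job finishes, read back its bytes-read counter
mapred job -counter job_1234567890123_0001 \
    org.apache.hadoop.mapreduce.FileSystemCounter HDFS_BYTES_READ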