
LZMA compression codec support

Solved

Rising Star

Hi experts!

 

It seems that the LZMA algorithm could be quite suitable for some Hadoop use cases (such as storing historical, immutable data). Does anyone know whether it is possible to implement it somehow, or to reuse an existing library?

 

Any ideas are very welcome!

 

thanks!
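(For context on why LZMA appeals for immutable historical data: it typically trades much slower compression for a noticeably better ratio than gzip or bzip2, which suits write-once archives. A quick local sketch using only the Python standard library, with made-up sample data:)

```python
import bz2
import gzip
import lzma

# Made-up "historical log" data: repetitive records compress well.
data = b"2015-01-01,event,click,user42\n" * 10000

sizes = {
    "gzip": len(gzip.compress(data)),
    "bzip2": len(bz2.compress(data)),
    "lzma/xz": len(lzma.compress(data)),
}
for name, size in sizes.items():
    print(f"{name}: {size} bytes ({size / len(data):.2%} of original)")
```

On data like this, lzma/xz usually produces the smallest output of the three, at the cost of CPU time on compression.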


Re: LZMA compression codec support

Master Guru
You may want to read Facebook's experience with that algorithm:
https://issues.apache.org/jira/browse/HADOOP-6837?focusedCommentId=13687660&page=com.atlassian.jira....

It looks like you can try https://github.com/yongtang/hadoop-xz (although it appears to be a pure-Java implementation rather than a native extension, that is not necessarily a bad thing given LZMA's goals).

Re: LZMA compression codec support

Rising Star

Thank you for your reply!

Seems promising, but as far as I understand it requires rebuilding your Hadoop distribution package.

What if I just have a CDH package and want to plug this in as an extension (for example, the way LZO is added through parcels)?

thanks!


Re: LZMA compression codec support

Master Guru
The hadoop-xz project (github) does not require you to rebuild your CDH.
Just build that project and use the produced jar with the suggested config
change (add "io.sensesecure.hadoop.xz.XZCodec" to io.compression.codecs).
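(Assuming the jar built from hadoop-xz is on the cluster classpath, the config change described above would look roughly like this in core-site.xml; the other codecs listed in the value are illustrative and should match whatever your cluster already has.)

```xml
<!-- core-site.xml: append XZCodec to the existing codec list -->
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,io.sensesecure.hadoop.xz.XZCodec</value>
</property>
```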

Re: LZMA compression codec support

Rising Star

Many thanks!

Re: LZMA compression codec support

Rising Star

So, I've started to play with this and ran into something interesting. When I process data compressed with LZMA, the job reads twice as much data as I actually have on HDFS.

For example, the hadoop client (hadoop fs -du) shows something like 100 GB.

Then I run an MR job (like select count(1)) over this data, check the MR counters, and find "HDFS bytes read" is about twice that (around 200 GB).

With the gzip and bzip2 codecs, the hadoop client file size and the MR counters are similar.