Expert Contributor
Posts: 87
Registered: ‎09-17-2014
Accepted Solution

LZMA compression codec support

Hi experts!

 

It seems that the LZMA algorithm could be pretty suitable for some Hadoop use cases (like storing historical immutable data). Does anyone know whether it is possible to implement it somehow, or to reuse an existing library?

 

Any ideas are very welcome!

 

thanks!

Posts: 1,827
Kudos: 406
Solutions: 292
Registered: ‎07-31-2013

Re: LZMA compression codec support

You may want to read Facebook's experience with that algorithm:
https://issues.apache.org/jira/browse/HADOOP-6837?focusedCommentId=13687660&page=com.atlassian.jira....

It looks like you can try https://github.com/yongtang/hadoop-xz (although
it seems to be pure Java rather than a native extension, which is not
necessarily a bad thing given LZMA's true goals).
Expert Contributor
Posts: 87
Registered: ‎09-17-2014

Re: LZMA compression codec support

Thank you for your reply!

Seems promising, but as far as I understand it requires rebuilding your Hadoop distribution package.

What if I just have a CDH package and want to plug this in as an extension (for example, the way LZO does through parcels)?

thanks!

Posts: 1,827
Kudos: 406
Solutions: 292
Registered: ‎07-31-2013

Re: LZMA compression codec support

The hadoop-xz project (github) does not require you to rebuild your CDH.
Just build that project and use the produced jar with the suggested config
change (add "io.sensesecure.hadoop.xz.XZCodec" to io.compression.codecs).
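For reference, the config change could look roughly like the sketch below. The `XZCodec` class name comes from the hadoop-xz project; the default codec list is the standard Hadoop one, and the jar path is a placeholder you would adjust for your cluster.

```xml
<!-- core-site.xml: register the XZ (LZMA2) codec alongside the default codecs.
     Assumes the hadoop-xz jar is already on the Hadoop classpath,
     e.g. via HADOOP_CLASSPATH=/path/to/hadoop-xz.jar (placeholder path). -->
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,io.sensesecure.hadoop.xz.XZCodec</value>
</property>
```

After that, files with the codec's extension (`.xz`) should be picked up transparently by MapReduce input formats, the same way `.gz` and `.bz2` are.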
Expert Contributor
Posts: 87
Registered: ‎09-17-2014

Re: LZMA compression codec support

many thanks!

Expert Contributor
Posts: 87
Registered: ‎09-17-2014

Re: LZMA compression codec support

So, I've started to play with this and ran into an interesting thing: when I process data compressed with LZMA, the job reads twice as much data as I actually have on HDFS.

For example, the Hadoop client (hadoop fs -du) reports something like 100 GB.

Then I run an MR job (e.g., a select count(1)) over this data, check the MR counters, and find "HDFS bytes read" is twice that (around 200 GB).

With the gzip and bzip2 codecs, the Hadoop client file size and the MR counters match.
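For anyone reproducing the comparison, a rough sketch of the two measurements (the HDFS path and job id are placeholders):

```shell
# On-disk size of the compressed dataset as HDFS sees it:
hadoop fs -du -s -h /data/events_xz

# After the MR job finishes, pull the same figure from its counters
# (FileSystemCounter group name is from the Hadoop MapReduce counters):
mapred job -counter <job_id> \
  org.apache.hadoop.mapreduce.FileSystemCounter HDFS_BYTES_READ
```

With gzip or bzip2 input the two numbers should be close; the question here is why they diverge by 2x for the XZ-compressed data.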
