Created on 01-09-2016 05:09 PM - edited 09-16-2022 02:56 AM
It seems that the LZMA algorithm could be a good fit for some Hadoop use cases (such as storing historical, immutable data). Does anyone know whether it is possible to implement this somehow, or to reuse an existing library?
Any ideas are very welcome!
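To make the "historical, immutable data" motivation concrete, here is a small local sketch (pure Python stdlib, not a Hadoop codec) comparing LZMA against gzip and bzip2 on repetitive, log-like data; the sample record is made up for illustration:

```python
import bz2
import gzip
import lzma

# Synthetic "historical" data: repetitive log-like records compress well.
data = b"2016-01-09 17:09:00 INFO event=page_view user=42\n" * 10_000

gz = len(gzip.compress(data))
bz = len(bz2.compress(data))
xz = len(lzma.compress(data))  # LZMA compression, .xz container

print(f"raw={len(data)} gzip={gz} bzip2={bz} lzma={xz}")
```

LZMA typically trades slower compression for a better ratio, which is why it is attractive for write-once, read-rarely archives.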
Created 01-09-2016 09:46 PM
Created 01-11-2016 07:19 PM
Thank you for your reply!
Seems promising, but as far as I understand it requires rebuilding your Hadoop distribution package.
What if I just have a CDH package and want to plug this in as an extension (for example, the way LZO does through parcels)?
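In general, a codec can be plugged in without rebuilding Hadoop by placing its jar (plus any native libraries) on the cluster classpath and registering the codec class in core-site.xml. A hedged sketch of that registration is below; the XZCodec class name is illustrative (it comes from the third-party hadoop-xz project) and depends on whichever library you actually pick:

```xml
<!-- core-site.xml: register an additional compression codec.
     The last class name is an assumption; substitute the class
     provided by your chosen LZMA/XZ codec library. -->
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,io.sensesecure.hadoop.xz.XZCodec</value>
</property>
```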
Created 01-12-2016 02:58 PM
Created 01-13-2016 02:10 PM
So, I've started to play with this and ran into something interesting. When I process data compressed with LZMA, the job reads twice as much data as I actually have on HDFS.
For example, the Hadoop client (hadoop fs -du) shows something like 100 GB.
Then I run an MR job (like select count(1)) over this data, check the MR counters, and find that "HDFS bytes read" is about twice that (around 200 GB).
With the gzip and bzip2 codecs, the Hadoop client file size and the MR counters match.
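One way to sanity-check the decompression path outside Hadoop is to wrap the compressed input in a stream that counts the bytes actually read and confirm the count equals the on-disk size; a re-reading or double-buffering codec stream would show inflated reads, which is the kind of mismatch the counters suggest. This is a local Python analogue of that check, not the Hadoop codec itself:

```python
import io
import lzma


class CountingReader(io.RawIOBase):
    """Wraps a byte stream and counts how many bytes are actually read."""

    def __init__(self, raw):
        self._raw = raw
        self.bytes_read = 0

    def readable(self):
        return True

    def readinto(self, b):
        n = self._raw.readinto(b)
        if n:
            self.bytes_read += n
        return n


compressed = lzma.compress(b"immutable historical data\n" * 1000)

counter = CountingReader(io.BytesIO(compressed))
with lzma.LZMAFile(counter) as f:
    f.read()  # decompress everything

# A well-behaved reader consumes each compressed byte exactly once,
# so bytes_read should equal the compressed size (not 2x).
print(counter.bytes_read, len(compressed))
```

If the Hadoop-side counter is double the `hadoop fs -du` size while gzip and bzip2 match, the extra reads are most likely coming from the codec's InputStream implementation rather than from the data itself.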