Reducer is running very slowly in HBase bulk load

Rising Star
1 ACCEPTED SOLUTION

Guru

There are a couple of optimizations you can try (below), but they almost certainly will not reduce a job duration from more than 24 hours to a few hours. It is likely that your cluster is too small for the amount of processing you are doing. In that case, your best bet is to break your 200 GB data set into smaller chunks and bulk load each sequentially (or, preferably, add more nodes to your cluster). Also, be sure that you are not bulk loading while the scheduled major compaction is running.
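If you go the chunking route, a minimal sketch of sequential bulk loads could look like the following. It assumes the input TSV has already been split into per-chunk HDFS directories; the table name, column mapping, and all paths are hypothetical placeholders, not taken from your setup:

# Sequentially bulk load each pre-split chunk of the TSV input
for chunk in /data/tsv/chunk01 /data/tsv/chunk02 /data/tsv/chunk03; do
  name=$(basename "$chunk")
  # Step 1: run ImportTsv against just this chunk, writing HFiles instead of live puts
  hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
    -Dimporttsv.columns=HBASE_ROW_KEY,cf:col1 \
    -Dimporttsv.bulk.output=/tmp/hfiles/"$name" \
    my_table "$chunk"
  # Step 2: load the generated HFiles into the table before starting the next chunk
  hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /tmp/hfiles/"$name" my_table
done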

Optimizations: in addition to looking at your logs, check Ambari to see what is maxing out during the job: memory? CPU?

This link gives a good overview of optimizing HBase loads:

https://www.ibm.com/support/knowledgecenter/SSPT3X_3.0.0/com.ibm.swg.im.infosphere.biginsights.analy...

It is not focused on bulk loading specifically, but much of it still applies.

Note: for each property mentioned, set it in your importtsv script as

-D<property>=<value> \

One thing that usually helps MapReduce jobs is compressing the map output so it travels across the wire to the reducers faster:

-Dmapred.compress.map.output=true \

-Dmapred.map.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
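Plugged into an ImportTsv call, those two flags sit alongside the other -D options. A sketch, with the table name, column mapping, and paths again being placeholders:

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf:col1 \
  -Dimporttsv.bulk.output=/tmp/hfiles/my_table \
  -Dmapred.compress.map.output=true \
  -Dmapred.map.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
  my_table /data/tsv/chunk01

On newer Hadoop releases the non-deprecated property names are mapreduce.map.output.compress and mapreduce.map.output.compress.codec; the mapred.* names above still work but log deprecation warnings.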

As mentioned though, it is likely that your cluster is not scaled properly for your workload.
