Reducer is running very slow in hbase bulk load
Labels: Apache Hadoop, Apache HBase
Created 11-28-2016 09:44 AM
Please find all details here
Created 11-28-2016 09:40 PM
There are a couple of optimizations you can try (below), but they almost certainly will not reduce a job duration from more than 24 hours down to a few hours. It is likely that your cluster is too small for the amount of processing you are doing. In that case, your best bet is to break your 200 GB data set into smaller chunks and bulk load each one sequentially (or, preferably, add more nodes to your cluster); see the sketch below. Also, be sure that you are not bulk loading while the scheduled major compaction is running.
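If you do go the chunked route, a minimal sketch of loading the pieces one after another might look like the following. The chunk directory layout, column mapping, and table name my_table are assumptions for illustration, not details from your job:
# Assumption: the input has already been split into per-chunk directories on HDFS
for chunk in $(hdfs dfs -ls -C /data/input_chunks); do
  name=$(basename "$chunk")
  # generate HFiles for this chunk only
  hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
    -Dimporttsv.columns=HBASE_ROW_KEY,cf:c1 \
    -Dimporttsv.bulk.output=/tmp/hfiles_"$name" \
    my_table "$chunk"
  # move the generated HFiles into the table before starting the next chunk
  hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /tmp/hfiles_"$name" my_table
done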
Optimizations: in addition to looking at your logs, go to Ambari and see what is maxing out ... memory? CPU?
This link gives a good overview of optimizing HBase loads.
It is not focused on bulk loading specifically, but it still applies here.
Note: for each property mentioned, set it in your importtsv script as
-D<property>=<value> \
One thing that usually helps MapReduce jobs is compressing the map output so it travels across the wire to the reducers faster:
-Dmapred.compress.map.output=true \
-Dmapred.map.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
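Put together, an importtsv invocation with those compression properties set might look like this (the column mapping, bulk output path, table name, and input path are placeholders, not values from your setup):
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf:c1,cf:c2 \
  -Dimporttsv.bulk.output=/tmp/hfiles_out \
  -Dmapred.compress.map.output=true \
  -Dmapred.map.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec \
  my_table /data/input
After the job finishes, load the generated HFiles into the table with LoadIncrementalHFiles (completebulkload) as usual.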
As mentioned, though, it is likely that your cluster is not scaled properly for your workload.
