
HBase slow bulk loading using Hive

Super Collaborator



We are facing performance issues while loading data into HBase (using Hive queries).

The Hive query is quite simple: INSERT INTO TABLE <hive_table_name_targeting_hbase_table> SELECT * FROM <hive_table>


The table "<hive_table_name_targeting_hbase_table>" is a Hive table using the HBaseStorageHandler (so an HBase table provides the storage).

The table "<hive_table>" is a regular Hive table.


There are millions of rows in <hive_table>, and <hive_table_name_targeting_hbase_table> is empty.
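For readers unfamiliar with this setup, such a mapping is typically declared like the following (a minimal sketch with hypothetical table, column, and column-family names, not the actual DDL used here):

```sql
-- Hypothetical Hive table backed by HBase via the HBaseStorageHandler;
-- all names (tables, columns, column family 'cf') are placeholders.
CREATE TABLE hive_table_targeting_hbase (rowkey STRING, value STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf:value')
TBLPROPERTIES ('hbase.table.name' = 'hbase_table');
```

Every INSERT into such a table goes through the regular HBase write path (Puts against the region servers), which is what the mappers below are doing.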


When running the query, we can see that the YARN job generates 177 mappers (more or fewer depending on the data size in <hive_table>). This part is quite normal.


But when I check the execution log of each mapper, I can see that some mappers take A LOT MORE TIME than others.

Some mappers can take up to an hour (whereas the normal time for a mapper is around 10 minutes).


In the log file of the "slow" mappers I can see a lot of retries on HBase operations (and finally some NotServingRegionException errors).

After some time (and a lot of retries) it succeeds, but unfortunately this slows down the processing a lot.


Has anyone already encountered this (while loading an HBase table using Hive queries)?

Could it be related to regions being split during the write? If so, why?

Is there a bug in the HBaseStorageHandler when handling large volumes of data?


Of course, the HBase table is online and can be accessed normally after loading the data, so there is no HBase configuration issue here (at least not a basic one).

HBase major compaction is set to 0 (it is launched manually).


Log sample:

2016-08-08 10:18:25,962 INFO [htable-pool1-t31] org.apache.hadoop.hbase.client.AsyncProcess: #2, table=prd_piste_audit_gsie_traite_001, attempt=13/35 failed=28ops, last exception: null on <a_host>,60020,1467474218569, tracking started null, retrying after=20126ms, replay=28ops
2016-08-08 10:18:46,091 INFO [htable-pool1-t31] org.apache.hadoop.hbase.client.AsyncProcess: #2, table=prd_piste_audit_gsie_traite_001, attempt=14/35 failed=28ops, last exception: org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region prd_piste_audit_gsie_traite_001,15a55dd4-5c6e-41b3-9d2e-304015aae5e9,1470642880612.e8868eaa5ac33c4612632c2c89474ecc. is not online on <a_host>,60020,1467474218569
        at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(
        at org.apache.hadoop.hbase.ipc.RpcExecutor$
 on <a_host>,60020,1467474218569, tracking started null, retrying after=20099ms, replay=28ops
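The NotServingRegionException retries in the log above are the typical signature of regions going briefly offline while they split and are reassigned: an empty target table starts as a single region, so a heavy load triggers repeated splits. A common mitigation, when the key distribution is roughly known, is to pre-split the table before loading. For example, with UUID-like row keys as in the log, hex-prefix split points could be created from the HBase shell (the column family name 'cf' and the split points below are assumptions, not taken from this cluster):

```
create 'prd_piste_audit_gsie_traite_001', 'cf',
  {SPLITS => ['2', '4', '6', '8', 'a', 'c', 'e']}
```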








This problem has been solved!


Super Collaborator

Thank you for this explanation.

This will help me a lot for the next steps.

Super Collaborator

OK, since the default behaviour is inefficient, I searched for a way to make the "bulk load" more efficient.


I think I found a more efficient way, but there seems to be a blocker bug on that (referenced here: )


1- The point is to set the HFile-generation properties before running the insert command:

SET hive.hbase.generatehfiles=true;


2- Then run the insert query, which will prepare HFiles at the designated location (instead of directly loading the HBase table).


3- And only then, perform a bulk load into HBase using the prepared HFiles.

export HADOOP_CLASSPATH=`hbase classpath`
yarn jar /usr/hdp/current/hbase-client/lib/hbase-server.jar completebulkload /<a_path>/<thecolumn_family_name>
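Putting the three steps together, the flow being tested looks roughly like this (a sketch, not the exact commands used here; hfile.family.path is the property that tells HiveHFileOutputFormat where to write, and the path, family name 'cf', and table names are placeholders):

```sql
-- Step 1: generate HFiles instead of writing through the HBase API.
SET hive.hbase.generatehfiles=true;
-- The last path component must match the HBase column-family name.
SET hfile.family.path=/tmp/hfiles/cf;

-- Step 2: the insert now writes HFiles under /tmp/hfiles/cf.
INSERT OVERWRITE TABLE hive_table_targeting_hbase
SELECT * FROM source_hive_table;

-- Step 3 (from a shell): hand the HFiles over to HBase.
--   export HADOOP_CLASSPATH=`hbase classpath`
--   yarn jar .../hbase-server.jar completebulkload /tmp/hfiles <hbase_table>
```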


Problem: the query creating the HFiles fails because it "finds" multiple column families, since it looks at the wrong folder.

I'm doing my tests on CDH 5.7.1.


Has anyone already tested this method? If so, are there some properties I forgot to set?

Or is this really a blocker issue? In that case, I'll raise it with the support.




Super Collaborator

For those interested:

the issue was confirmed by the support, with no workaround until the JIRA ticket listed is fixed.

Super Collaborator

OK, I managed to perform an HBase bulk load using Hive.


There is a wiki article on that:

The procedure described there does not work; I guess it was written for older versions of Hive and HBase.


With some work to adapt the procedure, I managed to load an HBase table using completebulkload.


Here is a working sample:


sudo -u hdfs hdfs dfs -put -f /opt/cloudera/parcels/CDH/lib/hive/lib/hbase-client.jar /user/hive/
sudo -u hdfs hdfs dfs -put -f /opt/cloudera/parcels/CDH/lib/hive/lib/hbase-server.jar /user/hive/
sudo -u hdfs hdfs dfs -put -f /opt/cloudera/parcels/CDH/lib/hive/lib/hbase-common.jar /user/hive/
sudo -u hdfs hdfs dfs -put -f /opt/cloudera/parcels/CDH/lib/hive/lib/hbase-protocol.jar /user/hive/
sudo -u hdfs hdfs dfs -put -f /opt/cloudera/parcels/CDH/lib/hive/lib/hive-hbase-handler.jar /user/hive/

# These JARs need to be added to HiveServer2 with the property hive.aux.jars.path

sudo -u hdfs hdfs dfs -chmod 554 /user/hive/*.jar
sudo -u hdfs hdfs dfs -chown hive:hive /user/hive/*.jar

total=`beeline -n sp35517 -p "" -u "jdbc:hive2://dn060001:10000/default" --outputformat=csv2 --silent=true -e "SELECT count(*) FROM default.operation_client_001;"`
total=`echo $total | cut -d ' ' -f 2- `

hdfs dfs -rm -r /tmp/hb_range_keys
hdfs dfs -mkdir /tmp/hb_range_keys

beeline -n sp35517 -p "" -u "jdbc:hive2://dn060001:10000/default" -e "CREATE EXTERNAL TABLE IF NOT EXISTS default.hb_range_keys(transaction_id_range_start string) row format serde 'org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe' stored as inputformat 'org.apache.hadoop.mapred.TextInputFormat' outputformat 'org.apache.hadoop.hive.ql.io.HiveNullValueSequenceFileOutputFormat' location '/tmp/hb_range_keys';"

beeline -n sp35517 -p "" -u "jdbc:hive2://dn060001:10000/default" -e "add jar /opt/cloudera/parcels/CDH/lib/hive/lib/hive-contrib.jar; create temporary function row_sequence as 'org.apache.hadoop.hive.contrib.udf.UDFRowSequence'; INSERT OVERWRITE TABLE default.hb_range_keys SELECT id FROM ( SELECT id, row_sequence() AS num FROM default.operation_client_001 t ORDER BY id ) a WHERE ( a.num % ( round( ${total} / 12) ) ) = 0;"

hdfs dfs -rm -r /tmp/hb_range_key_list;
hdfs dfs -cp /tmp/hb_range_keys/* /tmp/hb_range_key_list;
hdfs dfs -rm -r /tmp/hbsort;
hdfs dfs -mkdir /tmp/hbsort;

beeline -n sp35517 -p "" -u "jdbc:hive2://dn060001:10000/default" -e "set mapred.reduce.tasks=12; set hive.mapred.partitioner=org.apache.hadoop.mapred.lib.TotalOrderPartitioner; set total.order.partitioner.path=/tmp/hb_range_key_list; set hfile.compression=gz; CREATE TABLE IF NOT EXISTS default.hbsort (id string, id_courtier string, cle_recherche string, cle_recherche_contrat string, nom_sous string, nom_d_usage string, prenom_sous string, date_naissance_sous string, id_contrat string, num_contrat string, produit string, fiscalite string, dt_maj string, souscription timestamp, epargne double, dt_ope_ct timestamp, type_ope_ct string, montant string, frais string, dt_ope_ct_export string, souscription_export string, montant_export string, frais_export string, montant_encours_gbl_ct_export string ) STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.hbase.HiveHFileOutputFormat' TBLPROPERTIES ('hfile.family.path' = '/tmp/hbsort/ti');"

beeline -n sp35517 -p "" -u "jdbc:hive2://dn060001:10000/default" -e "INSERT OVERWRITE TABLE hbsort select t.* from default.operation_client_001 t cluster by t.id;"
sudo -u hdfs hdfs dfs -chgrp -R hbase /tmp/hbsort
sudo -u hdfs hdfs dfs -chmod -R 775 /tmp/hbsort

export HADOOP_CLASSPATH=`hbase classpath`
hadoop jar /opt/cloudera/parcels/CDH/lib/hive/lib/hbase-server.jar completebulkload /tmp/hbsort default_operation_client_001 c
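As a side note, the `round( ${total} / 12 )` sampling in the hb_range_keys query is meant to keep one boundary row per reducer. The arithmetic can be sanity-checked locally with plain shell (the row count below is a made-up placeholder, not the real table size):

```shell
# Local sanity check of the range-key sampling arithmetic (no cluster
# needed): with `total` rows and 12 reducers, keeping every
# (total/12)-th row yields 12 boundary keys.
total=1200000                 # placeholder for the beeline COUNT(*)
reducers=12
step=$(( total / reducers ))  # modulus used in the hb_range_keys query
nkeys=$(( total / step ))     # number of sampled boundary rows
echo "one boundary key every $step rows -> $nkeys split points"
```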