
HBase slow bulk loading using Hive

Super Collaborator

Hi,

 

We are facing a performance issue while loading data into HBase (using Hive queries).

The Hive query is quite simple: INSERT INTO TABLE <hive_table_name_targeting_hbase_table> SELECT * FROM <hive_table>

 

The table "<hive_table_name_targeting_hbase_table>" is an Hive table using the HBaseStorageHandler (So there is a Hbase table as the storage).

The table "<hive_table>" is a regular Hive table.

 

There are millions of rows in <hive_table>, and <hive_table_name_targeting_hbase_table> is empty.

 

When running the query, we can see that the YARN job generates 177 mappers (more or less depending on the data size in <hive_table>). This part is quite normal.

 

But when I check the execution logs of the mappers, I can see that some mappers take A LOT more time than others.

Some mappers can take up to an hour (whereas the normal time for a mapper is around 10 minutes).

 

In the log files of the "slow" mappers I can see a lot of retries on HBase operations (and finally some exceptions about NotServingRegionException).

After some time (and a lot of retries) it is OK. But unfortunately, this slows down the processing a lot.

 

Has anyone already encountered this (while loading an HBase table using Hive queries)?

Could it be related to regions being split during the write? If so, why?

Is there a bug in the HBaseStorageHandler when handling this much data?

 

Of course, the HBase table is online and can be accessed normally after loading the data, so there is no HBase configuration issue here (at least not a basic one).

Automatic HBase compaction is set to 0 (compactions are launched manually).

 

Log sample:

2016-08-08 10:18:25,962 INFO [htable-pool1-t31] org.apache.hadoop.hbase.client.AsyncProcess: #2, table=prd_piste_audit_gsie_traite_001, attempt=13/35 failed=28ops, last exception: null on <a_host>,60020,1467474218569, tracking started null, retrying after=20126ms, replay=28ops
2016-08-08 10:18:46,091 INFO [htable-pool1-t31] org.apache.hadoop.hbase.client.AsyncProcess: #2, table=prd_piste_audit_gsie_traite_001, attempt=14/35 failed=28ops, last exception: org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region prd_piste_audit_gsie_traite_001,15a55dd4-5c6e-41b3-9d2e-304015aae5e9,1470642880612.e8868eaa5ac33c4612632c2c89474ecc. is not online on <a_host>,60020,1467474218569
        at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2786)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:922)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:1893)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32213)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2034)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
        at java.lang.Thread.run(Thread.java:745)
 on <a_host>,60020,1467474218569, tracking started null, retrying after=20099ms, replay=28ops

5 REPLIES

Mentor

> Could it be related to regions being split during the write? If so, why?

 

This is the right question to ask. The NotServingRegionException occurs when the region for a given key is not online yet on a particular RegionServer (RS). Note that clients reach a specific RS by first querying meta, and if the sought RS claims it is not serving the region, it can mean one of two things: (1) the region was recently moved by the HBase balancer, or (2) the region has been split and there are now two regions in its place.

 

In either case, the client treats the error as a cue to re-fetch the meta content and retry based on the newer information present in it, which leads it to the new location of the sought row key.

 

Given that your workload is inserts and you only see this happen when you insert, it is likely your problem is (2). It is easy to prove by simply measuring how many regions the table has at the beginning (before the query runs) and how many there are at the end (after the query completes or stalls), via the HMaster's table UI or its equivalent page in CM under HBase -> Tables.
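For example, one rough way to check the region count from the command line (a sketch only; the table name below is just taken from your log, and the row count reported by the scan equals the number of regions):

hbase shell
scan 'hbase:meta', {COLUMNS => ['info:regioninfo'], FILTER => "PrefixFilter('prd_piste_audit_gsie_traite_001,')"}

Run it once before the INSERT and once after; if the number of rows returned has grown, the table is splitting during the load.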

 

Splitting is a good thing for scalability, but it can certainly hamper your insert rate, because while a split is occurring all writes to that region are temporarily rejected (usually only for a short period of time), and there is also an eventual major compaction that needs to occur later to truly split the parent region's single data store into two stores on HDFS (which adds additional I/O expense).

 

If your observation of the numbers shows that splits are indeed occurring, you can prevent them by reviewing and raising the split-size attribute on the table, giving you fewer but larger regions. Splits are usually triggered when a size bound is hit. If your table is not pre-split, then consider pre-splitting it. The HBase documentation covers best practices around this: http://archive.cloudera.com/cdh5/cdh/5/hbase/book.html#manual_region_splitting_decisions (and other topics).
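For instance, from the HBase shell (a sketch only; the size value and the new table/family names below are arbitrary examples, not recommendations):

# raise the split threshold on the existing table (value in bytes, here ~20 GB)
alter 'prd_piste_audit_gsie_traite_001', MAX_FILESIZE => '21474836480'

# or, for a new table, pre-split it into 16 regions up front
create 'my_new_table', 'cf', {NUMREGIONS => 16, SPLITALGO => 'HexStringSplit'}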

Super Collaborator

Thank you for this explanation.

This will help me a lot for the next steps.

Super Collaborator

OK, since the default behaviour is inefficient, I have searched for a way to make the "bulk load" more efficient.

 

I think I found a more efficient way, but there seems to be a blocker bug in it (referenced here: https://issues.apache.org/jira/browse/HIVE-13539).

 

1- The point is to set these two properties before running the insert command:

SET hive.hbase.generatehfiles=true;
SET hfile.family.path=/<a_path>/<thecolumn_family_name>;

 

2- Then run the insert query, which will prepare HFiles at the designated location (instead of loading the HBase table directly).

 

3- And only then, perform a bulk load into HBase using the prepared HFiles.

export HADOOP_CLASSPATH=`hbase classpath`
yarn jar /usr/hdp/current/hbase-client/lib/hbase-server.jar completebulkload /<a_path>/<thecolumn_family_name>

 

The problem: the query creating the HFiles fails because it "finds" multiple column families, as it looks at the wrong folder.

I'm doing my tests on CDH 5.7.1.

 

Has anyone already tested this method? If so, are there some properties I have forgotten to set?

Or is this really a blocker issue? If so, I'll raise it with support.

 

regards,

mathieu

Super Collaborator

For those interested:

The issue was confirmed by support, with no workaround until the JIRA ticket listed above is fixed.

Super Collaborator

OK, I managed to perform an HBase bulk load using Hive.

 

There is a wiki article on that : https://cwiki.apache.org/confluence/display/Hive/HBaseBulkLoad

The procedure described there does not work as-is. I guess it was made for older versions of Hive and HBase.

 

With some work to adapt the procedure, I managed to load an HBase table using completebulkload.

 

Here is a working sample on that matter:

 

sudo -u hdfs hdfs dfs -put -f /opt/cloudera/parcels/CDH/lib/hive/lib/hbase-client.jar /user/hive/
sudo -u hdfs hdfs dfs -put -f /opt/cloudera/parcels/CDH/lib/hive/lib/hbase-server.jar /user/hive/
sudo -u hdfs hdfs dfs -put -f /opt/cloudera/parcels/CDH/lib/hive/lib/hbase-common.jar /user/hive/
sudo -u hdfs hdfs dfs -put -f /opt/cloudera/parcels/CDH/lib/hive/lib/hbase-protocol.jar /user/hive/
sudo -u hdfs hdfs dfs -put -f /opt/cloudera/parcels/CDH/lib/hive/lib/hive-hbase-handler.jar /user/hive/

# These JARs need to be added to HiveServer2 with the property hive.aux.jars.path

sudo -u hdfs hdfs dfs -chmod 554 /user/hive/*.jar
sudo -u hdfs hdfs dfs -chown hive:hive /user/hive/*.jar

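# count the rows in the source table; the total is used below to pick evenly spaced split keys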
total=`beeline -n sp35517 -p "" -u "jdbc:hive2://dn060001:10000/default" --outputformat=csv2 --silent=true -e "SELECT count(*) FROM default.operation_client_001;"`
total=`echo $total | cut -d ' ' -f 2- `

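# (re)create the HDFS directory that will hold the sampled split keys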
hdfs dfs -rm -r /tmp/hb_range_keys
hdfs dfs -mkdir /tmp/hb_range_keys

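# external table over /tmp/hb_range_keys, written as a sequence file suitable for TotalOrderPartitioner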
beeline -n sp35517 -p "" -u "jdbc:hive2://dn060001:10000/default" -e "CREATE EXTERNAL TABLE IF NOT EXISTS default.hb_range_keys(transaction_id_range_start string) row format serde 'org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe' stored as inputformat 'org.apache.hadoop.mapred.TextInputFormat' outputformat 'org.apache.hadoop.hive.ql.io.HiveNullValueSequenceFileOutputFormat' location '/tmp/hb_range_keys';"

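# pick roughly every (total/12)th id as a split key, targeting 12 reducers / 12 key ranges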
beeline -n sp35517 -p "" -u "jdbc:hive2://dn060001:10000/default" -e "add jar /opt/cloudera/parcels/CDH/lib/hive/lib/hive-contrib.jar; create temporary function row_sequence as 'org.apache.hadoop.hive.contrib.udf.UDFRowSequence'; INSERT OVERWRITE TABLE default.hb_range_keys SELECT a.id FROM ( SELECT row_sequence() as num, t.id FROM default.operation_client_001 t order by t.id) a WHERE ( a.num % ( round( ${total} / 12) ) ) = 0;"

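# TotalOrderPartitioner expects a single partition file, so copy the key file to its own path, and reset the HFile staging dir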
hdfs dfs -rm -r /tmp/hb_range_key_list;
hdfs dfs -cp /tmp/hb_range_keys/* /tmp/hb_range_key_list;
hdfs dfs -rm -r /tmp/hbsort;
hdfs dfs -mkdir /tmp/hbsort;

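# staging table whose HiveHFileOutputFormat writes HFiles under the directory given by 'hfile.family.path'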
beeline -n sp35517 -p "" -u "jdbc:hive2://dn060001:10000/default" -e "set mapred.reduce.tasks=12; set hive.mapred.partitioner=org.apache.hadoop.mapred.lib.TotalOrderPartitioner; set total.order.partitioner.path=/tmp/hb_range_key_list; set hfile.compression=gz; CREATE TABLE IF NOT EXISTS default.hbsort (id string, id_courtier string, cle_recherche string, cle_recherche_contrat string, nom_sous string, nom_d_usage string, prenom_sous string, date_naissance_sous string, id_contrat string, num_contrat string, produit string, fiscalite string, dt_maj string, souscription timestamp, epargne double, dt_ope_ct timestamp, type_ope_ct string, montant string, frais string, dt_ope_ct_export string, souscription_export string, montant_export string, frais_export string, montant_encours_gbl_ct_export string ) STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.hbase.HiveHFileOutputFormat' TBLPROPERTIES ('hfile.family.path' = '/tmp/hbsort/ti');"

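# populate the staging table sorted by row key (CLUSTER BY id) so the HFiles are written in key order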
beeline -n sp35517 -p "" -u "jdbc:hive2://dn060001:10000/default" -e "INSERT OVERWRITE TABLE hbsort select t.* from default.operation_client_001 t cluster by t.id;"
sudo -u hdfs hdfs dfs -chgrp -R hbase /tmp/hbsort
sudo -u hdfs hdfs dfs -chmod -R 775 /tmp/hbsort

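# finally, hand the generated HFiles over to HBase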
export HADOOP_CLASSPATH=`hbase classpath`
hadoop jar /opt/cloudera/parcels/CDH/lib/hive/lib/hbase-server.jar completebulkload /tmp/hbsort default_operation_client_001 c