I want to copy data from one HBase table to another, but it is taking a huge amount of time, and meanwhile other jobs in the cluster are failing because of it. I have tried the three techniques below, and the time taken is still very high. Can you suggest a more efficient approach?
1) org.apache.hadoop.hbase.mapreduce.CopyTable
2) org.apache.hadoop.hbase.mapreduce.Export
3) Take a snapshot of the table, then clone the snapshot into another table.
Thanks in advance.
HBase snapshots are the best method for disaster recovery and backup procedures:
snapshot 'sourceTable', 'sourceTable-snapshot'
clone_snapshot 'sourceTable-snapshot', 'newTable'
This is the simplest and best method for this. But you can use the HTable API for backups as well:
HTable API (such as a custom Java application)
As is always the case with Hadoop, you can always write your own custom application that utilizes the public API and queries the table directly. You can do this through MapReduce jobs in order to take advantage of that framework's distributed batch processing, or through any other means of your own design. However, this approach requires a deep understanding of Hadoop development, the APIs involved, and the performance implications of using them in your production cluster.
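As a rough illustration only (this is a sketch, not a production tool: it assumes the HBase client library is on the classpath, uses the modern Connection API rather than the older HTable constructor, and the table names are placeholders taken from the snapshot example above), a minimal single-process copy might look like:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class SimpleTableCopy {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table source = conn.getTable(TableName.valueOf("sourceTable"));
             // BufferedMutator batches Puts client-side instead of one RPC per row
             BufferedMutator dest = conn.getBufferedMutator(TableName.valueOf("newTable"))) {
            Scan scan = new Scan();
            scan.setCaching(500);        // fetch rows from the server in batches
            scan.setCacheBlocks(false);  // full scans shouldn't pollute the block cache
            try (ResultScanner scanner = source.getScanner(scan)) {
                for (Result row : scanner) {
                    Put put = new Put(row.getRow());
                    for (org.apache.hadoop.hbase.Cell cell : row.rawCells()) {
                        put.add(cell);   // carry every cell across unchanged
                    }
                    dest.mutate(put);    // flushed in batches; flushed on close()
                }
            }
        }
    }
}
```

Note that a single-process copy like this scans through one client and will be much slower than a MapReduce-based tool such as CopyTable on a large table; it is mainly useful when you need custom filtering or transformation per row.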
I am sorry Nitin, I don't have much understanding of the HTable API. Could you suggest a different command, please?
Or is it possible to copy only some of the data (say, a few columns or rows) using a snapshot, so that I can copy it in 4-5 iterations?
We can't run a snapshot in multiple iterations, but we can use CopyTable and copy only the data written between one timestamp and another.
CopyTable is a utility that can copy part or all of a table, either to the same cluster or to another cluster. The usage is as follows:
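For example (the timestamps are epoch milliseconds and purely illustrative; run `hbase org.apache.hadoop.hbase.mapreduce.CopyTable --help` on your own cluster for the full option list):

```shell
# Copy only the cells written between the two timestamps from
# sourceTable into newTable, which lets you split the copy into
# several smaller time-range runs instead of one huge job.
hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --starttime=1483228800000 \
  --endtime=1483315200000 \
  --new.name=newTable \
  sourceTable
```

Because each run only touches the given time range, you can schedule several of these during off-peak hours instead of one long job that competes with your other cluster workloads.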