
There are several options available for copying an HBase table from one cluster to another. The easiest and most reliable approach is to use HBase snapshots to transfer the data.


Note: All actions must be performed as the hbase user to ensure correct permissions.

  1. On source cluster:
    #hbase shell> snapshot 'Test_Table', 'Test_Table_SS'
    #hbase org.apache.hadoop.hbase.snapshot.SnapshotInfo -snapshot 'Test_Table_SS' -files -stats
    #hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot 'Test_Table_SS' -copy-to hdfs://<Destination_NN_hostname>:8020/hbase -mappers 16 -bandwidth 200
  2. On destination cluster:
    #hbase org.apache.hadoop.hbase.snapshot.SnapshotInfo -snapshot 'Test_Table_SS' -files -stats
    #hbase shell> clone_snapshot 'Test_Table_SS', 'Test_Table'
    #hbase shell> major_compact 'Test_Table'
  3. Once done, you can delete the snapshot on both the source and destination clusters:
    #hbase shell> delete_snapshot 'Test_Table_SS'
  4. If you plan to use CopyTable instead, note that it will not work without additional configuration.
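Steps 1–3 above can be sketched as a single script. This is only an illustrative sketch, not part of the original article: the table and snapshot names are the article's examples, the destination NameNode hostname is an assumed placeholder, and by default the script only prints the commands it would run (set DRY_RUN=0 on a real cluster, as the hbase user).

```shell
#!/usr/bin/env bash
# Sketch of the snapshot-based copy in steps 1-3 above.
# DRY_RUN=1 (the default) only prints each command instead of executing it.
set -u

TABLE="Test_Table"
SNAPSHOT="Test_Table_SS"
DEST_NN="destination-nn.example.com"   # assumed placeholder for <Destination_NN_hostname>
DRY_RUN="${DRY_RUN:-1}"

run() {
  # Print the command in dry-run mode; otherwise execute it.
  if [ "$DRY_RUN" = "1" ]; then
    echo "WOULD RUN: $*"
  else
    eval "$*"
  fi
}

# --- On the source cluster ---
run "echo \"snapshot '$TABLE', '$SNAPSHOT'\" | hbase shell"
run "hbase org.apache.hadoop.hbase.snapshot.SnapshotInfo -snapshot $SNAPSHOT -files -stats"
run "hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot $SNAPSHOT -copy-to hdfs://$DEST_NN:8020/hbase -mappers 16 -bandwidth 200"

# --- On the destination cluster ---
run "hbase org.apache.hadoop.hbase.snapshot.SnapshotInfo -snapshot $SNAPSHOT -files -stats"
run "echo \"clone_snapshot '$SNAPSHOT', '$TABLE'\" | hbase shell"
run "echo \"major_compact '$TABLE'\" | hbase shell"
```

In practice you would run the source-cluster half on a source node and the destination-cluster half on a destination node, rather than both halves in one script.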

Communication between an older client and a newer server is not guaranteed. There is currently a workaround that allows this to work: add the following property to your client configuration.

On the client used to launch CopyTable, you can do either of the following:

Command Line:
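The specific property name is not reproduced in this section, so it is left as a placeholder below. For reference only, client-side properties can be passed to CopyTable on the command line with the generic `-D` option; the `--peer.adr` ZooKeeper quorum is likewise a placeholder:

```
hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  -D<property.name>=<value> \
  --peer.adr=<dest-zk-quorum>:2181:/hbase \
  Test_Table
```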
