Member since: 09-12-2016
Posts: 39
Kudos Received: 45
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2616 | 09-20-2016 12:17 PM
 | 18815 | 09-19-2016 11:18 AM
 | 2041 | 09-15-2016 09:54 AM
 | 3681 | 09-15-2016 07:39 AM
09-20-2016
10:03 AM
Are you copying data on the same cluster or a different cluster?
09-20-2016
10:02 AM
This is the simplest and best method for this. But you can use the HTable API (e.g., from a custom Java application) for backup as well. As is always the case with Hadoop, you can write your own custom application that uses the public API and queries the table directly. You can do this through MapReduce jobs, to take advantage of that framework's distributed batch processing, or through any other means of your own design. However, this approach requires a deep understanding of Hadoop development, the APIs involved, and the performance implications of using them on your production cluster.
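For illustration, here is a minimal sketch of what such a custom client could look like against the HBase 1.x Connection/Table API. The class name and the table name "sourceTable" are placeholders, and printing each row to stdout stands in for whatever your real backup target would be (an HDFS file, another table, etc.):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class TableBackupSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // picks up hbase-site.xml from the classpath
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("sourceTable"))) {
            Scan scan = new Scan();
            scan.setCaching(500);       // fetch rows in batches to cut down on RPCs
            scan.setCacheBlocks(false); // a full scan should not churn the RegionServer block cache
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result row : scanner) {
                    // Write each row out to your backup target; stdout is just a stand-in here.
                    System.out.println(row);
                }
            }
        }
    }
}
```

A full scan like this still puts read load on the RegionServers, which is exactly the performance implication mentioned above.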
09-20-2016
09:31 AM
4 Kudos
@Dheeraj, an HBase snapshot is the best method for a disaster backup and recovery procedure:

snapshot 'sourceTable', 'sourceTable-snapshot'
clone_snapshot 'sourceTable-snapshot', 'newTable'
09-20-2016
07:26 AM
2 Kudos
@muthyalapaa, you can try increasing the value of "mapreduce.map.memory.mb". I can't say for sure whether this will solve your problem.
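For example, if you drive the job from a Java driver, this is roughly where the setting goes. This is a sketch: "4096" and the job name are example values, and mapreduce.map.java.opts is conventionally kept around 80% of the container size:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class MemoryConfigSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("mapreduce.map.memory.mb", "4096");      // YARN container size per map task (example value)
        conf.set("mapreduce.map.java.opts", "-Xmx3276m"); // JVM heap, kept below the container size
        Job job = Job.getInstance(conf, "example-job");
        // ... set mapper/reducer, input/output paths, then job.waitForCompletion(true)
    }
}
```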
09-19-2016
01:04 PM
2 Kudos
@AMIT, before using any of these methods, please take a backup of the destination cluster's table using the snapshot method, so that your data on the DESTINATION cluster will not be lost. On the destination cluster:

=> hbase shell
=> snapshot "DEST_TABLE_NAME", "SNAPSHOT_DEST_TABLE_NAME"

After you are done, you can revert it back with:

=> hbase shell
=> disable "DEST_TABLE_NAME"
=> restore_snapshot "SNAPSHOT_DEST_TABLE_NAME"
09-19-2016
11:18 AM
5 Kudos
@ARUN, both methods, "CopyTable" and "Import/Export of table", are good for this, but they will degrade RegionServer performance while copying. I would prefer the "Snapshot" method for backup and recovery.

Note: the snapshot method will only work if both clusters are on the same version of HBase; I have tried it. If your clusters are on different HBase versions, you can use the CopyTable method.

Snapshot method: go to the hbase shell and take a snapshot of the table:

=> hbase shell
=> snapshot "SOURCE_TABLE_NAME", "SNAPSHOT_TABLE_NAME"

Then you can export that snapshot to the other cluster:

=> bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot SNAPSHOT_TABLE_NAME -copy-to hdfs://DESTINATION_CLUSTER_ACTIVE_NAMENODE_ADDRESS:8020/hbase -mappers 16

After this you can restore the table on the DESTINATION cluster:

=> hbase shell
=> disable "DEST_TABLENAME"
=> restore_snapshot "SNAPSHOT_TABLE_NAME"

Done; your table will be copied.
09-15-2016
10:07 AM
1 Kudo
Add these configurations to the command as well:

-D mapreduce.output.fileoutputformat.compress=true
-D mapreduce.output.fileoutputformat.compress.type=BLOCK
-D mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
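If you launch the job from a Java driver instead of passing -D flags, the equivalent settings look roughly like this (a sketch; it assumes SequenceFile output, since the compress.type setting only applies there):

```java
import org.apache.hadoop.io.SequenceFile.CompressionType;
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class CompressionConfigSketch {
    public static void configure(Job job) {
        FileOutputFormat.setCompressOutput(job, true);                     // ...compress=true
        FileOutputFormat.setOutputCompressorClass(job, SnappyCodec.class); // ...compress.codec=SnappyCodec
        SequenceFileOutputFormat.setOutputCompressionType(job, CompressionType.BLOCK); // ...compress.type=BLOCK
    }
}
```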
09-15-2016
09:54 AM
2 Kudos
@Arkaprova, please add

--compress --compression-codec org.apache.hadoop.io.compress.SnappyCodec

to the command; you will get the result in the proper format.
09-15-2016
09:48 AM
If this solved your question, please accept the answer; that will close this issue.
09-15-2016
08:08 AM
Install the slave component on the slave node as well.