
HBase restore using the backup ID from S3 throws an error


Hi,

Using the hbase full backup command I have successfully taken a backup and stored it in one of my test buckets in AWS S3. In the S3 bucket the backup name is stored in the format 'PRE backup_x1x2x3x'. Below is the backup command I ran.

hbase backup create full s3a://$AccessKey:$SecretKey@$BucketPath -set setname

Now, while doing the restore using the command below:

hbase restore -set setname s3a://$AccessKey:$SecretKey@$BucketPath PRE backup_x1x2x3x -overwrite

I get the following error:

java.io.IOException: Could not find backup manifest .backup.manifest for PRE backup_x1x2x3x in s3a://AcessKey:SecretKey@BucketPath. Did PRE backup_x1x2x3x correspond to previously taken backup ?
        at org.apache.hadoop.hbase.backup.HBackupFileSystem.getManifestPath(HBackupFileSystem.java:111)
        at org.apache.hadoop.hbase.backup.HBackupFileSystem.getManifest(HBackupFileSystem.java:119)
        at org.apache.hadoop.hbase.backup.HBackupFileSystem.checkImageManifestExist(HBackupFileSystem.java:134)
        at org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:95)
        at org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:158)
        at org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:187)
        at org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
        at org.apache.hadoop.hbase.backup.RestoreDriver.main(RestoreDriver.java:192)

Can anybody please help me out with this? @Jay Kumar SenSharma

I followed the steps above from this link:

https://hortonworks.com/blog/coming-hdp-2-5-incremental-backup-restore-apache-hbase-apache-phoenix/

Thanks.

1 ACCEPTED SOLUTION


1. Please avoid putting secrets in your paths; they invariably end up in a log somewhere. Set the options fs.s3a.access.key and fs.s3a.secret.key instead.

2. Try backing up to a subdirectory. Root directories are "odd".

3. What happens when you run hadoop fs -ls s3a://bucket/path-to-backup? That should show whether the files are there.
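For point 1, one option is to put the keys in core-site.xml rather than the URI (a sketch with placeholder values; the property names are the standard Hadoop S3A ones):

```xml
<!-- core-site.xml: keep S3A credentials out of URIs and logs.
     YOUR_ACCESS_KEY / YOUR_SECRET_KEY are placeholders. -->
<property>
  <name>fs.s3a.access.key</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>YOUR_SECRET_KEY</value>
</property>
```

With the keys configured there, the backup destination becomes simply s3a://bucket/path, with no credentials embedded in the command line.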


5 REPLIES



Hi, thanks for the reply @stevel.

1. I have added fs.s3a.access.key and fs.s3a.secret.key to my config file.

2. Just for testing purposes I tried with the root directory.

3. When I execute

hadoop fs -ls s3a://bucket/path-to-backup

I can see the backup was created.

@stevel

@Timothy Spann

The restore command works fine after I removed the slash '/' at the end of my bucket path, and the restore is successful. But I don't see any data in my SYSTEM tables. The user-created tables are restored properly with data as expected, but the SYSTEM tables don't show any records.

Am I doing something wrong?

Master Guru

Does your user have permissions?

See also: https://community.hortonworks.com/questions/47197/phoenix-backup.html

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_data-access/content/ch_hbase_bar.html

You have to specify table names, so specify the SYSTEM ones as well.
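For instance, a restore that names the Phoenix SYSTEM tables alongside the user tables might look like this (a sketch only: the bucket path, backup ID, and table list are placeholders, and the exact option names should be checked against the HDP docs linked above):

```shell
# Sketch: restore user tables plus the Phoenix SYSTEM tables explicitly.
# s3a://my-bucket/hbase-backups, backup_x1x2x3x, and MYTABLE are placeholders.
hbase restore s3a://my-bucket/hbase-backups backup_x1x2x3x \
  -t "SYSTEM.CATALOG,SYSTEM.SEQUENCE,SYSTEM.STATS,MYTABLE" -overwrite
```

If the SYSTEM tables are not listed, only the named user tables come back, which would match the empty SYSTEM tables you are seeing.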

Creating and Maintaining a Complete Backup Image

The first step in running the backup-and-restore utilities is to perform a full backup and to store the data in a separate image from the source. At a minimum, you must do this to get a baseline before you can rely on incremental backups.
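As a sketch of that workflow (destination path and set name are placeholders; the incremental form follows the HDP 2.5 blog linked earlier in this thread):

```shell
# Sketch: take a full backup as the baseline, then run incrementals
# against the same backup root. Paths and set name are placeholders.
hbase backup create full s3a://my-bucket/hbase-backups -set setname
hbase backup create incremental s3a://my-bucket/hbase-backups -set setname
```

The incremental run only captures changes since the last backup, which is why the full baseline must exist first.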

Tip: Record the backup ID that appears at the end of a successful backup. In case the source cluster fails and you need to recover the dataset with a restore operation, having the backup ID readily available can save time.