S3A - hadoop fs -ls and distcp work, but hbase backup and ExportSnapshot fail with 403 error

New Contributor

I'm attempting to export snapshots to S3 using `hbase backup` or ExportSnapshot through MapReduce. However, I'm receiving a 403 error after upgrading to version 2.5.3. Here's a copy of the command and output:

hbase backup create full s3a://<bucket>/backup/ <table>

Backup session finished. Status: FAILURE
2017-03-07 10:07:54,331 ERROR [main] util.AbstractHBaseTool: Error running command-line tool
java.net.SocketTimeoutException: callTimeout=86400, callDuration=92363: 
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
        at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4390)
        at org.apache.hadoop.hbase.client.HBaseAdmin.backupTablesAsync(HBaseAdmin.java:2713)
        at org.apache.hadoop.hbase.client.HBaseAdmin.backupTables(HBaseAdmin.java:2749)
        at org.apache.hadoop.hbase.client.HBaseBackupAdmin.backupTables(HBaseBackupAdmin.java:215)
        at org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:197)
        at org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:111)
        at org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:126)
        at org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
        at org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:131)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.nio.file.AccessDeniedException): java.nio.file.AccessDeniedException: s3a://<bucket>/backup/backup_1488910073804/default/<table>: getFileStatus on s3a://<bucket>/backup/backup_1488910073804/default/<table>: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: FFA1E5A50920A1DC)
        at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:107)
        at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:69)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1393)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:107)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1443)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.exists(S3AFileSystem.java:1874)
        at org.apache.hadoop.hbase.master.HMaster.backupTables(HMaster.java:2725)
        at org.apache.hadoop.hbase.master.MasterRpcServices.backupTables(MasterRpcServices.java:1120)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:57278)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
        at java.lang.Thread.run(Thread.java:745)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: FFA1E5A50920A1DC)
        at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1182)
        at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:770)
        at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
        at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1050)
        at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1027)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:850)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1378)
        ... 11 more


        at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1225)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.backupTables(MasterProtos.java:60614)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.backupTables(ConnectionManager.java:1742)
        at org.apache.hadoop.hbase.client.HBaseAdmin$29.call(HBaseAdmin.java:2720)
        at org.apache.hadoop.hbase.client.HBaseAdmin$29.call(HBaseAdmin.java:2714)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
        ... 10 more

Using ExportSnapshot produces the same errors. I've tried supplying credentials in the URL and through environment variables to authenticate.
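
For reference, the two credential approaches mentioned above look roughly like this (placeholder keys, shown only for illustration):

# Keys embedded in the s3a URI:
hbase backup create full s3a://ACCESS_KEY:SECRET_KEY@<bucket>/backup/ <table>

# Keys exported as environment variables before running the command:
export AWS_ACCESS_KEY_ID=ACCESS_KEY
export AWS_SECRET_ACCESS_KEY=SECRET_KEY
hbase backup create full s3a://<bucket>/backup/ <table>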

Reading from and writing to the same bucket works using hadoop fs and distcp commands. I haven't been able to track down any recent reports of consistent 403s from either hbase backup or ExportSnapshot. Has anyone else run into a similar situation?

9 REPLIES

Super Guru
@Rob M

If you are exporting snapshots to S3, why not try the following:

bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot MySnapshot -copy-to s3://bucket/hbase -mappers <number of mappers>
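
If you are passing keys explicitly, the fs.s3a.* properties can also be supplied on the command line; something along these lines (placeholder values; ExportSnapshot should accept generic -D options since it runs as a standard Hadoop Tool):

bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
    -D fs.s3a.access.key=ACCESS_KEY \
    -D fs.s3a.secret.key=SECRET_KEY \
    -snapshot MySnapshot \
    -copy-to s3a://<bucket>/hbase \
    -mappers 16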

New Contributor

Hey @mqureshi, thanks for the quick response. I was originally running our backups through ExportSnapshot, but I ran into the same 403 error after the upgrade to 2.5.3, which got me to start poking around. I found the hbase backup command while searching, and I've been testing with both hbase backup and ExportSnapshot after each change.

Super Guru

@Rob M

Have you verified that your access key and secret key are specified correctly, and that the path s3a://<bucket>/backup/backup_1488910073804/default/<table> exists (no typo)?
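
One quick way to check both at once is to list the same prefix with the same keys outside of HBase, for example (placeholder keys; assuming the fs.s3a.access.key / fs.s3a.secret.key properties are the ones in use):

hadoop fs -D fs.s3a.access.key=ACCESS_KEY -D fs.s3a.secret.key=SECRET_KEY \
    -ls s3a://<bucket>/backup/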

New Contributor

Was anyone ever able to figure out a way around this?

New Contributor

I had the same issue as you @Rob M (authentication not working for hbase backup).

My workaround was quite simple: use AWS IAM roles to grant access to the S3 bucket instead. The AWS Java SDK picks up these credentials automatically, so there's no need to specify anything in URLs or environment variables.
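
If you go the IAM route, one way to confirm the instance actually sees role credentials is to query the EC2 instance metadata endpoint; it should list the attached role name:

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/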

Hi @Erik Naslund,

I tried the same, but it didn't work for me. Do we need to reboot the EC2 instance after adding the role?

New Contributor

I actually added the grant to an existing role that the instance already had, and no reboot was required. It seems that even if you attach a new IAM role to an already running instance, there's still no need for a reboot.
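
For what it's worth, attaching an instance profile to a running instance can be done from the AWS CLI roughly like this (instance id and profile name are placeholders):

aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=my-hbase-backup-profile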

Are you using the s3a:// scheme? Different schemes (s3, s3a, s3n) work a bit differently when it comes to authentication.

Did you ever manage to fix this issue? I have the exact same problem.
