
HBase incremental backup to remote HDFS

New Contributor

Hi all!

I'm trying to create HBase backups on another, remote HDFS cluster.

The full backup completes successfully, but the incremental backup fails:

[hbase@dedrain-workstation dedrain]$ hbase backup create full hdfs://172.25.10.22:8020/backup_fr/10052018
2018-05-10 12:49:37,850 INFO  [main] util.BackupClientUtil: Backup root dir hdfs://172.25.10.22:8020/backup_fr/10052018 does not exist. Will be created.
Backup session backup_1525945778016 finished. Status: SUCCESS
[hbase@dedrain-workstation dedrain]$ hbase backup create incremental hdfs://172.25.10.22:8020/backup_fr/10052018
2018-05-10 12:51:00,116 INFO  [main] util.BackupClientUtil: Using existing backup root dir: hdfs://172.25.10.22:8020/backup_fr/10052018
Backup session finished. Status: FAILURE
2018-05-10 12:52:48,772 ERROR [main] util.AbstractHBaseTool: Error running command-line tool
org.apache.hadoop.ipc.RemoteException(java.lang.IllegalArgumentException): Wrong FS: hdfs://172.25.10.22:8020/backup_fr/10052018/.tmp/backup_1525945860331, expected: hdfs://dedrain-workstation:8020
	at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:666)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:214)
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:816)
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:812)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:823)
	at org.apache.hadoop.hbase.backup.master.IncrementalTableBackupProcedure.deleteBulkLoadDirectory(IncrementalTableBackupProcedure.java:487)
	at org.apache.hadoop.hbase.backup.master.IncrementalTableBackupProcedure.incrementalCopyHFiles(IncrementalTableBackupProcedure.java:478)
	at org.apache.hadoop.hbase.backup.master.IncrementalTableBackupProcedure.executeFromState(IncrementalTableBackupProcedure.java:287)
	at org.apache.hadoop.hbase.backup.master.IncrementalTableBackupProcedure.executeFromState(IncrementalTableBackupProcedure.java:71)
	at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:107)
	at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:500)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1086)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:888)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:841)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$200(ProcedureExecutor.java:77)
	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$1.run(ProcedureExecutor.java:443)

dedrain-workstation and 172.25.10.22 are different hosts on different clusters.

What am I doing wrong?

2 REPLIES

Contributor

Can you try using the hostname instead of the IP in the HDFS path? Also make sure the user has permissions on that HDFS folder to create new folders and write data to it. For example (see the commands below).
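
Assuming the remote NameNode at 172.25.10.22 resolves to a hostname such as backup-namenode (substitute the real hostname of your destination cluster), the commands would become:

hbase backup create full hdfs://backup-namenode:8020/backup_fr/10052018
hbase backup create incremental hdfs://backup-namenode:8020/backup_fr/10052018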

New Contributor

The directory was already created by the full backup a couple of lines above. Remote-debugging the hbase-release build from the Hortonworks GitHub repo shows that org.apache.hadoop.fs.FileSystem.checkPath compares the authority components of the URIs: it takes one authority from the path passed in as an argument and the other from the HBase Master configuration, which gives "dedrain-workstation:8020" and "172.25.10.22:8020". Obviously, the comparison fails.
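
For anyone following along, here is a minimal, simplified sketch of that check (not the actual Hadoop source, which also normalizes schemes and default ports); it only illustrates why the IP-based authority in the backup path never matches the authority the Master's FileSystem was built from:

// Simplified illustration of the authority comparison done by
// org.apache.hadoop.fs.FileSystem.checkPath (not the exact Hadoop code).
import java.net.URI;

public class WrongFsCheck {
    // fsUri comes from the FileSystem's own configuration (fs.defaultFS on the
    // HBase Master); pathUri comes from the path the backup procedure passes in.
    static void checkPath(URI fsUri, URI pathUri) {
        String thisAuthority = fsUri.getAuthority();   // e.g. "dedrain-workstation:8020"
        String thatAuthority = pathUri.getAuthority(); // e.g. "172.25.10.22:8020"
        if (thatAuthority != null && !thatAuthority.equalsIgnoreCase(thisAuthority)) {
            throw new IllegalArgumentException(
                "Wrong FS: " + pathUri + ", expected: " + fsUri);
        }
    }

    public static void main(String[] args) {
        URI localFs = URI.create("hdfs://dedrain-workstation:8020");
        URI backupTmp = URI.create(
            "hdfs://172.25.10.22:8020/backup_fr/10052018/.tmp/backup_1525945860331");
        // Throws: Wrong FS: hdfs://172.25.10.22:8020/..., expected: hdfs://dedrain-workstation:8020
        checkPath(localFs, backupTmp);
    }
}

If the incremental procedure really deletes the .tmp directory through a FileSystem obtained from the Master's own configuration (as the deleteBulkLoadDirectory frame in the stack trace suggests), the check would reject any remote authority, hostname or IP alike.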
