
hbase backup incremental on remote hdfs

New Contributor

First I do a full backup with this command:

hbase backup create full hdfs://<hostname>:8020/apps/backup/hbase -all

No problem

When I do an incremental backup

hbase backup create incremental hdfs://<hostname>:8020/apps/backup/hbase -all

I get this error:

Wrong FS: hdfs://<hostname>:8020/apps/backup/backup/backup_1542105235709, expected: hdfs://hdcluster

With hostname=cbh530 or cbh530.bdxdom.mck I get the same result, and likewise when I change the backup directory or the table list, and also when I use a backup set.

Can you help me, please?


Cloudera Employee

Hi @ANTONIOU Thierry,

Please check this property in your HDFS configuration: dfs.internal.nameservices (it may be under Custom hdfs-site).
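If you prefer the command line to the Ambari UI, the standard hdfs getconf tool can print the resolved value directly from the client configuration on the node where you run the backup:

```shell
# Print the nameservice(s) the local HDFS client configuration knows about.
# Requires a node with the cluster's client configs deployed.
hdfs getconf -confKey dfs.internal.nameservices
```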

Then run the full backup using it:

hbase backup create full hdfs://<nameservice>/apps/backup/hbase -all

and then the incremental

hbase backup create incremental hdfs://<nameservice>/apps/backup/hbase -all

In my case I tried the following:

hbase backup create full hdfs://hanamenode/tmp/backup -t emp

hbase backup create incremental hdfs://hanamenode/tmp/backup -t emp

Please let me know if you were able to run the incremental.
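To check whether the sessions completed, the backup CLI also keeps a history of past backup sessions; something like this should list them (exact output columns may vary by HDP version):

```shell
# List recorded backup sessions, including their type
# (FULL/INCREMENTAL) and completion state.
hbase backup history
```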



New Contributor

I tried this syntax and got this error:

hbase@cbh790.bdxdom.mck@/home/hbase>hbase backup create full hdfs://DEV/apps/backup/hbase/POC-ARCHI -set all_tables
2018-11-15 11:45:37,621 ERROR [main] util.AbstractHBaseTool: Error running command-line tool
java.lang.IllegalArgumentException: DEV
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(
at org.apache.hadoop.hdfs.DFSClient.<init>(
at org.apache.hadoop.hdfs.DFSClient.<init>(
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(
at org.apache.hadoop.fs.FileSystem.createFileSystem(
at org.apache.hadoop.fs.FileSystem.access$200(
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(
at org.apache.hadoop.fs.FileSystem$Cache.get(
at org.apache.hadoop.fs.FileSystem.get(
at org.apache.hadoop.fs.Path.getFileSystem(
at org.apache.hadoop.hbase.backup.util.BackupClientUtil.checkPathExist(
at org.apache.hadoop.hbase.backup.util.BackupClientUtil.checkTargetDir(
at org.apache.hadoop.hbase.client.HBaseAdmin.backupTablesAsync(
at org.apache.hadoop.hbase.client.HBaseAdmin.backupTables(
at org.apache.hadoop.hbase.client.HBaseBackupAdmin.backupTables(
at org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(
at org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(
at org.apache.hadoop.hbase.backup.BackupDriver.doWork(
at org.apache.hadoop.hbase.backup.BackupDriver.main(
Caused by: DEV
... 23 more

Cloudera Employee

Hi @ANTONIOU Thierry,

Please try:

hbase backup create full hdfs://hdcluster/tmp/backup -t emp


hbase backup create incremental hdfs://hdcluster/tmp/backup -t emp



New Contributor

Sorry, but I don't want to back up on the same cluster; I want to back up cluster1 into an HDFS directory on cluster2.

New Contributor


Sorry, but I want to back up my HBase tables to an HDFS directory on another cluster; these commands only allow backing up on the same cluster.

Does hbase backup broadcast on the LAN to find an HDP cluster by its name? I don't think so.

I can do a FULL backup, but not an incremental, with this command:

hbase backup create incremental hdfs://<Active Namenode>:8020/<HDFS Directory>
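For a cross-cluster target, it may also be worth confirming that cluster1's client configuration can resolve cluster2 before running the backup; a minimal check, assuming cluster2's nameservice is named hdcluster2 and is declared in cluster1's client hdfs-site.xml (both the name and the path here are hypothetical):

```shell
# Show which nameservices the local client configuration declares;
# the remote one must appear here for nameservice URIs to resolve.
hdfs getconf -confKey dfs.nameservices

# Verify the remote backup directory is reachable by nameservice URI
# (hypothetical nameservice "hdcluster2" and path).
hdfs dfs -ls hdfs://hdcluster2/apps/backup/hbase
```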

Thanks for your help.

Cloudera Employee

Hi @ANTONIOU Thierry,

Please open a case with Hortonworks support; there is a bug associated with this issue (EAR 9561).


Ariel Quilodran

New Contributor

Hi aquilodran,

Thanks for this answer, I will look into it.

