Created 06-05-2019 12:17 PM
Hi, I'm new to big data.
I'm working with HDP 2.6.5, Ambari 2.6.2, and HBase 1.1.2.
I would like to take weekly full backups of all HBase tables (covering all RegionServers and my HBase Master),
and daily incremental backups of the same.
No one has ever taken a backup here, which is why I am trying to set one up.
HBase is Kerberized, so I did a kinit as the hbase user.
I tried this command on my HBase Master (locally on it):
hbase backup create full hdfs://[FQDN server]:[port]/ -set /etc/hbase/hbase_backup_full/test -w 3
Goal: back up all tables using the backup set file called test (I created it, so I don't understand why it tells me it's empty; I also ran chown hbase:hbase on it, as well as on the hbase_backup_full directory).
Issue:
634 ERROR [main] util.AbstractHBaseTool: Error running command-line tool
IOException: Backup set '/etc/hbase/hbase_backup_full/test' is either empty or does not exist
Could anyone help me, please?
Do I need to modify some config files to do this? If yes, which ones?
Thanks 🙂
Created on 06-05-2019 10:01 PM - edited 08-17-2019 03:07 PM
I have tried to explain, as fully as possible, the process of taking a successful HBase backup. The one thing you must do for sure is enable the HBase backup feature by adding the parameters documented below.
There are a couple of other things to do, like copying core-site.xml to the hbase/conf directory. I hope this process helps you achieve your target. I have not covered incremental backups, whose documentation you can easily find (though see the hedged sketch near the end of this post).
Check the directories in HDFS:
[hbase@nanyuki ~]$ hdfs dfs -ls /
Found 12 items
drwxrwxrwx   - yarn  hadoop   0 2018-12-17 13:07 /app-logs
drwxr-xr-x   - hdfs  hdfs     0 2018-09-24 00:22 /apps
.......
drwxr-xr-x   - hdfs  hdfs     0 2019-01-29 06:06 /test
drwxrwxrwx   - hdfs  hdfs     0 2019-06-04 23:14 /tmp
drwxr-xr-x   - hdfs  hdfs     0 2018-12-17 13:04 /user
Created a backup directory in HDFS:
[root@nanyuki ~]# su - hdfs
Last login: Wed Jun 5 20:47:02 CEST 2019
[hdfs@nanyuki ~]$ hdfs dfs -mkdir /backup
[hdfs@nanyuki ~]$ hdfs dfs -chown hbase /backup
Validate the backup directory was created with the correct permissions:
[hdfs@nanyuki ~]$ hdfs dfs -ls /
Found 13 items
drwxrwxrwx   - yarn   hadoop   0 2018-12-17 13:07 /app-logs
drwxr-xr-x   - hdfs   hdfs     0 2018-09-24 00:22 /apps
drwxr-xr-x   - yarn   hadoop   0 2018-09-24 00:12 /ats
drwxr-xr-x   - hbase  hdfs     0 2019-06-05 21:11 /backup
.......
drwxr-xr-x   - hdfs   hdfs     0 2018-12-17 13:04 /user
Invoked the hbase shell to check the tables:
[hbase@nanyuki ~]$ hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.2.2.6.5.0-292, r897822d4dd5956ca186974c10382e9094683fa29, Fri May 11 08:00:59 UTC 2018
hbase(main):001:0> list_namespace
NAMESPACE
PDFTable
default
hbase
3 row(s) in 4.6610 seconds
Check the tables:
hbase(main):002:0> list_namespace_tables 'hbase'
TABLE
acl
meta
namespace
3 row(s) in 0.1970 seconds
HBase needs a table called backup in the hbase namespace, and it was missing. That table is created automatically once the HBase backup feature is enabled (see below), so I had to add the following parameters in Custom hbase-site, enabling HBase backup at the same time:
hbase.backup.enable=true
hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner
hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager
hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager
hbase.coprocessor.region.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.BackupObserver
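Here YOUR_PLUGINS and YOUR_CLASSES stand for whatever values those properties already hold; the backup classes are appended, comma-separated, rather than replacing them. A minimal sketch of what one such entry could look like in hbase-site.xml terms (the TimeToLiveLogCleaner shown as the pre-existing plugin is an assumption for illustration):
<property>
  <name>hbase.master.logcleaner.plugins</name>
  <!-- keep the existing cleaner(s), then append the backup log cleaner -->
  <value>org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,org.apache.hadoop.hbase.backup.master.BackupLogCleaner</value>
</property>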
After adding the above properties in Custom hbase-site and restarting HBase, the backup table was magically created:
[hbase@nanyuki ~]$ hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.2.2.6.5.0-292, r897822d4dd5956ca186974c10382e9094683fa29, Fri May 11 08:00:59 UTC 2018
hbase(main):001:0> list_namespace_tables 'hbase'
TABLE
acl
backup
meta
namespace
4 row(s) in 0.3230 seconds
Error with the backup command:
[hbase@nanyuki ~]$ hbase backup create "full" hdfs://nanyuki.kenya.ke:8020/backup -set texas
2019-06-05 22:28:37,475 ERROR [main] util.AbstractHBaseTool: Error running command-line tool
java.io.IOException: Backup set 'texas' is either empty or does not exist
    at org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:182)
    at org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:111)
    at org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:126)
    at org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:131)
To resolve the error "Backup set 'texas' is either empty or does not exist", I pre-emptively created the file:
[hdfs@nanyuki ~]$ hdfs dfs -touchz /backup/texas
[hdfs@nanyuki ~]$ hdfs dfs -ls /backup
Found 1 items
-rw-r--r--   3 hdfs hdfs          0 2019-06-05 22:51 /backup/texas
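As an aside: in the upstream HBase backup tooling, -set refers to a named backup set registered through the backup set subcommand, not a file in HDFS. Assuming this HDP backport behaves the same, a set called texas would normally be defined along these lines (the table list is illustrative):
# register tables under a named backup set "texas" (table names are illustrative)
hbase backup set add texas "PDFTable,testtable3"
# then reference the set by name when creating the backup
hbase backup create full hdfs://nanyuki.kenya.ke:8020/backup -set texas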
Check that core-site.xml is in the /.../hbase/conf directory; if not, copy it over as I did below:
[root@nanyuki ~]# cp /etc/hadoop/2.6.5.0-292/0/core-site.xml /etc/hbase/conf/
Validate the copy:
[root@nanyuki conf]# ls -lrt /etc/hbase/conf/
total 60
-rw-r--r-- 1 root  root   4537 May 11  2018 hbase-env.cmd
-rw-r--r-- 1 hbase hadoop  367 Sep 23  2018 hbase-policy.xml
-rw-r--r-- 1 hbase hadoop 4235 Sep 23  2018 log4j.properties
-rw-r--r-- 1 hbase root     18 Oct  1  2018 regionservers
............
-rw-r--r-- 1 root  root   3946 Jun  5 22:38 core-site.xml
Relaunched the hbase backup; it seemed frozen for a while, but at last I got a SUCCESS:
[hbase@nanyuki ~]$ hbase backup create "full" hdfs://nanyuki.kenya.ke:8020/backup
2019-06-05 22:52:57,024 INFO  [main] util.BackupClientUtil: Using existing backup root dir: hdfs://nanyuki.kenya.ke:8020/backup
Backup session backup_1559767977522 finished. Status: SUCCESS
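While the session appears frozen, a second terminal can poll it; assuming this backport supports the progress subcommand the way later HBase releases do, that would look like:
# show the completion percentage of a running (or finished) backup session
hbase backup progress backup_1559767977522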
Checked the YARN UI; a backup job was running, see the screenshot below: hbase_backup.PNG
After the backup completed successfully, see the screenshot below: hbase_backup2.PNG
Validate the backup was done; this gives details like start/end times, the type (FULL), etc.:
[hbase@nanyuki hbase-client]$ bin/hbase backup show
Unsupported command for backup: show
[hbase@nanyuki hbase-client]$ hbase backup history
ID          : backup_1559767977522
Type        : FULL
Tables      : ATLAS_ENTITY_AUDIT_EVENTS,jina,atlas_titan,PDFTable:DOCID001,PDFTable,testtable3
State       : COMPLETE
Start time  : Wed Jun 05 22:52:57 CEST 2019
End time    : Wed Jun 05 23:14:20 CEST 2019
Progress    : 100
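A single session can likely also be inspected by its ID (describe is the documented subcommand upstream; show, as seen above, is not accepted):
# detail one backup session by its ID
hbase backup describe backup_1559767977522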
The backup in the backup directory in HDFS:
[hdfs@nanyuki backup]$ hdfs dfs -ls /backup
Found 2 items
drwxr-xr-x   - hbase hdfs          0 2019-06-05 23:11 /backup/backup_1559767977522
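Since the original question asked for weekly full and daily incremental backups, here is a hedged sketch of how that could be scheduled with cron once the full backup works. The incremental create syntax mirrors the full one; the schedule, keytab path, and principal below are all assumptions to adapt to your cluster:
# /etc/cron.d/hbase-backup -- illustrative schedule, runs as the hbase user
# weekly full backup, Sundays at 01:00 (keytab path and principal are assumptions)
0 1 * * 0 hbase kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase && hbase backup create full hdfs://nanyuki.kenya.ke:8020/backup
# daily incremental backup, Monday through Saturday at 01:00
0 1 * * 1-6 hbase kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase && hbase backup create incremental hdfs://nanyuki.kenya.ke:8020/backup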
Happy hadooping
Reference: HBase backup commands
Created 06-07-2019 06:02 PM
Here are the issues:
[hbase@xxxx ~]$ bin/hbase backup show
-bash: bin/hbase: No such file or directory
[hbase@XXX ~]$ hdfs dfs -ls /$backup-dir
ls: `/-dir': No such file or directory
[hbase@XXXX ~]$ hdfs dfs -ls /backup
Found 3 items
drwx------ - hbase hdfs 0 2019-06-07 09:43 /backup/backup_1559893402505
-rw------- 3 hdfs hdfs 0 2019-06-06 09:14 /backup/test_full
drwx------ - hbase hdfs 0 2019-06-07 12:05 /backup/testesa
[hbase@xxxx ~]$ hbase backup history
ID : backup_1559901919878
Type : FULL
Tables : ATLAS_ENTITY_AUDIT_EVENTS,atlas_titan,newns:ATLAS_ENTITY_AUDIT_EVENTS,newns:atlas_titan
State : FAILED
Start time : Fri Jun 07 12:05:20 CEST 2019
Failed message : Failed of exporting snapshot snapshot_1559901920544_default_ATLAS_ENTITY_AUDIT_EVENTS to hdfs://xxx/backup/testesa/backup_1559901919878/default/ATLAS_ENTITY_AUDIT_EVENTS/ with reason code 1
Progress : 0
ID : backup_1559893402505
Type : FULL
Tables : ATLAS_ENTITY_AUDIT_EVENTS,atlas_titan,newns:ATLAS_ENTITY_AUDIT_EVENTS,newns:atlas_titan
State : FAILED
Start time : Fri Jun 07 09:43:22 CEST 2019
Failed message : Failed of exporting snapshot snapshot_1559893403048_default_ATLAS_ENTITY_AUDIT_EVENTS to hdfs://xxxx/backup/backup_1559893402505/default/ATLAS_ENTITY_AUDIT_EVENTS/ with reason code 1
Progress : 0
[hbase@xxxx ~]$
Created 06-07-2019 06:11 PM
That is not the backup command I am interested in; those commands are for listing and checking backups. I mean this one:
$ hbase backup create "full" hdfs://xxxxxxxxxxxxxxxxxxx
Show the command and the location of the backup directory.
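Also: the snapshot export step runs as a MapReduce job on YARN, and "reason code 1" is just that job's exit code. As a hedged next step, the failed application's logs should show the real cause (the application ID below is illustrative):
# find the failed snapshot-export application
yarn application -list -appStates FAILED | grep -i -e snapshot -e export
# fetch its aggregated logs
yarn logs -applicationId application_1560243212277_0001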
Created 06-07-2019 06:02 PM
Here are my issues:
[hbase@xxxx ~]$ hbase backup history
ID : backup_1559901919878
Type : FULL
Tables : ATLAS_ENTITY_AUDIT_EVENTS,atlas_titan,newns:ATLAS_ENTITY_AUDIT_EVENTS,newns:atlas_titan
State : FAILED
Start time : Fri Jun 07 12:05:20 CEST 2019
Failed message : Failed of exporting snapshot snapshot_1559901920544_default_ATLAS_ENTITY_AUDIT_EVENTS to hdfs://xxxxx/backup/testesa/backup_1559901919878/default/ATLAS_ENTITY_AUDIT_EVENTS/ with reason code 1
Progress : 0
[hbase@xxxxx ~]$ bin/hbase backup show
-bash: bin/hbase: No such file or directory
[hbase@xxxxx ~]$ hdfs dfs -ls /$backup-dir
ls: `/-dir': No such file or directory
[hbase@xxxxx ~]$ hdfs dfs -ls /backup
Found 2 items
-rw------- 3 hdfs hdfs 0 2019-06-06 09:14 /backup/test_full
drwx------ - hbase hdfs 0 2019-06-07 12:05 /backup/testesa
[hbase@xxxxx ~]$
Created 06-11-2019 09:47 AM
[hbase@server ~]$ hbase backup create "full" hdfs://server/backup
2019-06-11 10:53:32,100 INFO  [main] util.BackupClientUtil: Using existing backup root dir: hdfs://server/backup
Backup session finished. Status: FAILURE
2019-06-11 10:53:40,555 ERROR [main] util.AbstractHBaseTool: Error running command-line tool
java.io.IOException: Failed of exporting snapshot snapshot_1560243212806_default_ATLAS_ENTITY_AUDIT_EVENTS to hdfs://server/backup/backup_1560243212277/default/ATLAS_ENTITY_AUDIT_EVENTS/ with reason code 1
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45)
at org.apache.hadoop.hbase.client.HBaseAdmin$TableBackupFuture.convertResult(HBaseAdmin.java:2787)
at org.apache.hadoop.hbase.client.HBaseAdmin$TableBackupFuture.convertResult(HBaseAdmin.java:2766)
at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4779)
at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4713)
at org.apache.hadoop.hbase.client.HBaseAdmin.get(HBaseAdmin.java:2744)
at org.apache.hadoop.hbase.client.HBaseAdmin.backupTables(HBaseAdmin.java:2760)
at org.apache.hadoop.hbase.client.HBaseBackupAdmin.backupTables(HBaseBackupAdmin.java:243)
at org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:197)
at org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:111)
at org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:126)
at org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:131)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed of exporting snapshot snapshot_1560243212806_default_ATLAS_ENTITY_AUDIT_EVENTS to hdfs://server/backup/backup_1560243212277/default/ATLAS_ENTITY_AUDIT_EVENTS/ with reason code 1
at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.snapshotCopy(FullTableBackupProcedure.java:321)
at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:577)
at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:69)
at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:107)
at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:500)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1086)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:888)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:841)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$200(ProcedureExecutor.java:77)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$1.run(ProcedureExecutor.java:443)
Created 07-01-2019 03:38 PM
Subject closed.
Created 05-19-2022 02:47 PM
I got this error when enabling the HBase backup feature with the configuration below in hbase-site.xml:
<configuration>
<property>
<name>hbase.backup.enable</name>
<value>true</value>
</property>
<property>
<name>hbase.master.logcleaner.plugins</name>
<value>org.apache.hadoop.hbase.backup.master.BackupLogCleaner,...</value>
</property>
<property>
<name>hbase.procedure.master.classes</name>
<value>org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager,...</value>
</property>
<property>
<name>hbase.procedure.regionserver.classes</name>
<value>org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager,...</value>
</property>
<property>
<name>hbase.coprocessor.region.classes</name>
<value>org.apache.hadoop.hbase.backup.BackupObserver,...</value>
</property>
<property>
<name>hbase.master.hfilecleaner.plugins</name>
<value>org.apache.hadoop.hbase.backup.BackupHFileCleaner,...</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>false</value>
</property>
<property>
<name>hbase.tmp.dir</name>
<value>./tmp</value>
</property>
<property>
<name>hbase.unsafe.stream.capability.enforce</name>
<value>false</value>
</property>
</configuration>