Member since: 10-17-2017
Posts: 13
Kudos Received: 0
Solutions: 0
03-15-2018 11:01 AM
Hi, I am currently writing a Python script on my local machine that calls a shell script residing on a remote server. The shell script runs the HBase backup command to push a backup to S3. The shell script works fine, taking the backup and pushing it to S3, when executed on that remote machine; but when I call the script from my local machine through Python, it errors out (a sketch of the remote invocation is at the end of this post). Below is the command I am running in my shell script:

hbase backup create full s3a://$AccessKey:$SecretKey@$BucketPath -set $BackupSet

I get the error below:

ERROR [main] util.AbstractHBaseTool: Error running command-line tool
java.io.IOException: Failed of exporting snapshot snapshot_1521106075558_default_tablename to s3a://AccessKey:SecretKey@swe-backup-test/backup_1521106069410/default/tablename/ with reason code 1
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45)
    at org.apache.hadoop.hbase.client.HBaseAdmin$TableBackupFuture.convertResult(HBaseAdmin.java:2787)
    at org.apache.hadoop.hbase.client.HBaseAdmin$TableBackupFuture.convertResult(HBaseAdmin.java:2766)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4769)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4703)
    at org.apache.hadoop.hbase.client.HBaseAdmin.get(HBaseAdmin.java:2744)
    at org.apache.hadoop.hbase.client.HBaseAdmin.backupTables(HBaseAdmin.java:2760)
    at org.apache.hadoop.hbase.client.HBaseBackupAdmin.backupTables(HBaseBackupAdmin.java:243)
    at org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:197)
    at org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:111)
    at org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:126)
    at org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:131)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed of exporting snapshot snapshot_1521106075558_default_US_tablename to s3a://Accesskey:SecretKey@swe-backup-test/backup_1521106069410/default/tablename/ with reason code 1
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.snapshotCopy(FullTableBackupProcedure.java:321)
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:577)
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:69)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:107)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:500)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1086)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:888)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:841)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$200(ProcedureExecutor.java:77)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$1.run(ProcedureExecutor.java:443)
After some research I found the link below, which describes the same error I receive when I call my Python script from my local machine: https://issues.apache.org/jira/browse/HBASE-15853 That link says the /user/hbase directory should be present in HDFS and should be writable. I already have that directory in my HDFS and have made it writable, but I am still facing the same error. Can anybody please help me resolve this issue? It is urgent. @Jay Kumar SenSharma Thanks
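For reference, a minimal sketch of the kind of local-Python-to-remote-shell invocation described above, using paramiko over SSH. The host, user, key path, and script path below are placeholders, not the actual values from this setup. One thing the sketch highlights: a bare SSH exec channel gets a much sparser environment than an interactive login, which is a common reason a script works when run on the remote machine but fails when invoked remotely.

```python
# Minimal sketch: run a remote HBase backup script over SSH with paramiko.
# Host, user, key path, and script path are hypothetical placeholders.
import paramiko

HOST = "remote.example.com"              # hypothetical remote server
USER = "hbase"                           # hypothetical SSH user
SCRIPT = "/opt/scripts/hbase_backup.sh"  # hypothetical script path

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, key_filename="/home/me/.ssh/id_rsa")

# Use a login shell ("bash -l") so the remote session picks up the same
# environment (JAVA_HOME, HADOOP_CONF_DIR, etc.) as an interactive run;
# a bare exec channel often has a much sparser environment.
stdin, stdout, stderr = client.exec_command(f"bash -l {SCRIPT}")
exit_status = stdout.channel.recv_exit_status()

print("exit status:", exit_status)
print(stdout.read().decode())
print(stderr.read().decode())
client.close()
```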
Labels:
- Apache Hadoop
- Apache HBase
02-28-2018 11:45 AM
Hi, why am I not able to see the tables I create in the HBase shell in the SYSTEM.CATALOG table, while tables created through Phoenix are listed in SYSTEM.CATALOG? Can someone please explain why? Thanks!
Labels:
- Apache HBase
- Apache Phoenix
02-28-2018 06:53 AM
@stevel @Timothy Spann The restore command works fine after I removed the slash '/' at the end of my bucket path; the restore is successful. But I don't see any data in my SYSTEM tables. The user-created tables are restored properly with data as expected, but the SYSTEM tables don't show any records. Am I doing something wrong?
02-28-2018 06:47 AM
Hi, thanks for the reply @stevel.
1. I have added fs.access.key and fs.secret.key in my config file.
2. Just for testing purposes, I tried with the root directory.
3. When I execute hadoop fs -ls s3a://bucket/path-to-backup, I can see the backup that was created.
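(For reference, a minimal core-site.xml sketch of how S3A credentials are usually configured. Note that the standard Hadoop property names are fs.s3a.access.key and fs.s3a.secret.key, which may differ from the names quoted above; the values below are placeholders.)

```xml
<!-- Minimal sketch of S3A credential configuration in core-site.xml. -->
<!-- Standard Hadoop S3A property names; values are placeholders.     -->
<property>
  <name>fs.s3a.access.key</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>YOUR_SECRET_KEY</value>
</property>
```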
02-26-2018 06:54 AM
Hi, using the HBase full backup command I have successfully taken a backup and stored it in one of my test buckets in AWS S3. In the S3 bucket the backup name is stored in the format 'PRE backup_x1x2x3x'. Below is the backup command I ran:

hbase backup create full s3a://$AccessKey:$SecretKey@$BucketPath -set setname

Now, while doing a restore using the command below (a scripted sketch of the restore call appears at the end of this post):

hbase restore -set setname s3a://$AccessKey:$SecretKey@BucketPath PRE backup_x1x2x3x -overwrite

I get this error:

java.io.IOException: Could not find backup manifest .backup.manifest for PRE backup_x1x2x3x in s3a://AcessKey:SecretKey@BucketPath. Did PRE backup_x1x2x3x correspond to previously taken backup ?
    at org.apache.hadoop.hbase.backup.HBackupFileSystem.getManifestPath(HBackupFileSystem.java:111)
    at org.apache.hadoop.hbase.backup.HBackupFileSystem.getManifest(HBackupFileSystem.java:119)
    at org.apache.hadoop.hbase.backup.HBackupFileSystem.checkImageManifestExist(HBackupFileSystem.java:134)
    at org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:95)
    at org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:158)
    at org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:187)
    at org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.hbase.backup.RestoreDriver.main(RestoreDriver.java:192)

Can anybody please help me out with this? @Jay Kumar SenSharma I followed the steps from this link: https://hortonworks.com/blog/coming-hdp-2-5-incremental-backup-restore-apache-hbase-apache-phoenix/ Thanks.
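For what it's worth, a minimal Python sketch of how this restore call could be scripted, under one assumption worth checking: in an S3 listing, 'PRE' is the AWS CLI's marker for a prefix (folder) rather than part of the object name, so the backup ID passed to hbase restore may need to be just backup_x1x2x3x. All values below are placeholders; the argument order mirrors the restore command quoted above.

```python
# Minimal sketch: invoke the HBase restore from Python via subprocess.
# Assumption: "PRE" in the S3 listing is the CLI's prefix marker, not
# part of the backup name, so only "backup_x1x2x3x" is passed as the
# backup ID. All values below are placeholders.
import subprocess

access_key = "ACCESS_KEY"      # placeholder
secret_key = "SECRET_KEY"      # placeholder
bucket_path = "bucket/path"    # placeholder, no trailing slash
backup_id = "backup_x1x2x3x"   # placeholder backup ID (without "PRE")

cmd = [
    "hbase", "restore",
    "-set", "setname",
    f"s3a://{access_key}:{secret_key}@{bucket_path}",
    backup_id,
    "-overwrite",
]
result = subprocess.run(cmd, capture_output=True, text=True)
print("exit status:", result.returncode)
print(result.stdout)
print(result.stderr)
```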
Labels:
- Apache HBase
11-15-2017 08:07 AM
Hi, I came across two links:
1. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_support-matrices/content/ch_matrices-ambari.html
2. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.3/bk_support-matrices/content/ch_matrices-ambari.html#ambari_stack

The first link shows the stack compatibility, where HDP 2.6 is not compatible with Ambari 2.4.x but is compatible with 2.5. The second link shows HDP 2.6 as compatible with Ambari 2.4.x, 2.5.x, and 2.6.x. May I please know which one is correct, and whether Ambari 2.4.x supports HDP 2.6 or not? Thanks.
10-18-2017 09:19 AM
@Jay SenSharma Hi Jay, I did not find any errors in the Ranger logs, and netstat -tnlpa | grep 6080 did not return any output. I also tried to telnet from the machine where the browser is open, i.e. my local machine; it says:

# telnet $RANGER_HOST 6080
Connecting To 10.193.5.45...Could not open connection to the host, on port 6080: Connect failed

I also tried to ping from my local machine to those Linux VMs (Ambari, Host, ZooKeeper, Node); it says 'Request timed out', even though all the VMs are up and running. Could this be related to my local system's firewall?
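(As an aside, a minimal Python sketch of the same reachability check the telnet above performs; the host and port are taken from the telnet output above.)

```python
# Minimal sketch: check TCP reachability of the Ranger admin port,
# equivalent to the telnet test above.
import socket

HOST = "10.193.5.45"  # Ranger host from the telnet output above
PORT = 6080           # Ranger admin UI port

try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"port {PORT} on {HOST} is reachable")
except OSError as exc:
    # A timeout here (rather than "connection refused") usually points
    # at a firewall or routing problem between client and VM, matching
    # the "Request timed out" seen when pinging the VMs.
    print(f"cannot reach {HOST}:{PORT}: {exc}")
```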
10-17-2017 10:52 AM
@Jay SenSharma Versions: HDP 2.5.3.0 and Ranger 0.6.0. For the permissions, I got the following:

# ls -lart /etc/ranger/admin/.rangeradmin.jceks.crc
-rw-r----- 1 root root 12 Oct 17 09:30 /etc/ranger/admin/.rangeradmin.jceks.crc
10-17-2017 10:43 AM
Hi, I am new to HBase and Ambari. I am exploring options for user authentication in Phoenix via Ranger services. Right now I am stuck with the Ranger Admin UI, which is not opening even though port 6080 is open.

Steps:
1. Created a test Ambari cluster with one master, one ZooKeeper, and one node.
2. Installed Ranger services through the Ambari UI.
3. Set up MySQL on the master where the Ranger services are installed.
4. Created a Ranger user and a ranger database in MySQL.

Now, when I try to open the Ranger Admin UI from the quick links, which opens at `http://<masters-hostname>:6080`, the page does not open (I stopped the firewall to test if that was the cause, but no luck); it says "Site can't be reached, <hostname> is taking too long to respond". I checked the logs in /var/log/ranger/admin/xa_portal.log. I don't see any errors in the logs, only the warning below:

WARN org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker (ChecksumFileSystem.java:165) - Problem opening checksum file: file:/etc/ranger/admin/rangeradmin.jceks. Ignoring exception:
java.io.FileNotFoundException: /etc/ranger/admin/.rangeradmin.jceks.crc (Permission denied)

Could anybody please help me with this issue? Thanks.
Labels:
- Apache Ranger