05-16-2018 10:33 AM
Hi, we are researching how to implement "data at rest encryption" and "data on the wire encryption" for HBase. So far we have explored Ranger KMS for encrypting data at rest in our Hadoop cluster. My question: if we move to AWS, where EBS provides encryption of data at rest, is that sufficient, or should Ranger KMS also be present to support encryption? Please provide some input so we can better understand how this works. Thanks
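For context, a minimal sketch of how HDFS transparent data encryption (TDE) backed by Ranger KMS is set up, to contrast with EBS: EBS encrypts at the block-device layer (protecting against lost or copied volumes), while TDE encrypts per-directory encryption zones inside HDFS, with key access mediated and audited by Ranger KMS. This assumes Ranger KMS is already configured as the key provider in core-site.xml; the key name and zone path below are hypothetical:

# Assumes hadoop.security.key.provider.path points at Ranger KMS.
# Key name and zone path are hypothetical examples.
hadoop key create hbase_key
# The target directory must exist and be empty when the zone is created.
hdfs crypto -createZone -keyName hbase_key -path /apps/hbase
hdfs crypto -listZones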
04-27-2018 06:34 AM
Hi, I am using Ranger to test authorization of HBase tables by enabling the HBase plugin. I have created a few test users on my Linux box and used the Ranger APIs to create policies giving specific permissions to my test users. Everything works fine. But I want to know whether I can grant/revoke permissions on those HBase tables through Phoenix. I tried running a GRANT command from Phoenix, but it did not work; it threw a syntax error. I need to understand whether this is possible while Ranger is running in my cluster. Thanks.
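For reference, a minimal sketch of the kind of Ranger REST call used to create an HBase policy; the service name, table, and user are hypothetical placeholders, while the resource and access-type names follow Ranger's HBase service definition:

# Hypothetical policy granting 'read' on an HBase table to testuser1.
curl -u admin:admin -X POST -H "Content-Type: application/json" http://<ranger-host>:6080/service/public/v2/api/policy -d '{
  "service": "cluster_hbase",
  "name": "testuser1_read",
  "resources": {
    "table": { "values": ["test_table"] },
    "column-family": { "values": ["*"] },
    "column": { "values": ["*"] }
  },
  "policyItems": [
    { "accesses": [ { "type": "read", "isAllowed": true } ], "users": ["testuser1"] }
  ]
}'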
03-15-2018 11:01 AM
Hi, I am currently writing a Python script on my local machine that calls a shell script residing on a remote server. The shell script contains the HBase backup command to push a backup to S3. The shell script works fine, taking the backup and pushing it to S3, when executed on that remote machine. But when I call that script from my local machine through Python, it errors out. Below is the command I run in my shell script:

hbase backup create full s3a://$AccessKey:$SecretKey@$BucketPath -set $BackupSet

I get this error:

ERROR [main] util.AbstractHBaseTool: Error running command-line tool
java.io.IOException: Failed of exporting snapshot snapshot_1521106075558_default_tablename to s3a://AccessKey:SecretKey@swe-backup-test/backup_1521106069410/default/tablename/ with reason code 1
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45)
    at org.apache.hadoop.hbase.client.HBaseAdmin$TableBackupFuture.convertResult(HBaseAdmin.java:2787)
    at org.apache.hadoop.hbase.client.HBaseAdmin$TableBackupFuture.convertResult(HBaseAdmin.java:2766)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4769)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4703)
    at org.apache.hadoop.hbase.client.HBaseAdmin.get(HBaseAdmin.java:2744)
    at org.apache.hadoop.hbase.client.HBaseAdmin.backupTables(HBaseAdmin.java:2760)
    at org.apache.hadoop.hbase.client.HBaseBackupAdmin.backupTables(HBaseBackupAdmin.java:243)
    at org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:197)
    at org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:111)
    at org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:126)
    at org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:131)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed of exporting snapshot snapshot_1521106075558_default_US_tablename to s3a://Accesskey:SecretKey@swe-backup-test/backup_1521106069410/default/tablename/ with reason code 1
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.snapshotCopy(FullTableBackupProcedure.java:321)
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:577)
    at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:69)
    at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:107)
    at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:500)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1086)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:888)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:841)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$200(ProcedureExecutor.java:77)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$1.run(ProcedureExecutor.java:443)
After some research I found the link below, which shows the same error I receive when I call my Python script from my local machine: https://issues.apache.org/jira/browse/HBASE-15853 There, they say the /user/hbase directory should be present in HDFS and should be writable. That directory is already present in my HDFS and I have made it writable, but I am still facing the same error. Can anybody please help me resolve this issue? It is urgent. @Jay Kumar SenSharma Thanks
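One difference between running the script interactively and invoking it remotely is the shell environment: a non-interactive SSH session does not source the login profile, so variables like $AccessKey and $SecretKey may be empty when Python triggers the script. A sketch of forcing a login shell; the host, user, and script path are hypothetical:

# Run the remote backup script under a login shell so profile-level
# environment variables are loaded; host, user, and path are hypothetical.
ssh hbase@remote-host 'bash -l -c "/opt/scripts/hbase_s3_backup.sh"'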
02-28-2018 11:45 AM
Hi, why am I not able to see the tables I create in the HBase shell in the SYSTEM.CATALOG table, while tables created through Phoenix are listed in SYSTEM.CATALOG? Can someone please explain why? Thanks!
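For background: SYSTEM.CATALOG is Phoenix's own metadata table, so it only lists tables created or explicitly mapped through Phoenix; tables created directly in the HBase shell stay invisible to it until mapped. A sketch of mapping an existing HBase table; the ZooKeeper quorum, table, and column names are hypothetical:

# Map a native HBase table into Phoenix so it appears in SYSTEM.CATALOG.
# Quorum, table, and column names are hypothetical.
cat > /tmp/map_table.sql <<'EOF'
CREATE VIEW "native_hbase_table" ( pk VARCHAR PRIMARY KEY, "cf"."col1" VARCHAR );
EOF
/usr/hdp/current/phoenix-client/bin/sqlline.py zk-host:2181:/hbase-unsecure /tmp/map_table.sql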
02-28-2018 06:53 AM
@stevel @Timothy Spann The restore command works fine after I removed the slash '/' at the end of my bucket path, and the restore is successful. But I don't see any data in my SYSTEM tables. The user-created tables are restored properly with data, as expected, but the SYSTEM tables don't show any records. Am I doing something wrong?
02-28-2018 06:47 AM
Hi, thanks for the reply @stevel
1. I have added fs.s3a.access.key and fs.s3a.secret.key in my config file.
2. Just for testing purposes, I tried with the root directory.
3. When I execute hadoop fs -ls s3a://bucket/path-to-backup, I can see the backup that was created.
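For reference, a sketch of how those S3A credential properties look in core-site.xml; the values are placeholders:

<property>
  <name>fs.s3a.access.key</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>YOUR_SECRET_KEY</value>
</property>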
02-26-2018 06:54 AM
Hi, using the HBase full backup command I successfully took a backup and stored it in one of my test buckets in AWS S3. In the S3 bucket the backup name is shown in the format 'PRE backup_x1x2x3x'. Below is the backup command I ran:

hbase backup create full s3a://$AccessKey:$SecretKey@$BucketPath -set setname

Now, while restoring with the command below:

hbase restore -set setname s3a://$AccessKey:$SecretKey@BucketPath PRE backup_x1x2x3x -overwrite

I get this error:

java.io.IOException: Could not find backup manifest .backup.manifest for PRE backup_x1x2x3x in s3a://AcessKey:SecretKey@BucketPath. Did PRE backup_x1x2x3x correspond to previously taken backup ?
at org.apache.hadoop.hbase.backup.HBackupFileSystem.getManifestPath(HBackupFileSystem.java:111)
at org.apache.hadoop.hbase.backup.HBackupFileSystem.getManifest(HBackupFileSystem.java:119)
at org.apache.hadoop.hbase.backup.HBackupFileSystem.checkImageManifestExist(HBackupFileSystem.java:134)
at org.apache.hadoop.hbase.backup.impl.RestoreClientImpl.restore(RestoreClientImpl.java:95)
at org.apache.hadoop.hbase.backup.RestoreDriver.parseAndRun(RestoreDriver.java:158)
at org.apache.hadoop.hbase.backup.RestoreDriver.doWork(RestoreDriver.java:187)
at org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.hbase.backup.RestoreDriver.main(RestoreDriver.java:192)

Can anybody please help me with this? @Jay Kumar SenSharma I followed the steps in this link: https://hortonworks.com/blog/coming-hdp-2-5-incremental-backup-restore-apache-hbase-apache-phoenix/ Thanks.
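One likely culprit: 'PRE' is not part of the backup name. It is the prefix (directory) marker that S3 listings print, e.g. 'aws s3 ls' shows folders as 'PRE <name>'. The backup ID is just backup_x1x2x3x, so a restore of the form below (mirroring the original syntax minus 'PRE') may work; the placeholders are the same as above:

# 'PRE' is the S3 listing's prefix marker, not part of the backup ID.
hbase restore -set setname s3a://$AccessKey:$SecretKey@$BucketPath backup_x1x2x3x -overwrite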
02-19-2018 06:51 AM
Hi, I am trying to take a full backup using the command "hbase backup create full </local/path>". In my HBase there is only one user-created table (for testing purposes); all the others are default HBase tables. I get the error below when I run the backup command. Can someone please tell me what configuration I need to add in order to resolve this timeout issue?

Backup session finished. Status: FAILURE
2018-02-19 06:10:24,928 ERROR [main] util.AbstractHBaseTool: Error running command-line tool
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.errorhandling.ForeignException): org.apache.hadoop.hbase.errorhandling.TimeoutException: Timeout elapsed! Source:Timeout caused Foreign Exception Start:1519020556896, End:1519020616896, diff:60000, max:60000 ms
at org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher.rethrowException(ForeignExceptionDispatcher.java:83)
at org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager.execProcedure(LogRollMasterProcedureManager.java:129)
at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.execProcedure(MasterProcedureUtil.java:93)
at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:525)
at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:69)
at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:107)
at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:500)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1086)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:888)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:841)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$200(ProcedureExecutor.java:77)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$1.run(ProcedureExecutor.java:443)
I have already tried adding the property below in my hbase-site.xml file, but I still get the timeout:

<property>
<name>hbase.snapshot.region.timeout</name>
<value>300000</value>
</property>
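Since this timeout is raised in LogRollMasterProcedureManager rather than during region snapshotting, the region-level snapshot timeout may not be the property that applies here. One hedged alternative is the master-side snapshot timeout below; whether it governs the backup log-roll procedure is an assumption, but its 60-second default in some versions matches the 60000 ms limit seen in the error:

<!-- Assumption: master-side timeout governing this procedure. -->
<property>
  <name>hbase.snapshot.master.timeout.millis</name>
  <value>300000</value>
</property>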
12-06-2017 05:53 AM
Hi, I have upgraded my HDP to version 2.6.2 and am testing Ranger 0.7. I added a new user using the REST API from the host machine as below:

curl -u admin:admin -v -i -s -X POST -H "Accept: application/json" -H "Content-Type: application/json" http://localhost:6080/service/xusers/users -d @/file_path/newuser.json

The command above executed successfully, creating a new user with id = 23, and when I use the curl command below to fetch that user, it returns the recently added user. But the Ranger UI does not show the new user. Can anyone please tell me why this is happening?

curl -u admin:admin -v -i -s -X GET http://localhost:6080/service/xusers/users/23

Thanks.
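For comparison, a sketch of the kind of payload this endpoint accepts; the field names follow Ranger's VXUser model and all values here are hypothetical:

# Hypothetical newuser.json payload for /service/xusers/users.
cat > newuser.json <<'EOF'
{
  "name": "testuser1",
  "firstName": "Test",
  "lastName": "User",
  "password": "Password123",
  "userRoleList": ["ROLE_USER"]
}
EOF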
11-15-2017 08:07 AM
Hi, I came across two links:
1. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_support-matrices/content/ch_matrices-ambari.html
2. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.3/bk_support-matrices/content/ch_matrices-ambari.html#ambari_stack
The first link shows the stack compatibility matrix, where HDP 2.6 is not compatible with Ambari 2.4.x but is compatible with 2.5. The second link shows HDP 2.6 as compatible with Ambari 2.4.x, 2.5.x, and 2.6.x. May I please know which one is correct, and whether or not Ambari 2.4.x supports HDP 2.6? Thanks.
10-18-2017 09:19 AM
@Jay SenSharma Hi Jay,
I did not find any errors in the Ranger logs, and netstat -tnlpa | grep 6080 did not return any output. I also tried telnet from the machine where the browser is open, i.e. my local machine:

# telnet $RANGER_HOST 6080
Connecting To 10.193.5.45...Could not open connection to the host, on port 6080: Connect failed

I also tried to ping those Linux VMs (Ambari, host, ZooKeeper, node) from my local machine, and it says "Request timed out" even though all the VMs are up and running. Is this related to my local system's firewall?
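Given that netstat shows nothing listening on 6080, the admin process itself may be down, independent of any client-side firewall. A sketch of checks to run on the Ranger host; the process and service names are assumptions that can vary by HDP version:

# Run these on the Ranger host itself.
ps -ef | grep -i rangeradmin        # is the admin process running?
netstat -tnlp | grep 6080           # is anything bound to 6080 locally?
systemctl status firewalld          # is a host firewall active?
iptables -L -n | grep 6080          # any rule filtering the port?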
10-17-2017 10:52 AM
@Jay SenSharma Versions: HDP 2.5.3.0 and Ranger 0.6.0. For the permissions I got the output below:

# ls -lart /etc/ranger/admin/.rangeradmin.jceks.crc
-rw-r----- 1 root root 12 Oct 17 09:30 /etc/ranger/admin/.rangeradmin.jceks.crc
10-17-2017 10:43 AM
Hi, I am new to HBase and Ambari. I am exploring options for user authentication in Phoenix via Ranger services. Right now I am stuck: the Ranger Admin UI is not opening even though port 6080 is open. Steps:
1. Created a test Ambari cluster with one master, one ZooKeeper, one node.
2. Installed Ranger services through the Ambari UI.
3. Set up MySQL on the master where the Ranger services are installed.
4. Created the Ranger user and Ranger database in MySQL.
Now, when I open the Ranger Admin UI from the quick links at `http://<masters-hostname>:6080`, the page does not load (I stopped the firewall to test whether that was the cause, but no luck); it says "Site can't be reached, <hostname> is taking too long to respond". I have checked the logs in /var/log/ranger/admin/xa_portal.log. I don't see any errors, only the warning below:

WARN org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker (ChecksumFileSystem.java:165) - Problem opening checksum file: file:/etc/ranger/admin/rangeradmin.jceks. Ignoring exception:
java.io.FileNotFoundException: /etc/ranger/admin/.rangeradmin.jceks.crc (Permission denied)
Could anybody please help me with this issue? Thanks
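From the ls output in the reply above, the checksum file is owned by root with mode 640, so the Ranger admin service account cannot read it. A sketch of a possible fix, assuming Ranger Admin runs as the ranger user (verify the actual service account first):

# Assumption: Ranger Admin runs as the 'ranger' user; confirm with
# ps -ef | grep -i rangeradmin before changing ownership.
chown ranger: /etc/ranger/admin/rangeradmin.jceks /etc/ranger/admin/.rangeradmin.jceks.crc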