Member since: 10-01-2015
Posts: 52
Kudos Received: 25
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2240 | 09-29-2016 02:09 PM
 | 873 | 09-28-2016 12:32 AM
 | 3852 | 08-30-2016 09:56 PM
10-05-2016
07:46 PM
2 Kudos
We tried 3 scenarios and all failed.
1) Creating a Hive table on HBase (with proper Hive/HBase policies for the user):
hive> CREATE TABLE hbase_table_1(key int,
value string) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH
SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "xyz",
"hbase.mapred.output.outputtable" = "xyz");
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask.
MetaException(message:org.apache.hadoop.hbase.security.AccessDeniedException:
org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient
permissions for user 'abc@ABC.COM' (action=create)
at
org.apache.ranger.authorization.hbase.AuthorizationSession.publishResults(AuthorizationSession.java:254)
at
org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.authorizeAccess(RangerAuthorizationCoprocessor.java:595)
at
org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.requirePermission(RangerAuthorizationCoprocessor.java:664)
at
org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.preCreateTable(RangerAuthorizationCoprocessor.java:769)
at
org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.preCreateTable(RangerAuthorizationCoprocessor.java:496)
at
org.apache.hadoop.hbase.master.MasterCoprocessorHost$11.call(MasterCoprocessorHost.java:216)
at
org.apache.hadoop.hbase.master.MasterCoprocessorHost.execOperation(MasterCoprocessorHost.java:1140)
at
org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateTable(MasterCoprocessorHost.java:212)
at
org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1533)
at
org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:454)
at
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55401)
at
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at
org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
2) Creating a table directly in HBase:
hbase(main):001:0> create 'emp', 'personal', 'professional'
ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions for user 'abc@ABC.COM' (action=create)
3) Using grant from the HBase shell:
hbase(main):001:0> grant 'abc', 'RWCA'
ERROR: org.apache.hadoop.hbase.coprocessor.CoprocessorException: SSLContext must not be null
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.grant(RangerAuthorizationCoprocessor.java:1171)
at org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.grant(AccessControlProtos.java:9933)
at org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10097)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7553)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1878)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1860)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
The easy workaround we tried was to disable SSL, and that worked.
We also tried creating different policies; a policy on the namespace 'default:*' worked, but users were ONLY able to create the table.
They were NOT able to scan the table, and issue (3) still had the same problem:
ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions for user 'abc@ABC.COM', action: scannerOpen, tableName: hello1, family: col1.
The RegionServers were playing an important role here with the Ranger policies.
While the HBase Master was able to write to its policy cache, the policy cache on the RegionServers was 0 bytes (communication blocked). We distributed the keystore and truststore from the HBase Master to all the worker nodes running a RegionServer, restarted HBase, and this solved the issue.
How to do it: in Ambari > HBase > Configs, filter on "SSL":
1) xasecure.policymgr.clientssl.keystore - find the /path/keystore.file.name and distribute it to all the machines (keep the path/file name the same).
2) xasecure.policymgr.clientssl.truststore - find the /path/truststore.file.name and distribute it to all the machines (keep the path/file name the same).
A rough sketch of the distribution step is below.
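A minimal sketch, assuming the keystore/truststore paths configured in the Ambari properties above and placeholder RegionServer hostnames:

# Paths and hostnames are placeholders; keep the same path/file name on every node,
# matching the xasecure.policymgr.clientssl.* properties in Ambari.
KEYSTORE=/path/keystore.file.name
TRUSTSTORE=/path/truststore.file.name
for host in regionserver1 regionserver2 regionserver3; do
  scp "$KEYSTORE"   "${host}:${KEYSTORE}"
  scp "$TRUSTSTORE" "${host}:${TRUSTSTORE}"
done
# Then restart HBase (Master and RegionServers) from Ambari so the plugins reload the stores.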
Hope this will help. Thanks,
Mayank
09-29-2016
02:09 PM
Figured it out. I generated a new cert with the signature algorithm sha256RSA (signature hash algorithm SHA-256); the ones I had earlier were sha1RSA and SHA-1 respectively. SHA-1 is weak, and while IE doesn't seem to care, Chrome was not happy about it. If it's an internal-only cluster and you are using a local CA (internal or self-signed), you can still live with SHA-1: you will still get a TLS connection and secure resources, but with the warning "This page is insecure (broken HTTPS)". Hope this helps, and thanks to the community. A rough example of generating a SHA-256-signed cert is below.
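An illustrative sketch only (file names and subject are placeholders); for a CA-signed cert you would submit a CSR instead, but the key point is requesting SHA-256 as the signature hash:

# Self-signed example: 2048-bit RSA key, certificate signed with sha256WithRSAEncryption
openssl req -x509 -newkey rsa:2048 -nodes -sha256 \
  -keyout server.key -out server.crt -days 730 \
  -subj "/CN=host.example.com"
# Confirm the signature algorithm of the new cert
openssl x509 -in server.crt -noout -text | grep "Signature Algorithm"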
Regards,
Mayank
09-27-2016
11:16 PM
We have set up a one-way trust from AD to the KDC and validated that we were able to get tickets from AD.
Kerberos realm - ABC.NET
AD domain - XYZ.COM
Validation: kinit ad_user@XYZ.COM, and listing tickets showed a valid ticket. Now, just a few hours later, the trust is broken: the same command, kinit ad_user@XYZ.COM, gives
kinit: Cannot find KDC for realm "XYZ.COM" while getting initial credentials
Thanks in advance. Just want to hit myself on the face.
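For context, a minimal sketch of the krb5.conf entries kinit relies on to locate the KDCs for the AD realm (hostnames are placeholders; if DNS SRV lookups are used instead, these stanzas may be absent):

[realms]
  ABC.NET = {
    kdc = kdc.abc.net              # placeholder: local MIT KDC
    admin_server = kdc.abc.net
  }
  XYZ.COM = {
    kdc = ad-dc.xyz.com            # placeholder: an AD domain controller
  }

[domain_realm]
  .xyz.com = XYZ.COM
  xyz.com = XYZ.COM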
Labels: Kerberos
09-23-2016
03:21 PM
Works in IE, but it's still broken in Chrome. Any advice/help is appreciated.
09-13-2016
02:47 PM
Ranger 0.5.0.2.4. I will try your suggestion; however, since the original error was (actual: 2465, maximum: 2000), do we still need to take it to 4000? Thanks
09-13-2016
02:24 PM
1 Kudo
We are running Ranger on Oracle.
We faced a common error (in the NN logs and Hive logs):
ORA-12899: value too large for column "RANGERDBA"."XA_AUDIT"."REQUEST_DATA" (actual: 2465, maximum: 2000)
We changed REQUEST_DATA from VARCHAR2(2000) to VARCHAR2(3000). Now the new error is:
ORA-12899: value too large for column "RANGERDBA"."XA_AUDIT"."REQUEST_DATA" (actual: 3465, maximum: 3000)
This is a fresh installation on Oracle only, not migrated from MySQL.
In my past experience we had the same issue where the DB was migrated from MySQL and there were some non-ASCII characters; we fixed it by changing the column to VARCHAR2(2000 CHAR) from VARCHAR2(2000). Will the same solution fix this (roughly the change sketched below)?
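A hedged sketch of that change, with the schema/table/column names taken from the error text above (verify against the actual Ranger audit table first), using CHAR length semantics so the limit counts characters rather than bytes (the column is still capped at 4000 bytes on a standard Oracle setup):

-- Names follow the error message above; adjust to the real audit table/column.
ALTER TABLE RANGERDBA.XA_AUDIT MODIFY (REQUEST_DATA VARCHAR2(4000 CHAR));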
What can cause this issue? Thanks
Mayank
Labels: Apache Ranger
09-08-2016
06:29 PM
We are using a single wildcard cert provided by the enterprise CA. Thanks
Mayank
09-08-2016
06:08 PM
While securing the Ambari Server for HTTPS, we can successfully log in over https on the default port 8443; however, the "https" in the address bar is struck out and the browser says "This page is insecure (broken HTTPS)". We are using wildcard certs that were initially in .cer format, and we had to convert them to .pem format using openssl (roughly as sketched below). What is the preferred format and encryption for the certs?
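A rough sketch of that kind of conversion, assuming the .cer file is DER-encoded (if it is already Base64/PEM, only the extension needs to change); file names are placeholders:

openssl x509 -inform der -in wildcard.cer -out wildcard.pem
# Sanity check: print the subject and issuer of the converted cert
openssl x509 -in wildcard.pem -noout -subject -issuer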
The current errors say:
1) SHA-1 Certificate: The certificate for this site expires in 2017 or later, and the certificate chain contains a certificate signed using SHA-1.
2) Certificate Error: There are issues with the site's certificate chain (net::ERR_CERT_COMMON_NAME_INVALID).
Thanks,
Mayank
Labels: Apache Ambari
08-30-2016
09:56 PM
Found it. I would have left it like that, but since it is in production, sometimes you just need to know. For replication, Falcon needs to be aware of the local and remote cluster IDs via the dfs.nameservices property; mine has a value like "prodID,devID" (a sketch of the hdfs-site.xml entry is below).
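A minimal hdfs-site.xml sketch, using the nameservice IDs from the example value above:

<!-- Both the local and remote nameservice IDs are listed, which is why
     the balancer also tries to reach the remote cluster. -->
<property>
  <name>dfs.nameservices</name>
  <value>prodID,devID</value>
</property>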
What the balancer does is try to reach both nameservices. If I run the command in the prod cluster as hdfs with a proper ticket, it throws an error for "hdfs-prod" (which is my principal without the REALM), but it still balances the prod cluster. So the error, although not clear, is actually a permission denied on the remote nameservice (which makes sense), since the user is still "hdfs" but the principal is different ("hdfs-dev" in my case). I ran the same command in dev and that cluster was rebalanced, but I got the same error, this time "Access denied for user hdfs-dev. Superuser privilege is required." Thanks for the support @emaxwell, @mqureshi, @Kuldeep Kulkarni. I hope the above answer will help others. (A few other hdfs commands show no effect/errors.) Thanks
Mayank