Member since: 01-27-2016
Posts: 14
Kudos Received: 8
Solutions: 0
02-21-2017
09:25 PM
1 Kudo
Hi, I'm confused about why Ambari starts Ranger Admin before HDFS. Why is there no dependency issue when HDFS is used as the Ranger audit sink while HDFS itself is ACL-controlled by Ranger? Can HDFS be started before Ranger when the HDFS plugin is enabled, or will the HDFS Ranger plugin ACLs break? I'm wondering why HDFS isn't started before Ranger when it's used as Ranger's audit sink. Thanks.
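For context on where that audit sink is configured, here is a hedged way to inspect the HDFS plugin's audit and policy-cache settings on a NameNode host; the file path, spool directory, and cache location below are assumptions based on HDP defaults, not values taken from this cluster:
# show the HDFS plugin's audit destination settings (path assumes an HDP-style layout)
grep -A1 'xasecure.audit.destination.hdfs' /etc/hadoop/conf/ranger-hdfs-audit.xml
# the plugin spools audit events locally and retries, so audits shouldn't be lost
# while HDFS is coming up; this spool dir is the assumed HDP default
ls /var/log/hadoop/hdfs/audit/hdfs/spool
# the plugin also keeps a local policy cache (assumed default location)
ls /etc/ranger/*/policycache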
Labels:
Apache Ranger
03-04-2016
07:18 PM
1 Kudo
@vperiasamy This is the relevant section of the blueprint as exported directly:

{
  "admin-properties" : {
    "properties_attributes" : {
      "db_root_password" : {
        "toMask" : "true"
      },
      "audit_db_password" : {
        "toMask" : "true"
      },
      "db_password" : {
        "toMask" : "true"
      }
    },
    "properties" : {
      "audit_db_user" : "rangerlogger",
      "db_root_user" : "rangerdba",
      "DB_FLAVOR" : "MYSQL",
      "db_name" : "ranger",
      "policymgr_external_url" : "http://%HOSTGROUP::host_group_1%:6080",
      "db_user" : "rangeradmin",
      "SQL_CONNECTOR_JAR" : "/usr/share/java/mysql-connector-java.jar",
      "db_host" : "localhost",
      "audit_db_name" : "ranger_audit"
    }
  }
}

When I replace { "toMask" : "true" } with the actual password, I see formatting errors when registering the blueprint. What would be the correct format? For now I set the password as the default_password in the cluster-creation template where I specify which blueprint to use. Thanks!!
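For anyone hitting the same question, here is a rough sketch of how the passwords can be supplied in the cluster-creation template rather than in the blueprint itself; the blueprint name, cluster name, hosts, and passwords below are placeholders, not values from this cluster:
# write a cluster-creation template that carries the passwords
cat > cluster_template.json <<'EOF'
{
  "blueprint" : "ranger-blueprint",
  "default_password" : "ChangeMe123",
  "configurations" : [
    { "admin-properties" : {
        "properties" : {
          "db_password" : "ChangeMe123",
          "db_root_password" : "ChangeMe123",
          "audit_db_password" : "ChangeMe123"
        }
    } }
  ],
  "host_groups" : [
    { "name" : "host_group_1", "hosts" : [ { "fqdn" : "node1.example.com" } ] }
  ]
}
EOF
# submit it against the registered blueprint (Ambari host and credentials are placeholders)
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  -d @cluster_template.json http://AMBARI_HOST:8080/api/v1/clusters/mycluster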
03-01-2016
03:16 AM
1 Kudo
@vperiasamy How should the blueprint JSON be updated to account for the DB passwords? Where are you supposed to enter the passwords? I'm not able to register the blueprint when I include the passwords for the DB users in the blueprint JSON.
02-24-2016
07:26 PM
4 Kudos
Are there still Ranger prerequisites to satisfy before you can install a fresh cluster from a blueprint that includes Ranger? For example, do we still have to modify permissions on the database and register the appropriate JDBC driver with:

ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar

I noticed that if I export a blueprint from a cluster that had Ranger installed and try to use it on a new cluster, it complains about a missing JDBC driver and MySQL permission issues (basically the Ranger prerequisites). Am I missing something that would let the blueprint install without these extra steps?
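For reference, a hedged sketch of the pre-install steps I mean; the package name, DB user, and grants follow the HDP documentation of that era and are assumptions for any particular environment:
# install the MySQL JDBC driver and register it with Ambari
yum install -y mysql-connector-java
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
# give the Ranger DBA user the privileges Ranger setup expects (user/password are placeholders)
mysql -u root -p -e "CREATE USER 'rangerdba'@'%' IDENTIFIED BY 'rangerdba'; GRANT ALL PRIVILEGES ON *.* TO 'rangerdba'@'%' WITH GRANT OPTION; FLUSH PRIVILEGES;"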
Labels:
Apache Ranger
02-11-2016
06:47 AM
1 Kudo
If I previously didn't use Ranger KMS but used Hadoop KMS to manage my keys: will I lose the keys in Hadoop KMS when I start using Ranger KMS? Will they all be copied over to Ranger KMS seamlessly during the Ranger KMS install?

My second question is about setting up Ranger KMS. I can see the policies in my Ranger KMS UI at 6080 being enforced. For example:

# after updating the ranger kms policy to include public permissions to create keys
>> sudo sudo -u hdfs hadoop key create testkeyfromcli1 -size 256
testkeyfromcli1 has been successfully created with options Options{cipher='AES/CTR/NoPadding', bitLength=256, description='null
KMSClientProvider[http://XXXXX.com:9292/kms/v1/] has been updated.
# after updating the policies to only allow keyadmin permission to create keys
>> sudo sudo -u hdfs hadoop key create testkeyfromcli2 -size 256
testkeyfromcli2 has not been created. org.apache.hadoop.security.authorize.AuthorizationException: User:hdfs [...] not allowed to [...] 'testkeyfromcli2'

But when I log into the Ranger KMS UI as keyadmin, I notice:
1) When I try to view the keys under my KMS repo, I see the error: Unauthenticated : Please check the permission in the policy for the user
2) When I try Test Connection I see: Connection Failed. Unable to connect repository with given config for hdpClusterName_kms.

Do you know why I can't connect? My KMS URL is: kms://http@XXXXXX.com:9292/kms. In my kms.log, when I try to view the keys in the repo, I do see:
Caused by: java.lang.IllegalArgumentException: Failed to specify server's Kerberos principal name
at org.apache.hadoop.security.SaslRpcClient.getServerPrincipal(SaslRpcClient.java:322)
at org.apache.hadoop.security.SaslRpcClient.createSaslClient(SaslRpcClient.java:231)
at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:159)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:555)
at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:370)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:724)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:720)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:720)
... 30 more

Thanks!
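For what it's worth, a hedged way to sanity-check the KMS endpoint from a client host on a Kerberized cluster; the keyadmin principal and the provider URI below are assumptions based on the setup described above:
# get a ticket for an account the Ranger KMS policies allow, then list keys
kinit keyadmin
hadoop key list -provider kms://http@XXXXXX.com:9292/kms
# confirm a valid ticket exists before retrying Test Connection in the Ranger UI
klist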
Labels:
Apache Ranger
02-10-2016
06:41 PM
Hi @Neeraj Sabharwal I would also be very interested in seeing the use case demo for this, thanks!
01-27-2016
07:55 PM
Is this issue resolved? I also tried to create a Kafka Ranger policy to prevent a specific user from creating or deleting topics, but it isn't being enforced. I do see the 200 response in Ranger Audits, so the Kafka plugin appears to be up.
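In case it helps reproduce this, a hedged way to test enforcement from the CLI; the denied user, ZooKeeper host, topic name, and binary path are placeholders based on an HDP-style install:
# attempt a topic create as the user the policy should deny, then check Ranger Audit for a Denied entry
sudo -u testuser /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create \
  --zookeeper ZK_HOST:2181 --topic ranger-denytest --partitions 1 --replication-factor 1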