
Ranger policies failed to refresh after implementing Kerberos

Super Collaborator

Hi guys,

Ranger fails to refresh policies after implementing Kerberos. I implemented Kerberos with a new local MIT KDC, using the Ambari automated setup. HDFS, Hive, and HBase work fine with the new authentication method, but there are errors when refreshing policies. Every service where the Ranger plugin is enabled gives me this error:

2017-03-29 11:24:52,657 ERROR client.RangerAdminRESTClient (RangerAdminRESTClient.java:getServicePoliciesIfUpdated(124)) - Error getting policies. secureMode=true, user=nn/hadoop1.locald@EXAMPLE.COM (auth:KERBEROS), response={"httpStatusCode":401,"statusCode":0}, serviceName=CLUSTER_hadoop
2017-03-29 11:24:52,657 ERROR util.PolicyRefresher (PolicyRefresher.java:loadPolicyfromPolicyAdmin(240)) - PolicyRefresher(serviceName=CLUSTER_hadoop): failed to refresh policies. Will continue to use last known version of policies (3)
java.lang.Exception: HTTP 401
        at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:126)
        at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:217)
        at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:185)
        at org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:158)

That's for HDFS; for other services the user is different (hive, etc.). I am using HDP 2.5 and Ambari 2.4.1.

These principals exist in Kerberos (klist):

hive/hadoop1.locald@EXAMPLE.COM
hive/hadoop2.locald@EXAMPLE.COM
hive/hadoop3.locald@EXAMPLE.COM
hive/hadoop4.locald@EXAMPLE.COM
infra-solr/hadoop1.locald@EXAMPLE.COM
jhs/hadoop2.locald@EXAMPLE.COM
jn/hadoop1.locald@EXAMPLE.COM
jn/hadoop2.locald@EXAMPLE.COM
jn/hadoop3.locald@EXAMPLE.COM
kadmin/admin@EXAMPLE.COM
kadmin/changepw@EXAMPLE.COM
kadmin/hadoop1.locald@EXAMPLE.COM
kafka/hadoop1.locald@EXAMPLE.COM
knox/hadoop1.locald@EXAMPLE.COM
krbtgt/EXAMPLE.COM@EXAMPLE.COM
livy/hadoop1.locald@EXAMPLE.COM
livy/hadoop2.locald@EXAMPLE.COM
livy/hadoop4.locald@EXAMPLE.COM
nm/hadoop1.locald@EXAMPLE.COM
nm/hadoop2.locald@EXAMPLE.COM
nm/hadoop3.locald@EXAMPLE.COM
nm/hadoop4.locald@EXAMPLE.COM
nn/hadoop1.locald@EXAMPLE.COM
nn/hadoop2.locald@EXAMPLE.COM


1 ACCEPTED SOLUTION

Explorer

We were getting the same error, and after troubleshooting for some time we found that the Ranger policymgr_external_url (in Ambari under Ranger -> Configs -> Advanced -> Ranger Settings -> External URL) was improperly set to the Ranger host's IP address. We changed that to the FQDN and restarted the affected services (e.g. HS2 for Hive, NN for HDFS, etc.), and the problem was resolved.

Give that a look and a shot if applicable.

View solution in original post

26 REPLIES


Edgar Daeds, I think most things look good in your cluster. We need to dig deeper to find the issue; please provide the following info:

1) The access log in Ranger Admin, where we will see entries for each policy download call; I expect we will see some error there.

2) Then try to do kinit using the hdfs keytab and perform the policy download call manually.

Super Collaborator

Thank you for your interest. I will post the access log in a few minutes. Could you please guide me on how to download policies manually? I tried curl -iv -u hdfs:hdfs -H "Content-Type: application/json" -X GET http://myhost:6080/service/public/api/policy/33 and it went OK.


The new policy download API call is used if it is a secure cluster; in the access log you can check which call is used.

It should be similar to /service/plugins/secure/policies/download/
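Putting the two suggestions together, a manual check might look like the sketch below. The keytab path, principal, Ranger host, and service name are assumptions taken from the log output above and must be adjusted for your cluster:

```shell
# Hypothetical values, based on the error log above -- adjust to your cluster.
KEYTAB=/etc/security/keytabs/nn.service.keytab
PRINCIPAL="nn/hadoop1.locald@EXAMPLE.COM"
RANGER_URL="http://rangerhost.locald:6080"   # should be the FQDN, not an IP
SERVICE="CLUSTER_hadoop"

# On a kerberized cluster the plugin calls the secure download endpoint.
DOWNLOAD_URL="$RANGER_URL/service/plugins/secure/policies/download/$SERVICE"
echo "Policy download URL: $DOWNLOAD_URL"

# The actual calls only make sense on a cluster node where the keytab exists:
if [ -f "$KEYTAB" ]; then
  # Authenticate as the plugin's service principal ...
  kinit -kt "$KEYTAB" "$PRINCIPAL"
  # ... then let curl send a SPNEGO token from the ticket cache.
  # HTTP 200 means the download works; 401 reproduces the plugin's error.
  curl -iv --negotiate -u : "$DOWNLOAD_URL"
fi
```

If the curl call returns 401 even with a fresh ticket, the problem is on the Ranger Admin side (SPNEGO setup or the URL the plugin uses), not in the plugin itself.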


See if the Ranger Admin process user has the right permissions to access the SPNEGO keytab.
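One way to check that is sketched below. The keytab path and the ranger process user are the usual Ambari defaults and may differ on your install:

```shell
# Assumed Ambari defaults -- adjust for your install.
SPNEGO_KEYTAB=/etc/security/keytabs/spnego.service.keytab
RANGER_USER=ranger

if [ -f "$SPNEGO_KEYTAB" ]; then
  ls -l "$SPNEGO_KEYTAB"            # owner, group, and mode
  klist -kt "$SPNEGO_KEYTAB"        # principals stored in the keytab
  # Can the ranger process user actually read it?
  sudo -u "$RANGER_USER" klist -kt "$SPNEGO_KEYTAB" \
    || echo "user $RANGER_USER cannot read the keytab -- fix ownership/mode"
else
  echo "keytab not found at $SPNEGO_KEYTAB"
fi
```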

Explorer

We were getting the same error, and after troubleshooting for some time we found that the Ranger policymgr_external_url (in Ambari under Ranger -> Configs -> Advanced -> Ranger Settings -> External URL) was improperly set to the Ranger host's IP address. We changed that to the FQDN and restarted the affected services (e.g. HS2 for Hive, NN for HDFS, etc.), and the problem was resolved.

Give that a look and a shot if applicable.
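A quick way to confirm which URL a plugin is actually using is sketched below. The config path and property name are the typical Ambari defaults for the HDFS plugin and may differ on your cluster:

```shell
# Sketch: verify the Ranger URL the HDFS plugin uses is an FQDN, not an IP.
# Path and property name are typical Ambari defaults -- adjust as needed.
CONF=/etc/hadoop/conf/ranger-hdfs-security.xml
PROP=ranger.plugin.hdfs.policy.rest.url

if [ -f "$CONF" ]; then
  # Print the property and the value line that follows it.
  grep -A1 "$PROP" "$CONF"
else
  echo "config not found at $CONF (not a plugin host?)"
fi
```

This matters because Kerberos service principals are bound to host names (HTTP/&lt;fqdn&gt;@REALM), so an IP address in the policy REST URL makes SPNEGO negotiation fail, which shows up as exactly the HTTP 401 in the log above.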


Yes, that is a possibility. After enabling Kerberos, the hostname needs to be used instead of the IP address.

Super Collaborator

@Darryl Stoflet @vperiasamy @Deepak Sharma

Thank you all! That solved my problem. I had the IP address instead of the FQDN in the External URL property.