
Ranger HDFS plug in error - No common protection layer between client and server

Expert Contributor

I have a Kerberized cluster that uses AD. I have successfully installed Ranger and synced all of the specified users/groups.

I'm now trying to configure the HDFS plugin using this guide, but it's unable to connect. The error in the Ranger UI is:

Connection Failed. Unable to connect repository with given config for Dagobah_hadoop

The xa_portal.log file has the following error:

2016-05-20 11:29:45,091 [timed-executor-pool-0] INFO  org.apache.ranger.plugin.client.BaseClient (BaseClient.java:100) - Init Login: using username/password
2016-05-20 11:29:45,263 [timed-executor-pool-0] WARN  org.apache.hadoop.ipc.Client$Connection$1 (Client.java:685) - Exception encountered while connecting to the server :
javax.security.sasl.SaslException: No common protection layer between client and server

I am following this guide http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_Security_Guide/content/hdfs_plugin_kerber...

  1. Created a user in AD called rangerrepouser.
  2. Created a keytab for rangerrepouser with password 1234.
  3. Changed "Ranger repository config password" to 1234.
  4. Changed "Ranger service config user" to rangerrepouser@AD.EXAMPLE.
  5. Left common.name.for.certificate empty ("").
  6. Left hadoop.rpc.protection empty ("").
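As a sanity check on steps 1–4, the keytab and principal can be verified from the Ranger Admin host before touching the UI. A minimal sketch (the keytab path and the AD.EXAMPLE realm are assumptions; adjust for your environment):

```shell
# Assumed path and principal -- substitute your own values.
KEYTAB=/etc/security/keytabs/rangerrepouser.keytab
PRINCIPAL=rangerrepouser@AD.EXAMPLE

# List the principals stored in the keytab; the entry should match $PRINCIPAL exactly.
klist -kt "$KEYTAB"

# Confirm the keytab can actually authenticate against AD.
kinit -kt "$KEYTAB" "$PRINCIPAL" && klist || echo "kinit failed or is not installed on this host"
```

If `kinit` fails here, the problem is with the keytab or AD account rather than with Ranger.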

In the HDFS policy on Ranger Admin the RPC Protection Type is blank.

Any ideas as to why I am seeing this error? Thanks.

1 ACCEPTED SOLUTION

Expert Contributor

Simple solution:

On the Ranger UI, in the HDFS repo configuration, the username was set to `rangerrepouser` when it should have been set to `rangerrepouser@AD.EXAMPLE`.


9 REPLIES

Expert Contributor

@Dale Bradman

What is the value of hadoop.rpc.protection in your cluster's core-site.xml? If it differs from what Ranger has, can you match the Ranger setting to it and try again? Thanks!
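For reference, this property is set in core-site.xml and (in Hadoop) takes one of three lowercase values: authentication, integrity, or privacy. An illustrative snippet:

```xml
<!-- core-site.xml: the value shown is only an example -->
<property>
  <name>hadoop.rpc.protection</name>
  <value>authentication</value>
</property>
```

Whatever the cluster uses, the Ranger repo config must use the same value, or SASL negotiation fails with "No common protection layer between client and server".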

Expert Contributor

Thanks for the reply @krajguru. There is no hadoop.rpc.protection parameter in core-site.xml. I added it to the custom configs and left it blank, exactly as it appears in the Advanced ranger-hdfs-plugin-properties file. The same error messages appear.

Should this value be set to anything else? Thank you.

Expert Contributor

@krajguru So I changed the hadoop.rpc.protection parameter to "Authentication" and that removed the error message from the xa_portal.log file.

But it's still unable to connect. Here's the log output now:

2016-05-20 15:20:42,373 [timed-executor-pool-0] INFO  org.apache.ranger.plugin.client.BaseClient (BaseClient.java:100) - Init Login: using username/password
2016-05-20 15:20:42,587 [timed-executor-pool-0] ERROR apache.ranger.services.hdfs.client.HdfsResourceMgr (HdfsResourceMgr.java:48) - <== HdfsResourceMgr.testConnection Error: org.apache.ranger.plugin.client.HadoopException: listFilesInternal: Unable to get listing of files for directory /null] from Hadoop environment [Dagobah_hadoop].
2016-05-20 15:20:42,588 [timed-executor-pool-0] ERROR org.apache.ranger.services.hdfs.RangerServiceHdfs (RangerServiceHdfs.java:59) - <== RangerServiceHdfs.validateConfig Error:org.apache.ranger.plugin.client.HadoopException: listFilesInternal: Unable to get listing of files for directory /null] from Hadoop environment [Dagobah_hadoop].
2016-05-20 15:20:42,588 [timed-executor-pool-0] ERROR org.apache.ranger.biz.ServiceMgr$TimedCallable (ServiceMgr.java:434) - TimedCallable.call: Error:org.apache.ranger.plugin.client.HadoopException: listFilesInternal: Unable to get listing of files for directory /null] from Hadoop environment [Dagobah_hadoop].
2016-05-20 15:20:42,589 [http-bio-6080-exec-6] ERROR org.apache.ranger.biz.ServiceMgr (ServiceMgr.java:120) - ==> ServiceMgr.validateConfig Error:java.util.concurrent.ExecutionException: org.apache.ranger.plugin.client.HadoopException: listFilesInternal: Unable to get listing of files for directory /null] from Hadoop environment [Dagobah_hadoop].

Why is it looking for a /null directory?


Super Collaborator

@Dale Bradman, were you able to solve the issue?

I am facing a similar issue.

Expert Contributor

Also make sure the parameters in Ambari are correct for the plug-ins, restart HDFS & Ranger, and then make sure the parameters in the Ranger UI are correct.

Are you using HDFS HA?

Super Collaborator

Somehow Ranger is not able to pick up rangerrepouser@REALM in my case; it says the user does not exist.

I am able to do an ldapsearch for this user and kinit with it.

My HDFS is HA, so I am using the NameNode URL hdfs://mycluster

Expert Contributor

Is rangerrepouser listed in the Ranger UI?

For the HA configuration to work, you need to add the properties below to the repo config (i.e. additional entries in the advanced section). They can be copied from hdfs-site.xml.

dfs.nameservices = <ha_name>
dfs.ha.namenodes.<ha_name> = <nn1,nn2>
dfs.namenode.rpc-address.<ha_name>.<nn1> = <nn1_host:8020>
dfs.namenode.rpc-address.<ha_name>.<nn2> = <nn2_host:8020>
dfs.client.failover.proxy.provider.<ha_name> = org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
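Using the hdfs://mycluster nameservice mentioned earlier in the thread, a filled-in version would look roughly like this (the hostnames and NameNode IDs are placeholders):

```
dfs.nameservices = mycluster
dfs.ha.namenodes.mycluster = nn1,nn2
dfs.namenode.rpc-address.mycluster.nn1 = nn1-host.example.com:8020
dfs.namenode.rpc-address.mycluster.nn2 = nn2-host.example.com:8020
dfs.client.failover.proxy.provider.mycluster = org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
```

Note that the rpc-address keys are scoped by both the nameservice and the NameNode ID, and the failover proxy provider key is scoped by the nameservice only.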

Super Collaborator

Yes, adding these properties solved the issue for the HDFS plugin.

Now I will check the rest of the plugins.