Ranger HDFS plug in error - No common protection layer between client and server
Labels: Apache Ranger
Created ‎05-20-2016 10:44 AM
I have a kerberised cluster that uses AD. I have successfully installed Ranger and synced all the users/groups specified.
I'm now trying to configure the HDFS plugin using this guide, but it's unable to connect. The error in the Ranger UI is:
Connection Failed. Unable to connect repository with given config for Dagobah_hadoop
The xa_portal.log file has the following error:
2016-05-20 11:29:45,091 [timed-executor-pool-0] INFO org.apache.ranger.plugin.client.BaseClient (BaseClient.java:100) - Init Login: using username/password
2016-05-20 11:29:45,263 [timed-executor-pool-0] WARN org.apache.hadoop.ipc.Client$Connection$1 (Client.java:685) - Exception encountered while connecting to the server : javax.security.sasl.SaslException: No common protection layer between client and server
I am following this guide http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_Security_Guide/content/hdfs_plugin_kerber...
- Created user in AD called rangerrepouser.
- Created keytab for rangerrepouser with password 1234
- Changed "Ranger repository config password" to 1234
- Changed "Ranger service config user" to rangerrepouser@AD.EXAMPLE
- Left common.name.for.certificate as empty ""
- Left hadoop.rpc.protection as empty ""
In the HDFS policy on Ranger Admin the RPC Protection Type is blank.
Any ideas as to why I am seeing this error? Thanks.
Created ‎05-23-2016 08:00 AM
Simple solution: in the Ranger UI, in the HDFS repo configuration, the username was set to `rangerrepouser` when it should have been set to `rangerrepouser@AD.EXAMPLE`.
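As a quick sanity check for this class of mistake, a tiny helper (a sketch added here for illustration, not part of the original thread) can flag repo usernames that are missing the `@REALM` suffix:

```python
def has_realm(principal: str) -> bool:
    """Return True if a Kerberos principal carries an @REALM suffix,
    e.g. 'rangerrepouser@AD.EXAMPLE'."""
    name, sep, realm = principal.partition("@")
    return bool(name) and sep == "@" and bool(realm)

# The misconfigured value from this thread fails the check:
print(has_realm("rangerrepouser"))             # False
print(has_realm("rangerrepouser@AD.EXAMPLE"))  # True
```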
Created ‎05-20-2016 11:15 AM
What is the value of hadoop.rpc.protection in your cluster's core-site.xml? If the values differ, can you match it in Ranger and then try again? Thanks!
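One way to check the effective value is to read it straight out of core-site.xml. A minimal sketch (the sample XML below is illustrative, not taken from the thread):

```python
import xml.etree.ElementTree as ET

def get_hadoop_property(xml_text: str, key: str, default=None):
    """Look up a property value in Hadoop *-site.xml content."""
    root = ET.fromstring(xml_text)
    for prop in root.iter("property"):
        if prop.findtext("name") == key:
            return prop.findtext("value")
    return default

# Hypothetical core-site.xml fragment for demonstration:
sample_core_site = """
<configuration>
  <property>
    <name>hadoop.rpc.protection</name>
    <value>authentication</value>
  </property>
</configuration>
"""

print(get_hadoop_property(sample_core_site, "hadoop.rpc.protection"))  # authentication
```

If the key is absent (as it turned out to be in this thread), the lookup returns the default, which mirrors Hadoop falling back to its built-in default.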
Created ‎05-20-2016 01:17 PM
Thanks for the reply, @krajguru. There is no hadoop.rpc.protection parameter in core-site.xml. I added it to the custom configs and left it blank, exactly as it appears in the Advanced ranger-hdfs-plugin-properties file. The same error messages appear.
Should this value be set to something else? Thank you.
Created ‎05-20-2016 02:20 PM
@krajguru So I changed the hadoop.rpc.protection parameter to "Authentication" and that removed the error message from the xa_portal.log file.
But it's still unable to connect. Here's the log output now:
2016-05-20 15:20:42,373 [timed-executor-pool-0] INFO org.apache.ranger.plugin.client.BaseClient (BaseClient.java:100) - Init Login: using username/password
2016-05-20 15:20:42,587 [timed-executor-pool-0] ERROR apache.ranger.services.hdfs.client.HdfsResourceMgr (HdfsResourceMgr.java:48) - <== HdfsResourceMgr.testConnection Error: org.apache.ranger.plugin.client.HadoopException: listFilesInternal: Unable to get listing of files for directory /null] from Hadoop environment [Dagobah_hadoop].
2016-05-20 15:20:42,588 [timed-executor-pool-0] ERROR org.apache.ranger.services.hdfs.RangerServiceHdfs (RangerServiceHdfs.java:59) - <== RangerServiceHdfs.validateConfig Error: org.apache.ranger.plugin.client.HadoopException: listFilesInternal: Unable to get listing of files for directory /null] from Hadoop environment [Dagobah_hadoop].
2016-05-20 15:20:42,588 [timed-executor-pool-0] ERROR org.apache.ranger.biz.ServiceMgr$TimedCallable (ServiceMgr.java:434) - TimedCallable.call: Error: org.apache.ranger.plugin.client.HadoopException: listFilesInternal: Unable to get listing of files for directory /null] from Hadoop environment [Dagobah_hadoop].
2016-05-20 15:20:42,589 [http-bio-6080-exec-6] ERROR org.apache.ranger.biz.ServiceMgr (ServiceMgr.java:120) - ==> ServiceMgr.validateConfig Error: java.util.concurrent.ExecutionException: org.apache.ranger.plugin.client.HadoopException: listFilesInternal: Unable to get listing of files for directory /null] from Hadoop environment [Dagobah_hadoop].
Why is it looking for a /null directory?
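For reference, the core-site.xml entry that cleared the SASL error would look something like the fragment below. The valid values for hadoop.rpc.protection are authentication, integrity, and privacy, and the value configured in the Ranger repo must match what the cluster uses:

```xml
<property>
  <name>hadoop.rpc.protection</name>
  <value>authentication</value>
</property>
```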
Created ‎07-19-2016 12:49 PM
@Dale Bradman, were you able to solve the issue?
I am facing a similar issue.
Created ‎07-19-2016 12:57 PM
Also make sure the plug-in parameters in Ambari are correct, restart HDFS and Ranger, and then make sure the parameters in the Ranger UI are correct.
Are you using HDFS HA?
Created ‎07-19-2016 02:21 PM
Somehow Ranger is not able to pick up rangerrepouser@REALM in my case; it says the user does not exist.
I am able to do an ldapsearch for this user and kinit as the user.
My HDFS is HA, so I am using the NameNode URL hdfs://mycluster.
Created ‎07-19-2016 02:35 PM
Is rangerrepouser listed in Ranger UI?
For an HA configuration to work, you need to add the properties below to the repo config (i.e., as additional entries in the advanced section). They can be copied from hdfs-site.xml; note that the rpc-address and failover-proxy-provider property names are qualified by the nameservice:

dfs.nameservices = <ha_name>
dfs.ha.namenodes.<ha_name> = <nn1,nn2>
dfs.namenode.rpc-address.<ha_name>.<nn1> = <nn1_host:8020>
dfs.namenode.rpc-address.<ha_name>.<nn2> = <nn2_host:8020>
dfs.client.failover.proxy.provider.<ha_name> = org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
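As a concrete illustration of filling in that template (the nameservice name and hostnames here are hypothetical, chosen to match the hdfs://mycluster URL mentioned above), the repo-config entries might look like:

```
dfs.nameservices = mycluster
dfs.ha.namenodes.mycluster = nn1,nn2
dfs.namenode.rpc-address.mycluster.nn1 = nn1-host.example.com:8020
dfs.namenode.rpc-address.mycluster.nn2 = nn2-host.example.com:8020
dfs.client.failover.proxy.provider.mycluster = org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
```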
Created ‎07-20-2016 08:52 AM
Yes, adding these properties solved the issue for the HDFS plugin.
Now I will check the rest of the plugins.
