
“No common protection layer between client and server” while trying to communicate with kerberized HDFS

Explorer

I'm trying to communicate programmatically with a kerberized Hadoop cluster (CDH 5.3 / HDFS 2.5.0).

I have a valid Kerberos token on the client side, but I'm getting the error below: "No common protection layer between client and server".

 

What does this error mean and are there any ways to fix or work around it?

 

Is this something related to HDFS-5688? The ticket seems to imply that the property "hadoop.rpc.protection" must be set, presumably to "authentication" (also per e.g. this).

 

Would this need to be set on all servers in the cluster and the cluster then bounced? I don't have easy access to the cluster, so I need to understand whether 'hadoop.rpc.protection' is the actual cause. It seems that 'authentication' should already be the default value, at least according to the core-default.xml documentation.
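
For context, the failing call is essentially a plain FileSystem.exists() check against the NameNode. A stripped-down sketch of that shape is below; the fs.defaultFS, principal, and path values are placeholders, not the real ones:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class HdfsConnectivityCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder values; the real code points at the actual NameNode and principal.
        conf.set("fs.defaultFS", "hdfs://server2.acme.net:8020");
        conf.set("hadoop.security.authentication", "kerberos");
        conf.set("dfs.namenode.kerberos.principal", "hdfs/_HOST@XXX.ACME.NET");
        // hadoop.rpc.protection is the property HDFS-5688 talks about; I have not set it so far.
        UserGroupInformation.setConfiguration(conf);
        // The Kerberos ticket is obtained externally (kinit), so no keytab login here.
        FileSystem fs = FileSystem.get(conf);
        System.out.println(fs.exists(new Path("/tmp")));   // this is the call that fails with the trace below
    }
}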

 

java.io.IOException: Failed on local exception: java.io.IOException: Couldn't setup connection for principal1/server1.acme.net@xxx.acme.net to server2.acme.net/10.XX.XXX.XXX:8020; Host Details : local host is: "some-host.acme.net/168.XX.XXX.XX"; destination host is: "server2.acme.net":8020;
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
    at org.apache.hadoop.ipc.Client.call(Client.java:1415)
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy24.getFileInfo(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy24.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:707)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1785)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1068)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1398)
    ... 11 more
Caused by: java.io.IOException: Couldn't setup connection for principal1/server1.acme.net@xxx.acme.net to server2.acme.net/10.XX.XXX.XXX:8020;
    at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:671)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:642)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:725)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
    at org.apache.hadoop.ipc.Client.call(Client.java:1382)
    ... 31 more
Caused by: javax.security.sasl.SaslException: No common protection layer between client and server
    at com.sun.security.sasl.gsskerb.GssKrb5Client.doFinalHandshake(GssKrb5Client.java:251)
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:186)
    at org.apache.hadoop.security.SaslRpcClient.saslEvaluateToken(SaslRpcClient.java:483)
    at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:427)
    at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:552)
    at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:367)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:717)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:713)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
    ... 34 more

 

2 ACCEPTED SOLUTIONS

Explorer

Thanks, Harsh, very helpful.

 

I've been poking around on an edge node, so I do have access to hdfs-site.xml and core-site.xml. We may have to munge these files before we can use them, as they contain some cluster-internal values, such as the host names in fs.defaultFS; we'll have to use different host names to be able to get in from outside the cluster...

 

Since we deal with multiple clusters organized by stage (dev, prod, etc.), we'd have to maintain multiple pairs of core-site.xml and hdfs-site.xml files and load them dynamically at runtime via the Configuration.addResource() method...
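
Roughly what I have in mind for that dynamic loading is sketched below; the directory layout, the environment switch, and the external host name are purely illustrative, not our actual setup:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class ClusterConfLoader {
    // Illustrative sketch only -- the layout and the environment switch are assumptions, not our real code.
    public static Configuration forEnv(String env) {   // e.g. "dev", "prod"
        Configuration conf = new Configuration();
        conf.addResource(new Path("/etc/myapp/" + env + "/core-site.xml"));
        conf.addResource(new Path("/etc/myapp/" + env + "/hdfs-site.xml"));
        // Override the cluster-internal NameNode host with one that is reachable from outside.
        conf.set("fs.defaultFS", "hdfs://namenode-external.acme.net:8020");
        return conf;
    }
}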

 

 


Explorer

Because of our requirements (being able to target a different cluster depending on the deployment, and the HDFS config files potentially containing cluster-internal host names), we're going with the approach of maintaining the minimal set of Configuration properties required to make Kerberos work on the client side. These are, again (a short sketch follows the list):

 

* dfs.namenode.kerberos.principal

* hadoop.rpc.protection
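
In code form, the idea is roughly this (the principal and protection values are placeholders; the real ones have to match what the cluster actually enforces):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosClientConf {
    // Sketch only -- values are placeholders; everything else (fs.defaultFS etc.) comes from our normal app config.
    public static Configuration minimal() {
        Configuration conf = new Configuration();
        conf.set("dfs.namenode.kerberos.principal", "hdfs/_HOST@XXX.ACME.NET");
        conf.set("hadoop.rpc.protection", "privacy");   // must match the server-side setting
        UserGroupInformation.setConfiguration(conf);    // assumes kerberos auth mode is already enabled in the base config
        return conf;
    }
}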

 

Having said that, Harsh's comments are all valid and relevant.


13 REPLIES

New Contributor

Ohhh! Thanks, I needed this information.

 

Thanks, really.

New Contributor

Hi Harsh,

 

I am facing the same issue of “No common protection layer between client and server”. My server has hadoop.rpc.protection set to privacy, which I can't change to authentication.

I also tried the code below in my program, but I'm still facing the same issue:

Configuration config = new Configuration();

config.set("hadoop.rpc.protection", "privacy");

 

I have all the cluster XMLs in my classpath.


Could you please help me solve this issue?

Explorer

Have you tried setting dfs.namenode.kerberos.principal?
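
For example, something like this (the principal here is only an example; yours will be whatever principal the NameNode actually runs as):

config.set("dfs.namenode.kerberos.principal", "hdfs/_HOST@YOUR.REALM.COM");   // example value only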

Explorer

A while back, I made a comment to the effect of "We may have to munge these files before we can use them" 🙂 For those of you giggling about the use of the word munge, it's widely used on the East Coast to mean "edit" or "overwrite". Typically files are not very edible, and XML would not taste good 🙂 🙂

 

Cheers!