<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: “No common protection layer between client and server” while trying to communicate with kerberiz in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/41868#M31257</link>
    <description>Thread from the Cloudera Community Archives of Support Questions on resolving the SaslException “No common protection layer between client and server” when communicating with a kerberized CDH/HDFS cluster.</description>
    <pubDate>Fri, 10 Jun 2016 09:07:56 GMT</pubDate>
    <dc:creator>Harsh J</dc:creator>
    <dc:date>2016-06-10T09:07:56Z</dc:date>
    <item>
      <title>“No common protection layer between client and server” while trying to communicate with kerberized H</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/41814#M31250</link>
      <description>&lt;P&gt;I'm trying to communicate programmatically to a Hadoop cluster which is kerberized (CDH 5.3/HDFS 2.5.0).&lt;/P&gt;&lt;P&gt;I have a valid Kerberos token on the client side. But I'm getting an error as below, "No common protection layer between client and server".&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;What does this error mean and are there any ways to fix or work around it?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Is this something related to &lt;A href="https://issues.apache.org/jira/browse/HDFS-5688" target="_blank" rel="nofollow"&gt;HDFS-5688&lt;/A&gt;? The ticket seems to imply that the property "hadoop.rpc.protection" must be set, presumably to "authentication" (also per e.g. &lt;A href="https://datameer.zendesk.com/hc/en-us/articles/204262060-Errors-when-Executing-Job-Caused-by-javax-security-sasl-SaslException-No-common-protection-layer-between-client-and-server" target="_blank" rel="nofollow"&gt;this&lt;/A&gt;).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Would this need to be set on all servers in the cluster and then the cluster bounced? I don't have easy access to the cluster so I need to understand whether 'hadoop.rpc.protection' is the actual cause. It seems that 'authentication' should be the value used by default, at least according to the core-default.xml documentation.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;java.io.IOException: Failed on local exception: java.io.IOException: Couldn't setup connection for principal1/server1.acme.net@xxx.acme.net to server2.acme.net/10.XX.XXX.XXX:8020; Host Details : local host is: “some-host.acme.net/168.XX.XXX.XX”; destination host is: “server2.acme.net”:8020;&lt;/P&gt;&lt;PRE&gt;    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
    at org.apache.hadoop.ipc.Client.call(Client.java:1415)
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy24.getFileInfo(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy24.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:707)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1785)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1068)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1398)
    ... 11 more&lt;/PRE&gt;&lt;P&gt;Caused by: java.io.IOException: Couldn't setup connection for principal1/server1.acme.net@xxx.acme.net to server2.acme.net/10.XX.XXX.XXX:8020;&lt;/P&gt;&lt;PRE&gt;    at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:671)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:642)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:725)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
    at org.apache.hadoop.ipc.Client.call(Client.java:1382)
    ... 31 more&lt;/PRE&gt;&lt;P&gt;Caused by: javax.security.sasl.SaslException: No common protection layer between client and server&lt;/P&gt;&lt;PRE&gt;    at com.sun.security.sasl.gsskerb.GssKrb5Client.doFinalHandshake(GssKrb5Client.java:251)
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:186)
    at org.apache.hadoop.security.SaslRpcClient.saslEvaluateToken(SaslRpcClient.java:483)
    at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:427)
    at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:552)
    at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:367)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:717)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:713)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
    ... 34 more&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 10:24:13 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/41814#M31250</guid>
      <dc:creator>dgoldenberg</dc:creator>
      <dc:date>2022-09-16T10:24:13Z</dc:date>
    </item>
    <item>
      <title>Re: “No common protection layer between client and server” while trying to communicate with kerberiz</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/41816#M31251</link>
      <description>You certainly do need to set hadoop.rpc.protection to the exact value the&lt;BR /&gt;cluster expects. While "authentication" is the default, your cluster&lt;BR /&gt;services may instead be using/enforcing "privacy" or "integrity".&lt;BR /&gt;&lt;BR /&gt;If your cluster is run via CM, I highly recommend downloading a client&lt;BR /&gt;configuration zip from its services, looking over all the properties&lt;BR /&gt;present in it, and applying the same in your project (the simplest way is to&lt;BR /&gt;place the *.xml files into your src/main/resources, if you use Maven, but&lt;BR /&gt;you can also apply them programmatically).&lt;BR /&gt;&lt;BR /&gt;That said, from 5.1.0 onwards the clients are designed to auto-negotiate&lt;BR /&gt;the SASL QOP properties with the server, so you should never have to specify&lt;BR /&gt;hadoop.rpc.protection exactly. This feature, combined with the error you&lt;BR /&gt;face, leads me to believe that your hadoop-client dependency&lt;BR /&gt;libraries may be much older than 5.1.0.&lt;BR /&gt;</description>
      <pubDate>Thu, 09 Jun 2016 00:33:42 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/41816#M31251</guid>
      <dc:creator>Harsh J</dc:creator>
      <dc:date>2016-06-09T00:33:42Z</dc:date>
    </item>
    <item>
      <title>Re: “No common protection layer between client and server” while trying to communicate with kerberiz</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/41823#M31252</link>
      <description>&lt;P&gt;Hi Harsh,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;My Hadoop dependencies are hadoop-common and hadoop-hdfs, version 2.5.0, since we're running CDH 5.3. Does that sound like the right version?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;&amp;lt;dependency&amp;gt;
  &amp;lt;groupId&amp;gt;org.apache.hadoop&amp;lt;/groupId&amp;gt;
  &amp;lt;artifactId&amp;gt;hadoop-common&amp;lt;/artifactId&amp;gt;
  &amp;lt;version&amp;gt;2.5.0&amp;lt;/version&amp;gt;
&amp;lt;/dependency&amp;gt;
&amp;lt;dependency&amp;gt;
  &amp;lt;groupId&amp;gt;org.apache.hadoop&amp;lt;/groupId&amp;gt;
  &amp;lt;artifactId&amp;gt;hadoop-hdfs&amp;lt;/artifactId&amp;gt;
  &amp;lt;version&amp;gt;2.5.0&amp;lt;/version&amp;gt;
&amp;lt;/dependency&amp;gt;&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks.&lt;/P&gt;&lt;P&gt;- Dmitry&lt;/P&gt;</description>
      <pubDate>Thu, 09 Jun 2016 03:10:11 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/41823#M31252</guid>
      <dc:creator>dgoldenberg</dc:creator>
      <dc:date>2016-06-09T03:10:11Z</dc:date>
    </item>
    <item>
      <title>Re: “No common protection layer between client and server” while trying to communicate with kerberiz</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/41825#M31253</link>
      <description>That is an Apache Hadoop upstream version. Please add Cloudera's repository:&lt;BR /&gt;&lt;BR /&gt;&lt;PRE&gt;&amp;lt;repository&amp;gt;
  &amp;lt;id&amp;gt;cloudera&amp;lt;/id&amp;gt;
  &amp;lt;url&amp;gt;https://repository.cloudera.com/artifactory/cloudera-repos/&amp;lt;/url&amp;gt;
&amp;lt;/repository&amp;gt;&lt;/PRE&gt;And the right version of the dependency (generally the hadoop-client wrapper artifact should be used, not specific ones such as common/hdfs/etc.):&lt;BR /&gt;&lt;BR /&gt;&lt;PRE&gt;&amp;lt;dependency&amp;gt;
  &amp;lt;groupId&amp;gt;org.apache.hadoop&amp;lt;/groupId&amp;gt;
  &amp;lt;artifactId&amp;gt;hadoop-client&amp;lt;/artifactId&amp;gt;
  &amp;lt;version&amp;gt;2.5.0-cdh5.3.10&amp;lt;/version&amp;gt;
&amp;lt;/dependency&amp;gt;&lt;/PRE&gt;That said, did you check what protection mode the server configuration&lt;BR /&gt;is expecting?&lt;BR /&gt;</description>
      <pubDate>Thu, 09 Jun 2016 04:25:42 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/41825#M31253</guid>
      <dc:creator>Harsh J</dc:creator>
      <dc:date>2016-06-09T04:25:42Z</dc:date>
    </item>
    <item>
      <title>Re: “No common protection layer between client and server” while trying to communicate with kerberiz</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/41844#M31254</link>
      <description>&lt;P&gt;Harsh,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for the hadoop-client suggestion; I've changed the pom file. That did not make any difference, however, as far as the issue is concerned.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As for downloading a client configuration zip, is that something I could do via Hue? I do not have access to the main SCM interface. Is there any other means of retrieving it?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Per your comment "&lt;SPAN&gt;You certainly do need to set hadoop.rpc.protection to the exact value the &lt;/SPAN&gt;&lt;SPAN&gt;cluster expects", I've tried the other values. Both "authentication" and "integrity" did not make a difference; I was still getting the error&amp;nbsp;“No common protection layer between client and server”.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;However, setting "hadoop.rpc.protection" to "privacy" caused a different type of error (see below). Any recommendations at this point? 
&amp;nbsp;Thanks.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Exception in thread "main" org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1775)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1402)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4221)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:881)&lt;BR /&gt;at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getFileInfo(AuthorizationProviderProxyClientProtocol.java:526)&lt;BR /&gt;at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:822)&lt;BR /&gt;at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)&lt;BR /&gt;at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)&lt;BR /&gt;at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)&lt;BR /&gt;at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)&lt;BR /&gt;at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)&lt;BR /&gt;at java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at javax.security.auth.Subject.doAs(Subject.java:415)&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)&lt;BR /&gt;at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)&lt;/P&gt;&lt;P&gt;at org.apache.hadoop.ipc.Client.call(Client.java:1405)&lt;BR /&gt;at 
org.apache.hadoop.ipc.Client.call(Client.java:1364)&lt;BR /&gt;at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)&lt;BR /&gt;at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:744)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)&lt;BR /&gt;at java.lang.reflect.Method.invoke(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)&lt;BR /&gt;at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)&lt;BR /&gt;at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1912)&lt;BR /&gt;at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1089)&lt;BR /&gt;at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1085)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)&lt;BR /&gt;at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1085)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)&lt;/P&gt;</description>
      <pubDate>Thu, 09 Jun 2016 13:25:24 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/41844#M31254</guid>
      <dc:creator>dgoldenberg</dc:creator>
      <dc:date>2016-06-09T13:25:24Z</dc:date>
    </item>
    <item>
      <title>Re: “No common protection layer between client and server” while trying to communicate with kerberiz</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/41845#M31255</link>
      <description>This is good; it looks like your server does use and expect privacy. The&lt;BR /&gt;new error is because your cluster has HA HDFS but you are passing only a&lt;BR /&gt;single hostname for the NN, which is currently not the active one.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;You will need to use the entire HA config set, or for the moment pass in&lt;BR /&gt;the other NN hostname.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;As to client configs, you can perhaps request your admin to generate a zip&lt;BR /&gt;via the CM -&amp;gt; Cluster -&amp;gt; Actions -&amp;gt; Client Configuration URLs option visible&lt;BR /&gt;to them. You'll have an easier time developing apps once you have all the&lt;BR /&gt;required properties set, which is what the client configuration download in&lt;BR /&gt;CM is designed for.&lt;BR /&gt;</description>
      <pubDate>Thu, 09 Jun 2016 14:31:22 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/41845#M31255</guid>
      <dc:creator>Harsh J</dc:creator>
      <dc:date>2016-06-09T14:31:22Z</dc:date>
    </item>
    <item>
      <title>Re: “No common protection layer between client and server” while trying to communicate with kerberiz</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/41861#M31256</link>
      <description>&lt;P&gt;Harsh,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;There are 3 host names at play: A, B, and C. Things actually started working when I set fs.defaultFS to one of these (B); originally I was using A. I'm told, however, that all 3 are supposed to be 'active'.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&amp;gt;&amp;gt; you are passing only a&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;single hostname for the NN&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Per this comment, should I be passing in all 3 hostnames? The doc states that fs.defaultFS is "The name of the default file system," so a) should all 3 names be passed, and b) if so, how?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Thanks for your help.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 10 Jun 2016 01:56:09 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/41861#M31256</guid>
      <dc:creator>dgoldenberg</dc:creator>
      <dc:date>2016-06-10T01:56:09Z</dc:date>
    </item>
    <item>
      <title>Re: “No common protection layer between client and server” while trying to communicate with kerberiz</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/41868#M31257</link>
      <description>&lt;P&gt;HDFS is currently thoroughly tested only with two NameNodes, so while you can technically run three NNs, not everything will behave as intended. There is ongoing work to support more than two NameNodes in a future version of HDFS.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The HDFS HA architecture is also active-standby based, so having two NNs active at once is not possible, at least by HDFS HA design. If you're using CDH, then this certainly isn't available, so I am unsure what they mean by three active NameNodes.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As to the HA configuration, it involves a few properties that reference one another. Here are example core-site.xml and hdfs-site.xml properties relevant to the HA config, taken from one such cluster. You can adapt them to your hostnames, but once again I'd recommend obtaining a client configuration zip from your administrator, which is easier than hand-setting each relevant property. If you have access to some form of command/gateway/edge host, you can also usually find such config files under its /etc/hadoop/conf/ directory:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;core-site.xml&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;&amp;lt;configuration&amp;gt;
  &amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;fs.defaultFS&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;hdfs://ha-nameservice-name&amp;lt;/value&amp;gt;
  &amp;lt;/property&amp;gt;&lt;BR /&gt;  …
&amp;lt;/configuration&amp;gt;&lt;/PRE&gt;&lt;P&gt;&lt;STRONG&gt;hdfs-site.xml&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;&amp;lt;configuration&amp;gt;
  &amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.nameservices&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;ha-nameservice-name&amp;lt;/value&amp;gt;
  &amp;lt;/property&amp;gt;
  &amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.client.failover.proxy.provider.ha-nameservice-name&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider&amp;lt;/value&amp;gt;
  &amp;lt;/property&amp;gt;
  &amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.ha.automatic-failover.enabled.ha-nameservice-name&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
  &amp;lt;/property&amp;gt;
  &amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;ha.zookeeper.quorum&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;ZKHOST:2181&amp;lt;/value&amp;gt;
  &amp;lt;/property&amp;gt;
  &amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.ha.namenodes.ha-nameservice-name&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;namenode10,namenode142&amp;lt;/value&amp;gt;
  &amp;lt;/property&amp;gt;
  &amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.namenode.rpc-address.ha-nameservice-name.namenode10&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;NN1HOST:8020&amp;lt;/value&amp;gt;
  &amp;lt;/property&amp;gt;
  &amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.namenode.servicerpc-address.ha-nameservice-name.namenode10&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;NN1HOST:8022&amp;lt;/value&amp;gt;
  &amp;lt;/property&amp;gt;
  &amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.namenode.http-address.ha-nameservice-name.namenode10&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;NN1HOST:20101&amp;lt;/value&amp;gt;
  &amp;lt;/property&amp;gt;
  &amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.namenode.https-address.ha-nameservice-name.namenode10&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;NN1HOST:20102&amp;lt;/value&amp;gt;
  &amp;lt;/property&amp;gt;
  &amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.namenode.rpc-address.ha-nameservice-name.namenode142&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;NN2HOST:8020&amp;lt;/value&amp;gt;
  &amp;lt;/property&amp;gt;
  &amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.namenode.servicerpc-address.ha-nameservice-name.namenode142&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;NN2HOST:8022&amp;lt;/value&amp;gt;
  &amp;lt;/property&amp;gt;
  &amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.namenode.http-address.ha-nameservice-name.namenode142&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;NN2HOST:20101&amp;lt;/value&amp;gt;
  &amp;lt;/property&amp;gt;
  &amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.namenode.https-address.ha-nameservice-name.namenode142&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;NN2HOST:20102&amp;lt;/value&amp;gt;
  &amp;lt;/property&amp;gt;&lt;BR /&gt;  …
&amp;lt;/configuration&amp;gt;&lt;/PRE&gt;&lt;P&gt;With this configuration in place, all HDFS URIs must use the FS URI &lt;STRONG&gt;hdfs://ha-nameservice-name&lt;/STRONG&gt;. Ideally you want to use the same nameservice name your cluster uses, so remote services can reuse it too, which is another reason why grabbing an actual cluster client configuration set is important.&lt;/P&gt;</description>
      <pubDate>Fri, 10 Jun 2016 09:07:56 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/41868#M31257</guid>
      <dc:creator>Harsh J</dc:creator>
      <dc:date>2016-06-10T09:07:56Z</dc:date>
    </item>
    <item>
      <title>Re: “No common protection layer between client and server” while trying to communicate with kerberiz</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/41878#M31258</link>
      <description>&lt;P&gt;Thanks, Harsh, very helpful.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I've been poking around on an edge node, so I do have access to hdfs-site.xml and core-site.xml. We may have to munge these files before we can use them, as they contain some values, such as the host names for fs.defaultFS, which are cluster internal; we'll have to use different host names to be able to get in from outside the cluster...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Since we deal with multiple clusters organized by stage (dev, prod, etc.), we'd have to maintain multiple pairs of core-site.xml and hdfs-site.xml files, and load them dynamically at runtime via the Configuration.addResource() method...&lt;/P&gt;</description>
      <pubDate>Fri, 10 Jun 2016 13:02:13 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/41878#M31258</guid>
      <dc:creator>dgoldenberg</dc:creator>
      <dc:date>2016-06-10T13:02:13Z</dc:date>
    </item>
    <item>
      <title>Re: “No common protection layer between client and server” while trying to communicate with kerberiz</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/42027#M31259</link>
      <description>&lt;P&gt;Because of our requirements (we need to target a different cluster per deployment, and the HDFS config files may contain cluster-internal host names), we're going with the approach of maintaining the minimal set of Configuration properties required to make Kerberos work on the client side. These are, again:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;* dfs.namenode.kerberos.principal&lt;/P&gt;&lt;P&gt;* hadoop.rpc.protection&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Having said that, Harsh's comments are all valid and relevant.&lt;/P&gt;</description>
      <pubDate>Thu, 16 Jun 2016 12:47:01 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/42027#M31259</guid>
      <dc:creator>dgoldenberg</dc:creator>
      <dc:date>2016-06-16T12:47:01Z</dc:date>
    </item>
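The minimal client-side setup described above can be sketched as a configuration fragment in the same style as the core-site.xml example earlier in the thread. The principal below is a placeholder, assuming the common hdfs/_HOST pattern; adjust the realm and the protection level to match your cluster:

```xml
<configuration>
  <!-- Must match the server-side QOP setting: authentication, integrity, or privacy -->
  <property>
    <name>hadoop.rpc.protection</name>
    <value>privacy</value>
  </property>
  <!-- NameNode principal; _HOST is substituted with the NameNode's hostname at runtime -->
  <property>
    <name>dfs.namenode.kerberos.principal</name>
    <value>hdfs/_HOST@EXAMPLE.COM</value>
  </property>
</configuration>
```

Equivalently, both values can be set programmatically on a Hadoop Configuration object (conf.set(...)) before the FileSystem is created, which avoids shipping per-cluster XML files.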
    <item>
      <title>Re: “No common protection layer between client and server” while trying to communicate with kerberiz</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/55008#M31260</link>
      <description>&lt;P&gt;Hi Harsh,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am facing the same issue of “No common protection layer between client and server”. My server has hadoop.rpc.protection set to privacy, which I can't change to authentication.&lt;BR /&gt;&lt;BR /&gt;I also tried the code below in my program:&lt;/P&gt;&lt;P&gt;Configuration config = new Configuration();&lt;/P&gt;&lt;P&gt;config.set("hadoop.rpc.protection", "privacy");&lt;/P&gt;&lt;P&gt;but I am still facing the same issue.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have all the cluster XMLs in my classpath.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Could you please help me solve this issue?&lt;/P&gt;</description>
      <pubDate>Wed, 24 May 2017 14:07:20 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/55008#M31260</guid>
      <dc:creator>samrat1</dc:creator>
      <dc:date>2017-05-24T14:07:20Z</dc:date>
    </item>
    <item>
      <title>Re: “No common protection layer between client and server” while trying to communicate with kerberiz</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/65855#M31261</link>
      <description>&lt;P&gt;Have you tried setting&amp;nbsp;&lt;SPAN&gt;dfs.namenode.kerberos.principal?&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 28 Mar 2018 12:01:17 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/65855#M31261</guid>
      <dc:creator>dgoldenberg</dc:creator>
      <dc:date>2018-03-28T12:01:17Z</dc:date>
    </item>
    <item>
      <title>Re: “No common protection layer between client and server” while trying to communicate with kerberiz</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/65856#M31262</link>
      <description>&lt;P&gt;A while back, I made a comment to the effect of &lt;SPAN&gt;"We may have to munge these files before we can use them" &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt; For those of you giggling about the use of the word munge: it's widely used on the East Coast to mean "edit" or "overwrite".&amp;nbsp; Typically files are not very edible, and XML would not taste good &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt; &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Cheers!&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 28 Mar 2018 12:03:44 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/65856#M31262</guid>
      <dc:creator>dgoldenberg</dc:creator>
      <dc:date>2018-03-28T12:03:44Z</dc:date>
    </item>
    <item>
      <title>Re: “No common protection layer between client and server” while trying to communicate with kerberiz</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/65870#M31263</link>
      <description>&lt;P&gt;Ohhh! Thanks, I needed this information.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks, really.&lt;/P&gt;</description>
      <pubDate>Wed, 28 Mar 2018 19:34:44 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/No-common-protection-layer-between-client-and-server-while/m-p/65870#M31263</guid>
      <dc:creator>b612</dc:creator>
      <dc:date>2018-03-28T19:34:44Z</dc:date>
    </item>
  </channel>
</rss>

