Support Questions


Facing issues while communicating between two Kerberized HDP 2.5 clusters in different realms using Oracle VirtualBox

Contributor

Hi Team - I am facing issues while communicating between two Kerberized clusters in different realms using Oracle VirtualBox. Please find the details of my clusters below. I followed the link mentioned below. Any help on this is much appreciated. Thanks in advance.

https://community.hortonworks.com/articles/18686/kerberos-cross-realm-trust-for-distcp.html

Details: HDP 2.5.3.0, Ambari 2.4.2.0, OS: CentOS 6.8, Java: JDK 1.7

Attachments: cross-realm-details.txt, distcp-error-cross-clusters.txt

Cluster-PRIMARY: REALM: EXAMPLE.COM

Cluster-DR: REALM: HORTONWORKS.COM
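
For reference, the trust itself was created as the article describes, with matching krbtgt principals on both KDCs - roughly like the sketch below (the trust password placeholder is mine; both entries must use the same password):

# Sketch only - run on both KDCs so the cross-realm trust principals match
kadmin.local -q "addprinc -pw <trust-password> krbtgt/HORTONWORKS.COM@EXAMPLE.COM"
kadmin.local -q "addprinc -pw <trust-password> krbtgt/EXAMPLE.COM@HORTONWORKS.COM"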

I am also not able to perform DistCp between the clusters:

[ambari-qa@ambaristandby ~]$ hadoop distcp hdfs://172.21.58.120:8020/user/ambari-qa/distcp.txt hdfs://172.21.58.111:8020/user/ambari-qa/distcp_test/

Error:

[hdfs@ambarinode ~]$ hdfs dfs -ls hdfs://172.21.58.120:8020/user/
17/03/19 10:04:27 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/03/19 10:04:27 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/03/19 10:04:30 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/03/19 10:04:34 WARN security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/03/19 10:04:35 WARN ipc.Client: Couldn't setup connection for hdfs-dr@HORTONWORKS.COM to /172.21.58.120:8020
org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed
  at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:375) at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:595) at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:397) at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:762) at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:758) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:757) at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1618) at org.apache.hadoop.ipc.Client.call(Client.java:1449) at org.apache.hadoop.ipc.Client.call(Client.java:1396) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233) at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:816) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176) at com.sun.proxy.$Proxy17.getFileInfo(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2158) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1423) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1419) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1419) at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57) at org.apache.hadoop.fs.Globber.glob(Globber.java:252) at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1674) at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326) at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235) at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218) at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103) at org.apache.hadoop.fs.shell.Command.run(Command.java:165) at org.apache.hadoop.fs.FsShell.run(FsShell.java:297) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) at org.apache.hadoop.fs.FsShell.main(FsShell.java:350)
17/03/19 10:04:35 WARN retry.RetryInvocationHandler: Exception while invoking ClientNamenodeProtocolTranslatorPB.getFileInfo over null. Not retrying because try once and fail.
java.io.IOException: Failed on local exception: java.io.IOException: Couldn't setup connection for hdfs-dr@HORTONWORKS.COM to /172.21.58.120:8020; Host Details : local host is: "ambarinode.myhadoop.com/172.21.58.111"; destination host is: "172.21.58.120":8020;
  at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:782) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1556) at org.apache.hadoop.ipc.Client.call(Client.java:1496) at org.apache.hadoop.ipc.Client.call(Client.java:1396) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233) at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:816) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176) at com.sun.proxy.$Proxy17.getFileInfo(Unknown Source) at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2158) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1423) at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1419) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1419) at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57) at org.apache.hadoop.fs.Globber.glob(Globber.java:252) at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1674) at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326) at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:235) at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:218) at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103) at org.apache.hadoop.fs.shell.Command.run(Command.java:165) at org.apache.hadoop.fs.FsShell.run(FsShell.java:297) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) at org.apache.hadoop.fs.FsShell.main(FsShell.java:350)
Caused by: java.io.IOException: Couldn't setup connection for hdfs-dr@HORTONWORKS.COM to /172.21.58.120:8020
  at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:712) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:683) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:770) at org.apache.hadoop.ipc.Client$Connection.access$3200(Client.java:397) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1618) at org.apache.hadoop.ipc.Client.call(Client.java:1449) ... 29 more
Caused by: org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed
  at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:375) at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:595) at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:397) at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:762) at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:758) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:757) ... 32 more
ls: Failed on local exception: java.io.IOException: Couldn't setup connection for hdfs-dr@HORTONWORKS.COM to /172.21.58.120:8020; Host Details : local host is: "ambarinode.myhadoop.com/172.21.58.111"; destination host is: "172.21.58.120":8020;

5 REPLIES

Guru

Hello @Balaji Badarla,

From the attached error log, it looks like some principal is not found in the Kerberos database. This could be because the correct principal name is not getting formed. Can you please change the auth_to_local rules to these:

1. On HORTONWORKS.COM realm:

RULE:[1:$1@$0](ambari-qa-dr@HORTONWORKS.COM)s/.*/ambari-qa/
RULE:[1:$1@$0](hdfs-dr@HORTONWORKS.COM)s/.*/hdfs/
RULE:[1:$1@$0](.*@HORTONWORKS.COM)s/@.*//
RULE:[2:$1@$0](amshbase@HORTONWORKS.COM)s/.*/ams/
RULE:[2:$1@$0](amszk@HORTONWORKS.COM)s/.*/ams/
RULE:[2:$1@$0](dn@HORTONWORKS.COM)s/.*/hdfs/
RULE:[2:$1@$0](jhs@HORTONWORKS.COM)s/.*/mapred/
RULE:[2:$1@$0](jn@HORTONWORKS.COM)s/.*/hdfs/
RULE:[2:$1@$0](nm@HORTONWORKS.COM)s/.*/yarn/
RULE:[2:$1@$0](nn@HORTONWORKS.COM)s/.*/hdfs/
RULE:[2:$1@$0](rm@HORTONWORKS.COM)s/.*/yarn/
RULE:[2:$1@$0](yarn@HORTONWORKS.COM)s/.*/yarn/
RULE:[1:$1@$0](ambari-qa-primary@EXAMPLE.COM)s/.*/ambari-qa/
RULE:[1:$1@$0](hdfs-primary@EXAMPLE.COM)s/.*/hdfs/
RULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*//
RULE:[2:$1@$0](amshbase@EXAMPLE.COM)s/.*/ams/
RULE:[2:$1@$0](amszk@EXAMPLE.COM)s/.*/ams/
RULE:[2:$1@$0](dn@EXAMPLE.COM)s/.*/hdfs/
RULE:[2:$1@$0](jhs@EXAMPLE.COM)s/.*/mapred/
RULE:[2:$1@$0](jn@EXAMPLE.COM)s/.*/hdfs/
RULE:[2:$1@$0](nfs@EXAMPLE.COM)s/.*/hdfs/
RULE:[2:$1@$0](nm@EXAMPLE.COM)s/.*/yarn/
RULE:[2:$1@$0](nn@EXAMPLE.COM)s/.*/hdfs/
RULE:[2:$1@$0](rm@EXAMPLE.COM)s/.*/yarn/
RULE:[2:$1@$0](yarn@EXAMPLE.COM)s/.*/yarn/
RULE:[2:$1@$0](.*@EXAMPLE.COM)s/@.*//
DEFAULT

2. On EXAMPLE.COM realm:

RULE:[1:$1@$0](ambari-qa-primary@EXAMPLE.COM)s/.*/ambari-qa/
RULE:[1:$1@$0](hdfs-primary@EXAMPLE.COM)s/.*/hdfs/
RULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*//
RULE:[2:$1@$0](amshbase@EXAMPLE.COM)s/.*/ams/
RULE:[2:$1@$0](amszk@EXAMPLE.COM)s/.*/ams/
RULE:[2:$1@$0](dn@EXAMPLE.COM)s/.*/hdfs/
RULE:[2:$1@$0](jhs@EXAMPLE.COM)s/.*/mapred/
RULE:[2:$1@$0](jn@EXAMPLE.COM)s/.*/hdfs/
RULE:[2:$1@$0](nfs@EXAMPLE.COM)s/.*/hdfs/
RULE:[2:$1@$0](nm@EXAMPLE.COM)s/.*/yarn/
RULE:[2:$1@$0](nn@EXAMPLE.COM)s/.*/hdfs/
RULE:[2:$1@$0](rm@EXAMPLE.COM)s/.*/yarn/
RULE:[2:$1@$0](yarn@EXAMPLE.COM)s/.*/yarn/
RULE:[1:$1@$0](ambari-qa-dr@HORTONWORKS.COM)s/.*/ambari-qa/
RULE:[1:$1@$0](hdfs-dr@HORTONWORKS.COM)s/.*/hdfs/
RULE:[1:$1@$0](.*@HORTONWORKS.COM)s/@.*//
RULE:[2:$1@$0](amshbase@HORTONWORKS.COM)s/.*/ams/
RULE:[2:$1@$0](amszk@HORTONWORKS.COM)s/.*/ams/
RULE:[2:$1@$0](dn@HORTONWORKS.COM)s/.*/hdfs/
RULE:[2:$1@$0](jhs@HORTONWORKS.COM)s/.*/mapred/
RULE:[2:$1@$0](jn@HORTONWORKS.COM)s/.*/hdfs/
RULE:[2:$1@$0](nm@HORTONWORKS.COM)s/.*/yarn/
RULE:[2:$1@$0](nn@HORTONWORKS.COM)s/.*/hdfs/
RULE:[2:$1@$0](rm@HORTONWORKS.COM)s/.*/yarn/
RULE:[2:$1@$0](yarn@HORTONWORKS.COM)s/.*/yarn/
RULE:[2:$1@$0](.*@HORTONWORKS.COM)s/@.*//
DEFAULT

Please note that I've changed the "*" rules in both of them. Please try with these and let us know.
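
To quickly check how a given principal is mapped by these rules on each cluster, you can run the HadoopKerberosName helper (the principals below are only examples - substitute your own):

# Prints the short name produced by the current hadoop.security.auth_to_local rules
hadoop org.apache.hadoop.security.HadoopKerberosName hdfs-dr@HORTONWORKS.COM
hadoop org.apache.hadoop.security.HadoopKerberosName ambari-qa-primary@EXAMPLE.COM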

If you still see any Kerberos errors, please set these and then run the distcp command:

export HADOOP_OPTS="$HADOOP_OPTS -Dsun.security.krb5.debug=true"
export HADOOP_ROOT_LOGGER=DEBUG,console
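
For example, in the same shell you can re-run the failing command and capture the Kerberos debug output (the log path here is just an example):

hadoop distcp hdfs://172.21.58.120:8020/user/ambari-qa/distcp.txt hdfs://172.21.58.111:8020/user/ambari-qa/distcp_test/ 2>&1 | tee /tmp/krb5-debug.log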

Hope this helps!

Contributor

Thanks a lot, Vipin. I have implemented the above changes but am still facing the issue.

Attached the latest files as well: cross-realm-details-89602.txt

distcp-error-cross-clusters-89602.txt

=================================== /var/log/krb5kdc.log ===================================
Server: ambaristandby.myhadoop.com
Mar 29 05:20:21 ambaristandby.myhadoop.com krb5kdc[1177](info): AS_REQ (4 etypes {18 17 16 23}) 172.21.58.120: ISSUE: authtime 1490757621, etypes {rep=18 tkt=18 ses=18}, HTTP/standbyms.myhadoop.com@EXAMPLE.COM for krbtgt/EXAMPLE.COM@EXAMPLE.COM
Mar 29 05:20:21 ambaristandby.myhadoop.com krb5kdc[1177](info): AS_REQ (4 etypes {18 17 16 23}) 172.21.58.120: ISSUE: authtime 1490757621, etypes {rep=18 tkt=18 ses=18}, yarn/standbyms.myhadoop.com@EXAMPLE.COM for krbtgt/EXAMPLE.COM@EXAMPLE.COM
Mar 29 05:20:21 ambaristandby.myhadoop.com krb5kdc[1177](info): AS_REQ (4 etypes {18 17 16 23}) 172.21.58.120: ISSUE: authtime 1490757621, etypes {rep=18 tkt=18 ses=18}, rm/standbyms.myhadoop.com@EXAMPLE.COM for krbtgt/EXAMPLE.COM@EXAMPLE.COM
Mar 29 05:20:21 ambaristandby.myhadoop.com krb5kdc[1177](info): AS_REQ (4 etypes {18 17 16 23}) 172.21.58.120: ISSUE: authtime 1490757621, etypes {rep=18 tkt=18 ses=18}, zookeeper/standbyms.myhadoop.com@EXAMPLE.COM for krbtgt/EXAMPLE.COM@EXAMPLE.COM
Mar 29 05:21:07 ambaristandby.myhadoop.com krb5kdc[1177](info): TGS_REQ (6 etypes {18 17 16 23 1 3}) 172.21.58.116: UNKNOWN_SERVER: authtime 0, varnika@EXAMPLE.COM for nn/ms.myhadoop.com@EXAMPLE.COM, Server not found in Kerberos database
Mar 29 05:21:08 ambaristandby.myhadoop.com krb5kdc[1177](info): TGS_REQ (6 etypes {18 17 16 23 1 3}) 172.21.58.116: UNKNOWN_SERVER: authtime 0, varnika@EXAMPLE.COM for nn/ms.myhadoop.com@EXAMPLE.COM, Server not found in Kerberos database
Mar 29 05:21:09 ambaristandby.myhadoop.com krb5kdc[1177](info): TGS_REQ (6 etypes {18 17 16 23 1 3}) 172.21.58.116: UNKNOWN_SERVER: authtime 0, varnika@EXAMPLE.COM for nn/ms.myhadoop.com@EXAMPLE.COM, Server not found in Kerberos database
Mar 29 05:21:09 ambaristandby.myhadoop.com krb5kdc[1177](info): TGS_REQ (6 etypes {18 17 16 23 1 3}) 172.21.58.116: UNKNOWN_SERVER: authtime 0, varnika@EXAMPLE.COM for nn/ms.myhadoop.com@EXAMPLE.COM, Server not found in Kerberos database
Mar 29 05:21:10 ambaristandby.myhadoop.com krb5kdc[1177](info): TGS_REQ (6 etypes {18 17 16 23 1 3}) 172.21.58.116: UNKNOWN_SERVER: authtime 0, varnika@EXAMPLE.COM for nn/ms.myhadoop.com@EXAMPLE.COM, Server not found in Kerberos database
Mar 29 05:21:14 ambaristandby.myhadoop.com krb5kdc[1177](info): TGS_REQ (6 etypes {18 17 16 23 1 3}) 172.21.58.116: UNKNOWN_SERVER: authtime 0, varnika@EXAMPLE.COM for nn/ms.myhadoop.com@EXAMPLE.COM, Server not found in Kerberos database

Expert Contributor

Hello @Balaji Badarla

In addition to @Vipin Rathor's analysis, which is correct, I think you also have mistakes in your krb5.conf. To set up a cross-realm trust, Kerberos must be aware of the foreign realm's KDC, and this is accomplished by configuring your krb5.conf correctly.

With your current configuration, the services (including the KDC) of the HORTONWORKS.COM realm don't know how to reach EXAMPLE.COM, and vice versa.

As far as I can see in your current configuration, you could also have problems with SPNEGO when using cross-realm identities because of [domain_realm]: you are mapping all subdomains of .hortonworks.com to HORTONWORKS.COM, but you are not mapping ambarinode.myhadoop.com to HORTONWORKS.COM, nor ambaristandby.myhadoop.com to EXAMPLE.COM in the opposite realm. So when a service tries to resolve the HTTP principal of a service in the opposite realm, it will assume its own realm instead; this is a tricky detail that I also had trouble with. So first you must add the following entry to the [realms] section of the Cluster-DR krb5.conf:

EXAMPLE.COM = {
  admin_server = ambaristandby.myhadoop.com
  kdc = ambaristandby.myhadoop.com
}
Then add the following to the [realms] section of the Cluster-PRIMARY krb5.conf:
HORTONWORKS.COM = {
  kdc = ambarinode.myhadoop.com
  admin_server = ambarinode.myhadoop.com
}
With these changes the distcp should not be a problem. In case you also want to test cross-realm SPNEGO authentication, you must also set the following (this assumes you have the services on ambarinode and ambaristandby; the syntax is: <hostname or domain wildcard> = <REALM>).

In both krb5.conf files:

[domain_realm]
 ambarinode.myhadoop.com = HORTONWORKS.COM
 ambaristandby.myhadoop.com = EXAMPLE.COM
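
Once both krb5.conf files are updated, a quick way to verify the trust path is to authenticate as a principal from one realm and request a service ticket from the other (just a sketch - the keytab path and NameNode principal below are assumptions, substitute your own):

# On a Cluster-DR node: authenticate as a HORTONWORKS.COM principal (keytab path assumed from HDP defaults)
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-dr@HORTONWORKS.COM
# Ask for a ticket for the remote NameNode; klist should then also show krbtgt/EXAMPLE.COM@HORTONWORKS.COM
kvno nn/standbyms.myhadoop.com@EXAMPLE.COM
klist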
I hope this helps; in case of any doubt, please ask. 🙂

Contributor

Thanks a lot, Juan. I have implemented the above changes but am still facing the issue. It seems there is some other change I need to perform. Please help.

Attached the latest files as well.

cross-realm-details-89602.txt

distcp-error-cross-clusters-89602.txt

=================================== /var/log/krb5kdc.log ===================================
Server: ambaristandby.myhadoop.com
Mar 29 05:20:21 ambaristandby.myhadoop.com krb5kdc[1177](info): AS_REQ (4 etypes {18 17 16 23}) 172.21.58.120: ISSUE: authtime 1490757621, etypes {rep=18 tkt=18 ses=18}, HTTP/standbyms.myhadoop.com@EXAMPLE.COM for krbtgt/EXAMPLE.COM@EXAMPLE.COM
Mar 29 05:20:21 ambaristandby.myhadoop.com krb5kdc[1177](info): AS_REQ (4 etypes {18 17 16 23}) 172.21.58.120: ISSUE: authtime 1490757621, etypes {rep=18 tkt=18 ses=18}, yarn/standbyms.myhadoop.com@EXAMPLE.COM for krbtgt/EXAMPLE.COM@EXAMPLE.COM
Mar 29 05:20:21 ambaristandby.myhadoop.com krb5kdc[1177](info): AS_REQ (4 etypes {18 17 16 23}) 172.21.58.120: ISSUE: authtime 1490757621, etypes {rep=18 tkt=18 ses=18}, rm/standbyms.myhadoop.com@EXAMPLE.COM for krbtgt/EXAMPLE.COM@EXAMPLE.COM
Mar 29 05:20:21 ambaristandby.myhadoop.com krb5kdc[1177](info): AS_REQ (4 etypes {18 17 16 23}) 172.21.58.120: ISSUE: authtime 1490757621, etypes {rep=18 tkt=18 ses=18}, zookeeper/standbyms.myhadoop.com@EXAMPLE.COM for krbtgt/EXAMPLE.COM@EXAMPLE.COM
Mar 29 05:21:07 ambaristandby.myhadoop.com krb5kdc[1177](info): TGS_REQ (6 etypes {18 17 16 23 1 3}) 172.21.58.116: UNKNOWN_SERVER: authtime 0, varnika@EXAMPLE.COM for nn/ms.myhadoop.com@EXAMPLE.COM, Server not found in Kerberos database
Mar 29 05:21:08 ambaristandby.myhadoop.com krb5kdc[1177](info): TGS_REQ (6 etypes {18 17 16 23 1 3}) 172.21.58.116: UNKNOWN_SERVER: authtime 0, varnika@EXAMPLE.COM for nn/ms.myhadoop.com@EXAMPLE.COM, Server not found in Kerberos database
Mar 29 05:21:09 ambaristandby.myhadoop.com krb5kdc[1177](info): TGS_REQ (6 etypes {18 17 16 23 1 3}) 172.21.58.116: UNKNOWN_SERVER: authtime 0, varnika@EXAMPLE.COM for nn/ms.myhadoop.com@EXAMPLE.COM, Server not found in Kerberos database
Mar 29 05:21:09 ambaristandby.myhadoop.com krb5kdc[1177](info): TGS_REQ (6 etypes {18 17 16 23 1 3}) 172.21.58.116: UNKNOWN_SERVER: authtime 0, varnika@EXAMPLE.COM for nn/ms.myhadoop.com@EXAMPLE.COM, Server not found in Kerberos database
Mar 29 05:21:10 ambaristandby.myhadoop.com krb5kdc[1177](info): TGS_REQ (6 etypes {18 17 16 23 1 3}) 172.21.58.116: UNKNOWN_SERVER: authtime 0, varnika@EXAMPLE.COM for nn/ms.myhadoop.com@EXAMPLE.COM, Server not found in Kerberos database
Mar 29 05:21:14 ambaristandby.myhadoop.com krb5kdc[1177](info): TGS_REQ (6 etypes {18 17 16 23 1 3}) 172.21.58.116: UNKNOWN_SERVER: authtime 0, varnika@EXAMPLE.COM for nn/ms.myhadoop.com@EXAMPLE.COM, Server not found in Kerberos database

Contributor

Hi Vipin & Juan - Thanks a lot for your suggestions. I have implemented all of your suggestions but am still facing the issue. It seems I have missed something. Please look at the latest configuration and error details and let me know: cross-realm-details-89602.txt, distcp-error-cross-clusters-89602.txt