Support Questions

DEBUG security.UserGroupInformation: PrivilegedActionException as:root (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]

Explorer

Hey All,

 

I'm trying to run spark-shell for the first time on a CM / CDH 6.3 installation, but I'm getting the output below instead.

19/08/31 11:05:24 DEBUG ipc.Client: The ping interval is 60000 ms.
19/08/31 11:05:24 DEBUG ipc.Client: Connecting to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032
19/08/31 11:05:24 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:795)
19/08/31 11:05:24 DEBUG security.SaslRpcClient: Sending sasl message state: NEGOTIATE

19/08/31 11:05:24 DEBUG security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB info:org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo$2@238c63df
19/08/31 11:05:24 DEBUG client.RMDelegationTokenSelector: Looking for a token with service 192.168.0.133:8032
19/08/31 11:05:24 DEBUG security.SaslRpcClient: tokens aren't supported for this protocol or user doesn't have one
19/08/31 11:05:24 DEBUG security.SaslRpcClient: client isn't using kerberos
19/08/31 11:05:24 DEBUG security.UserGroupInformation: PrivilegedActionException as:root (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/31 11:05:24 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:719)
19/08/31 11:05:24 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/31 11:05:24 DEBUG security.UserGroupInformation: PrivilegedActionException as:root (auth:SIMPLE) cause:java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/31 11:05:24 DEBUG ipc.Client: closing ipc connection to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
        at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:756)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:719)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:812)
        at org.apache.hadoop.ipc.Client$Connection.access$3600(Client.java:410)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1560)
        at org.apache.hadoop.ipc.Client.call(Client.java:1391)
        at org.apache.hadoop.ipc.Client.call(Client.java:1355)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy16.getClusterMetrics(Unknown Source)
        at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:251)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy17.getClusterMetrics(Unknown Source)
        at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:604)
        at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:169)
        at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:169)
        at org.apache.spark.internal.Logging$class.logInfo(Logging.scala:57)
        at org.apache.spark.deploy.yarn.Client.logInfo(Client.scala:62)
        at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:168)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:60)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:186)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:511)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2549)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:944)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:935)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:935)
        at org.apache.spark.repl.Main$.createSparkSession(Main.scala:106)
        at $line3.$read$$iw$$iw.<init>(<console>:15)
        at $line3.$read$$iw.<init>(<console>:43)
        at $line3.$read.<init>(<console>:45)
        at $line3.$read$.<init>(<console>:49)
        at $line3.$read$.<clinit>(<console>)
        at $line3.$eval$.$print$lzycompute(<console>:7)
        at $line3.$eval$.$print(<console>:6)
        at $line3.$eval.$print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:793)
        at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1054)
        at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:645)
        at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:644)
        at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
        at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
        at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:644)
        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:576)
        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:572)
        at scala.tools.nsc.interpreter.IMain$$anonfun$quietRun$1.apply(IMain.scala:231)
        at scala.tools.nsc.interpreter.IMain$$anonfun$quietRun$1.apply(IMain.scala:231)
        at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:221)
        at scala.tools.nsc.interpreter.IMain.quietRun(IMain.scala:231)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:109)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1$$anonfun$apply$mcV$sp$1.apply(SparkILoop.scala:109)
        at scala.collection.immutable.List.foreach(List.scala:392)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:109)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:109)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:109)
        at scala.tools.nsc.interpreter.ILoop.savingReplayStack(ILoop.scala:91)
        at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:108)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply$mcV$sp(SparkILoop.scala:211)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply(SparkILoop.scala:199)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply(SparkILoop.scala:199)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$mumly$1.apply(ILoop.scala:189)
        at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:221)
        at scala.tools.nsc.interpreter.ILoop.mumly(ILoop.scala:186)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1.org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1(SparkILoop.scala:199)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$startup$1$1.apply(SparkILoop.scala:267)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$startup$1$1.apply(SparkILoop.scala:247)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1.withSuppressedSettings$1(SparkILoop.scala:235)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1.startup$1(SparkILoop.scala:247)
        at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:282)
        at org.apache.spark.repl.SparkILoop.runClosure(SparkILoop.scala:159)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:182)
        at org.apache.spark.repl.Main$.doMain(Main.scala:78)
        at org.apache.spark.repl.Main$.main(Main.scala:58)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:851)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:926)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:935)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
        at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:173)
        at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:390)
        at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:614)
        at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:410)
        at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:799)
        at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:795)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:795)
        ... 95 more
19/08/31 11:05:24 DEBUG ipc.Client: IPC Client (483582792) connection to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032 from root: closed
19/08/31 11:05:24 INFO retry.RetryInvocationHandler: java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "cm-r01en01.mws.mds.xyz/192.168.0.140"; destination host is: "cm-r01nn02.mws.mds.xyz":8032; , while invoking ApplicationClientProtocolPBClientImpl.getClusterMetrics over null after 6 failover attempts. Trying to failover after sleeping for 19516ms.
19/08/31 11:05:24 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
[... the same SparkRackResolver message repeated once per second until 11:05:43 ...]
19/08/31 11:05:43 DEBUG ipc.Client: The ping interval is 60000 ms.
19/08/31 11:05:43 DEBUG ipc.Client: Connecting to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032
19/08/31 11:05:43 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:795)
19/08/31 11:05:43 DEBUG security.SaslRpcClient: Sending sasl message state: NEGOTIATE

19/08/31 11:05:43 DEBUG security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB info:org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo$2@321558f8
19/08/31 11:05:43 DEBUG client.RMDelegationTokenSelector: Looking for a token with service 192.168.0.133:8032
19/08/31 11:05:43 DEBUG security.SaslRpcClient: tokens aren't supported for this protocol or user doesn't have one
19/08/31 11:05:43 DEBUG security.SaslRpcClient: client isn't using kerberos
19/08/31 11:05:43 DEBUG security.UserGroupInformation: PrivilegedActionException as:root (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/31 11:05:43 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:719)
19/08/31 11:05:43 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/31 11:05:43 DEBUG security.UserGroupInformation: PrivilegedActionException as:root (auth:SIMPLE) cause:java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/31 11:05:43 DEBUG ipc.Client: closing ipc connection to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
        [... stack trace identical to the first one above, omitted ...]
19/08/31 11:05:43 DEBUG ipc.Client: IPC Client (483582792) connection to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032 from root: closed
19/08/31 11:05:43 INFO retry.RetryInvocationHandler: java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "cm-r01en01.mws.mds.xyz/192.168.0.140"; destination host is: "cm-r01nn02.mws.mds.xyz":8032; , while invoking ApplicationClientProtocolPBClientImpl.getClusterMetrics over null after 7 failover attempts. Trying to failover after sleeping for 33704ms.
19/08/31 11:05:44 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
[... the same SparkRackResolver message repeated once per second until 11:06:07 ...]
19/08/31 11:06:08 INFO storage.DiskBlockManager: Shutdown hook called
19/08/31 11:06:08 INFO util.ShutdownHookManager: Shutdown hook called
19/08/31 11:06:08 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-9a9338cb-f16b-48e0-b0cd-7ddfcc148a13/repl-52ba4c53-3478-4ead-93e7-d20ecbd2e866
19/08/31 11:06:08 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-9a9338cb-f16b-48e0-b0cd-7ddfcc148a13/userFiles-5f218430-30bb-4a7e-87df-7ee235183578
19/08/31 11:06:08 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-9a9338cb-f16b-48e0-b0cd-7ddfcc148a13
19/08/31 11:06:08 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-79f3eecb-69d4-4b21-85dc-6746fc33f65c
19/08/31 11:06:08 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@19656e21
19/08/31 11:06:08 DEBUG util.ShutdownHookManager: Completed shutdown in 0.062 seconds; Timeouts: 0
19/08/31 11:06:08 DEBUG util.ShutdownHookManager: ShutdownHookManger completed shutdown.
[root@cm-r01en01 process]# dig -x 192.168.0.140

; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> -x 192.168.0.140
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39821
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 3

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;140.0.168.192.in-addr.arpa.    IN      PTR

;; ANSWER SECTION:
140.0.168.192.in-addr.arpa. 1200 IN     PTR     cm-r01en01.mws.mds.xyz.

;; AUTHORITY SECTION:
0.168.192.in-addr.arpa. 86400   IN      NS      idmipa03.mws.mds.xyz.
0.168.192.in-addr.arpa. 86400   IN      NS      idmipa04.mws.mds.xyz.

;; ADDITIONAL SECTION:
idmipa03.mws.mds.xyz.   1200    IN      A       192.168.0.154
idmipa04.mws.mds.xyz.   1200    IN      A       192.168.0.155

;; Query time: 1 msec
;; SERVER: 192.168.0.154#53(192.168.0.154)
;; WHEN: Sat Aug 31 11:06:18 EDT 2019
;; MSG SIZE  rcvd: 169

[root@cm-r01en01 process]# dig -x 192.168.0.133

; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> -x 192.168.0.133
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11817
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 3

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;133.0.168.192.in-addr.arpa.    IN      PTR

;; ANSWER SECTION:
133.0.168.192.in-addr.arpa. 1200 IN     PTR     cm-r01nn02.mws.mds.xyz.

;; AUTHORITY SECTION:
0.168.192.in-addr.arpa. 86400   IN      NS      idmipa04.mws.mds.xyz.
0.168.192.in-addr.arpa. 86400   IN      NS      idmipa03.mws.mds.xyz.

;; ADDITIONAL SECTION:
idmipa03.mws.mds.xyz.   1200    IN      A       192.168.0.154
idmipa04.mws.mds.xyz.   1200    IN      A       192.168.0.155

;; Query time: 1 msec
;; SERVER: 192.168.0.154#53(192.168.0.154)
;; WHEN: Sat Aug 31 11:26:10 EDT 2019
;; MSG SIZE  rcvd: 169

[root@cm-r01en01 process]#

I tried the same as a non-privileged AD / FreeIPA user, but with the same results:

19/08/31 11:33:07 DEBUG ipc.Client: The ping interval is 60000 ms.
19/08/31 11:33:07 DEBUG ipc.Client: Connecting to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032
19/08/31 11:33:07 DEBUG security.UserGroupInformation: PrivilegedAction as:tom@mds.xyz (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:795)
19/08/31 11:33:07 DEBUG security.SaslRpcClient: Sending sasl message state: NEGOTIATE

19/08/31 11:33:07 DEBUG security.SaslRpcClient: Get token info proto:interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB info:org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo$2@6d4df1d2
19/08/31 11:33:07 DEBUG client.RMDelegationTokenSelector: Looking for a token with service 192.168.0.133:8032
19/08/31 11:33:07 DEBUG security.SaslRpcClient: tokens aren't supported for this protocol or user doesn't have one
19/08/31 11:33:07 DEBUG security.SaslRpcClient: client isn't using kerberos
19/08/31 11:33:07 DEBUG security.UserGroupInformation: PrivilegedActionException as:tom@mds.xyz (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/31 11:33:07 DEBUG security.UserGroupInformation: PrivilegedAction as:tom@mds.xyz (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:719)
19/08/31 11:33:07 WARN ipc.Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/31 11:33:07 DEBUG security.UserGroupInformation: PrivilegedActionException as:tom@mds.xyz (auth:SIMPLE) cause:java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
19/08/31 11:33:07 DEBUG ipc.Client: closing ipc connection to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
        [... stack trace identical to the first one above, omitted ...]
19/08/31 11:33:07 DEBUG ipc.Client: IPC Client (1263257405) connection to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032 from tom@mds.xyz: closed
19/08/31 11:33:07 INFO retry.RetryInvocationHandler: java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "cm-r01en01.mws.mds.xyz/192.168.0.140"; destination host is: "cm-r01nn02.mws.mds.xyz":8032; , while invoking ApplicationClientProtocolPBClientImpl.getClusterMetrics over null after 1 failover attempts. Trying to failover after sleeping for 17516ms.
19/08/31 11:33:07 INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all
[... the same SparkRackResolver message repeated once per second until 11:33:24 ...]
19/08/31 11:33:25 DEBUG ipc.Client: The ping interval is 60000 ms.
19/08/31 11:33:25 DEBUG ipc.Client: Connecting to cm-r01nn02.mws.mds.xyz/192.168.0.133:8032
19/08/31 11:33:25 DEBUG security.UserGroupInformation: PrivilegedAction as:tom@mds.xyz (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:795)
19/08/31 11:33:25 DEBUG security.SaslRpcClient: Sending sasl message state: NEGOTIATE

Has anyone seen the same issue and could suggest what to do to move forward with this?

A few points:
1) Reverse and forward lookups work fine from the OS side.
2) Kerberos credentials generate without issue (verification sketched below).
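
For reference, point 2 was checked roughly like this (the keytab path and the principals are placeholders for the real ones):

# As an end user:
kinit tom@MDS.XYZ
klist
# And with the service keytab:
kinit -kt /path/to/hdfs.keytab hdfs/cm-r01en01.mws.mds.xyz@MDS.XYZ
klist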

Cheers,
TK

15 Replies

Super Guru
Hi @TCloud,

Based on the message below:

19/08/31 11:05:43 DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:719)

It looks like the client does not have the Kerberos configuration set up correctly. Have you checked the configuration files under /etc/spark/conf? Have you added the host to a Spark Gateway role?
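
A quick way to sanity-check that (the paths below are the usual CDH gateway defaults; adjust if yours differ):

# The Spark gateway config should contain a yarn-conf directory with core-site.xml:
ls -l /etc/spark/conf
# On a Kerberized cluster this property should say "kerberos"; the (auth:SIMPLE)
# in your log suggests the client is falling back to simple authentication:
grep -A1 hadoop.security.authentication /etc/spark/conf/yarn-conf/core-site.xml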

Thanks
Eric

Explorer

Hey Eric,

Thanks!

Yes, I've added the Hive and Spark gateway roles to the host; the Spark gateways are distributed to the same nodes. However, since your comment, I've noticed that both the Hive and Spark gateways are offline, and I can't start them as of this writing. I'm getting:

Command Start is not currently available for execution.

whenever I try to start the role, so there's definitely an issue there.

Kerberos credentials appear OK: I can regenerate them without issue, and running kinit with the hdfs.keytab works as expected. On a closer look, I do get the following error, which I'll try to fix after this comment:

19/09/02 09:56:42 ERROR repl.Main: Failed to initialize Spark session.
java.lang.IllegalArgumentException: Required executor memory (1024), overhead (384 MB), and PySpark memory (0 MB) is above the max threshold (256 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.

 

This is a fairly small cluster for POC-type work, so I would rather tweak Spark's memory requirements than increase the maximum memory per container. I haven't been able to figure that out yet.
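
My best guess is the standard submit-time memory options, something like the sketch below, though whatever values I pick still have to fit under the YARN container caps (the numbers here are guesses for this cluster):

# Shrink Spark's memory footprint instead of raising the YARN container cap:
spark-shell \
  --driver-memory 512m \
  --executor-memory 512m \
  --conf spark.executor.memoryOverhead=128m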

 

Cheers,
TK

Super Guru
@TCloud,

You can start and stop Gateway roles, as they are client-side roles. All you need to do is Deploy Client Config so that the configurations are deployed to those gateway roles.

Not sure if the memory issue is related, but you need to fix that first and then see whether there are further issues.

Cheers
Eric

Explorer

I've increased the memory to get past the issue stated above. Now I get:

Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=tom@MDS.XYZ, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x

 

How do I configure Spark to write to individual user folders such as /user/tom?

Cheers,
TK

Explorer

Just to clarify, did you mean you "can" or "can't" stop client-side gateway roles?

Super Guru
My apologies, I meant you CAN'T start or stop gateway roles, as there is no server process; only the client configuration is needed.

Sorry about the confusion, I can't seem to edit my update anymore.

Cheers

Super Guru
@TCloud,

It looks like the /user/tom directory is missing in HDFS. The job was probably trying to create something under your home directory as part of its processing, and it failed because that directory is missing and you are not allowed to create anything under /user.

Please use the "hdfs" user to create the /user/tom directory, update its ownership to tom:tom, and then try again.
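
For example, something along these lines from any host with an HDFS gateway (standard commands; adjust the username as needed):

# Create the home directory as the HDFS superuser and hand it over to the user:
sudo -u hdfs hdfs dfs -mkdir -p /user/tom
sudo -u hdfs hdfs dfs -chown tom:tom /user/tom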

Cheers
Eric

Explorer

I had the /user/tom directory and tried the following ownership settings:

tom:tom
tom@mds.xyz:tom@mds.xyz
tom@MDS.XYZ:tom@MDS.XYZ


No luck, until I saw this message:

 

19/09/03 21:51:53 WARN fs.TrashPolicyDefault: Can't create trash directory: hdfs://cm-r01nn02.mws.mds.xyz:8020/user/tom@MDS.XYZ/.Trash/Current/user/mds.xyz/tom
org.apache.hadoop.security.AccessControlException: Permission denied: user=tom@MDS.XYZ, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x

 

That told me the right folder was supposed to be /user/tom@MDS.XYZ. So that's what I set up, and spark-shell now works.
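
For anyone following along, that amounts to something like this (my exact commands may have differed slightly):

# Create the home directory under the full principal-style name:
sudo -u hdfs hdfs dfs -mkdir -p /user/tom@MDS.XYZ
sudo -u hdfs hdfs dfs -chown tom@MDS.XYZ:tom@MDS.XYZ /user/tom@MDS.XYZ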

It really has to do with this issue:

 

19/09/03 21:51:33 INFO util.KerberosName: No auth_to_local rules applied to tom@MDS.XYZ


And I really need to define auth_to_local rules if I want the folders created in this manner:

/user/domain/user

But I'm not sure how just yet; a sketch of what I've found so far is below.
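
From what I can tell so far, the mapping lives in the hadoop.security.auth_to_local property in core-site.xml (settable through Cloudera Manager), and the result can be tested from the shell. The rule below is only my first, unverified guess, and it merely strips the realm (giving /user/tom rather than the per-domain layout I'd like):

# Hypothetical rule for hadoop.security.auth_to_local:
#   RULE:[1:$1@$0](.*@MDS\.XYZ)s/@MDS\.XYZ$//
#   DEFAULT
# Test how a principal maps with the rules the client currently sees:
hadoop org.apache.hadoop.security.HadoopKerberosName tom@MDS.XYZ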

Cheers,
TK

Explorer

What I'm getting now is the message below, and I'm wondering what the solution might be; I've tried the suggestions on the community so far without success:

INFO yarn.SparkRackResolver: Got an error when resolving hostNames. Falling back to /default-rack for all

 

This message floods the spark-shell console, rendering it unusable. What I've done so far to try to track it down:

1) Reverse lookups work.
2) Forward lookups work.
3) UID of CM Agent are unique.
4) RHEL 7 UID's are unique.

It looks like it might be related to the bug below, so I may just have to wait it out, or somehow grab a copy of the latest Spark to fix it.

 

https://issues.apache.org/jira/browse/SPARK-28005
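
In the meantime I may just mute that logger on the client, something like the line below appended to the Spark gateway's log4j config (the path is the CDH default, the class name is taken from the log prefix, and this only hides the message rather than fixing the underlying lookup):

# Raise the log level for the flooding class so spark-shell stays usable:
echo 'log4j.logger.org.apache.spark.deploy.yarn.SparkRackResolver=ERROR' >> /etc/spark/conf/log4j.properties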


Cheers,
TK