Created 09-05-2023 08:07 AM
I am getting a "Client cannot authenticate via: [TOKEN, KERBEROS]" exception when I try to access the YARN ResourceManager from a Java application.
As part of a requirement in my Spark Java application, I need to check whether certain other applications are running. For this I am using the YarnClient class, and when I call its method I get the exception mentioned above.
The same code works on my local system with Kerberos authentication.
I am using only Hadoop library classes to authenticate (the UserGroupInformation class).
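For context, a minimal sketch of the keytab-based login plus YarnClient flow described above (the principal, keytab path, and class/method names here are illustrative assumptions, not the poster's actual code):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class YarnClientAuthSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical principal and keytab path -- substitute your own.
        String principal = "abc@EXAMPLE.COM";
        String keytab = "/etc/security/keytabs/abc.keytab";

        Configuration conf = new YarnConfiguration();
        conf.set("hadoop.security.authentication", "kerberos");

        // UserGroupInformation must see the secured configuration
        // *before* the login, otherwise later RPC calls may fall
        // back to SIMPLE auth even though the keytab login succeeds.
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(principal, keytab);

        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(conf);
        yarnClient.start();
        try {
            // The call that fails in the reported scenario.
            yarnClient.getApplications().forEach(report ->
                System.out.println(report.getApplicationId()
                    + " " + report.getYarnApplicationState()));
        } finally {
            yarnClient.stop();
        }
    }
}
```

Note that on a cluster, the Configuration object must pick up the cluster's core-site.xml/yarn-site.xml (or set the same properties explicitly), which is one common difference from a working local run.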
Created on 09-05-2023 08:31 AM - edited 09-05-2023 08:32 AM
@Kolli Welcome to the Cloudera Community!
To help you get the best possible solution, I have tagged our YARN expert @Bharati who may be able to assist you further.
Please keep us updated on your post, and we hope you find a satisfactory solution to your query.
Regards,
Diana Torres
Created on 09-05-2023 09:02 AM - edited 09-05-2023 09:41 AM
Hi Diana,
Thank you.
Attaching more logs for reference
2023/09/04 09:30:05.522 INFO o.a.h.s.UserGroupInformation : Login successful for user abc using keytab file abc. Keytab auto renewal enabled : false
2023/09/04 09:30:05.522 INFO c.s.f.c.s.c.SchedulingConfig : After login
2023/09/04 09:30:05.730 WARN o.a.h.i.Client : Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via: [TOKEN, KERBEROS]
2023/09/04 09:30:05.734 ERROR o.a.s.d.y.ApplicationMaster : User class threw exception: java.io.IOException: DestHost:destPort uklvadhdp123:8032, LocalHost:localPort ukhdp/133.0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via: [TOKEN, KERBEROS]
java.io.IOException: DestHost:destPort uklvad, LocalHost:localPort. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via: [TOKEN, KERBEROS]
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:892)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:867)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1566)
    at org.apache.hadoop.ipc.Client.call(Client.java:1508)
    at org.apache.hadoop.ipc.Client.call(Client.java:1405)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy79.getApplications(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplications(ApplicationClientProtocolPBClientImpl.java:316)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:431)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
    at com.sun.proxy.$Proxy80.getApplications(Unknown Source)
    at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getApplications(YarnClientImpl.java:651)
    at com.scheduling.config.SchedulingConfig.renameFolders(SchedulingConfig.java:901)
    at com.scheduling.SchedulingApp.main(SchedulingApp.java:30)
I have modified the destination host and source host for security reasons.
Please let me know if any further details are needed to identify the root cause.
Created 09-06-2023 07:42 AM
Hello @Kolli
Do you have a valid Kerberos ticket? Check using klist; it should display output like the example below.
Ticket cache: FILE:/tmp/krb5cc_12345
Default principal: user@example.com
Valid starting Expires Service principal
09/06/23 13:30:00 09/07/23 13:30:00 krbtgt/example.com@example.com
renew until 09/08/23 13:30:00
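The same check can also be done programmatically from inside the application, which is useful when klist looks fine on the edge node but the failure happens on a cluster container. A sketch, assuming Hadoop's UserGroupInformation is on the classpath (the class name is illustrative):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosCheck {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        UserGroupInformation.setConfiguration(conf);

        UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
        System.out.println("User:              " + ugi.getUserName());
        System.out.println("Auth method:       " + ugi.getAuthenticationMethod());
        System.out.println("Has Kerberos creds: " + ugi.hasKerberosCredentials());
        // If "Auth method" prints SIMPLE here, the client Configuration
        // is not picking up hadoop.security.authentication=kerberos
        // (e.g. core-site.xml missing from the classpath) -- a frequent
        // cause of "Client cannot authenticate via: [TOKEN, KERBEROS]"
        // even after a successful keytab login.
    }
}
```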
Created 09-06-2023 07:48 AM
Hi Asok,
Yes, we have a valid Kerberos ticket. We are using a keytab to authenticate. With the same keytab I am able to access it locally; when I deploy to the cluster, I face this issue.
Thanks.
Created 09-06-2023 07:51 AM
2023/09/04 09:30:05.522 INFO o.a.h.s.UserGroupInformation : Login successful for user abc using keytab file abc. Keytab auto renewal enabled : false
2023/09/04 09:30:05.522 INFO c.s.f.c.s.c.SchedulingConfig : After login
2023/09/04 09:30:05.730 WARN o.a.h.i.Client : Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via: [TOKEN, KERBEROS]
2023/09/04 09:30:05.734 ERROR o.a.s.d.y.ApplicationMaster : User class threw exception: java.io.IOException: DestHost:destPort uklvadhdp123:8032, LocalHost:localPort ukhdp/133.0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via: [TOKEN, KERBEROS]
In the error message above, I first get a successful login; after that, when I try to access the YARN ResourceManager, I get this issue.
Created 06-21-2024 01:41 PM
Hi, I'm also facing this exact same issue.
I can submit the DistCp job to YARN, which runs and completes, but I'm not able to check the progress of the job while it's running.
The strange thing is that there is no issue when I use the hadoop2 library. This issue came up when I upgraded to Hadoop version > 3.2.3.
Were you able to resolve your issue?
Created 06-25-2024 12:58 PM
Despite extensive efforts, I was unable to directly resolve the issue, but I devised a workaround.
Rather than directly accessing the Hadoop Job object for status updates, I extracted the job ID after submitting the job. Using this job ID, I created an ApplicationID object, which I then used to instantiate my YarnClient. This approach enabled me to effectively monitor the status and completion rate of the running job.
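The job-ID-to-application-ID mapping used in this workaround is straightforward, since a MapReduce job ID and its YARN application ID share the same cluster timestamp and sequence number and differ only in prefix ("job_" vs "application_"). A minimal sketch of that step (class and method names are my own; the resulting string would then be parsed with ApplicationId.fromString and passed to YarnClient#getApplicationReport):

```java
public class JobIdToAppId {
    // Maps a MapReduce job ID string (e.g. from Job.getJobID().toString())
    // to the equivalent YARN application ID string by swapping the prefix.
    static String toApplicationId(String jobId) {
        if (!jobId.startsWith("job_")) {
            throw new IllegalArgumentException("not a job id: " + jobId);
        }
        return "application_" + jobId.substring("job_".length());
    }

    public static void main(String[] args) {
        System.out.println(toApplicationId("job_1693817405000_0042"));
        // Prints: application_1693817405000_0042
    }
}
```

With Hadoop on the classpath, the same conversion can be done without string handling via TypeConverter.toYarn(jobId).getAppId() or ApplicationId.fromString.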
Interestingly, both my DistCp job and YarnClient use the same HadoopConf object (the YarnClient is instantiated right after the DistCp job is executed), and both are within the same scope of the UserGroupInformation. The exact reason why the YarnClient can access the necessary information while the Job object cannot remains unclear. Nevertheless, this workaround has successfully unblocked me.
Additional context: I am using Java 8 and running on an Ubuntu Xenial image.