Tez job started by Oozie fails with "Delegation Token can be issued only with kerberos or web authentication"

Explorer

Hello everybody!

I have a problem that is very difficult for me; could you help me find a solution?

I'm trying to run a Tez job from an Oozie workflow on a secure (Kerberized) cluster.

My workflow action code:

    <action name="replace-data" cred="hive-credentials">
        <java>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapreduce.job.queuename</name>
                    <value>${queueName}</value>
                </property>
                <property>
                    <name>tez.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <main-class>ru.beeline.hadoop.smsnewest.WestReplacer</main-class>
            <arg>-Dtez.queue.name=${queueName}</arg>
            <arg>-Dmapreduce.job.queuename=${queueName}</arg>
            <arg>${inputHDFSDir}/${partDate}</arg>
            <arg>${outputHDFSDir}/data_date=${partDate}</arg>
            <file>lib/SmsNeWest.jar</file>
        </java>
        <ok to="load-parts-to-hive-recover"/>
        <error to="fail"/>
    </action>

A tez-site.xml file with the correct value of "tez.lib.uris=/hdp/apps/..." exists in the lib subdirectory of the workflow application path.
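
Roughly, that file contains just the one property; the value shown below is only a placeholder standing in for the path truncated above:

    <!-- lib/tez-site.xml (sketch): the tez.lib.uris value below is a placeholder,
         not the exact path used on my cluster -->
    <configuration>
        <property>
            <name>tez.lib.uris</name>
            <value>/hdp/apps/${hdp.version}/tez/tez.tar.gz</value>
        </property>
    </configuration>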

But an error occurs during execution of the submitDAG(dag) method, at the stage where tez.lib.uris is read:

Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token can be issued only with kerberos or web authentication
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:7751)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:534)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:977)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2127)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2123)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2121)

	at org.apache.hadoop.ipc.Client.call(Client.java:1469)
	at org.apache.hadoop.ipc.Client.call(Client.java:1400)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
	at com.sun.proxy.$Proxy17.getDelegationToken(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:925)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy18.getDelegationToken(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:1032)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:1452)
	at org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:529)
	at org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:507)
	at org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2138)
	at org.apache.tez.common.security.TokenCache.obtainTokensForFileSystemsInternal(TokenCache.java:107)
	at org.apache.tez.common.security.TokenCache.obtainTokensForFileSystemsInternal(TokenCache.java:86)
	at org.apache.tez.common.security.TokenCache.obtainTokensForFileSystems(TokenCache.java:76)
	at org.apache.tez.client.TezClientUtils.setupTezJarsLocalResources(TezClientUtils.java:195)
	at org.apache.tez.client.TezClient.getTezJarResources(TezClient.java:724)
	at org.apache.tez.client.TezClient.submitDAGApplication(TezClient.java:692)
	at org.apache.tez.client.TezClient.submitDAGApplication(TezClient.java:670)
	at org.apache.tez.client.TezClient.submitDAG(TezClient.java:356)
	at ru.beeline.hadoop.smsnewest.WestReplacer.run(WestReplacer.java:173)
	at ru.beeline.hadoop.smsnewest.WestReplacer.run(WestReplacer.java:100)
	at ru.beeline.hadoop.smsnewest.WestReplacer.main(WestReplacer.java:193)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.oozie.action.hadoop.JavaMain.run(JavaMain.java:55)
	... 15 more

This class works fine when called from the console.

My stack and versions:

  • HDP 2.2.8.0
  • HDFS and YARN 2.6.0.2.2
  • Oozie 4.1.0.2.2
  • Tez 0.5.2.2.2

Any help would be greatly appreciated! Thanks!

1 ACCEPTED SOLUTION

Explorer

OK, I worked around this problem by running a shell action that performs the 'hadoop jar' command instead of a java action.

Of course, I have to call 'UserGroupInformation.loginUserFromKeytab' inside my class (or run a 'kinit -kt' command at the start of the shell script).
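
Roughly, the replacement looks like the sketch below; the wrapper script name (run-replacer.sh), the shipped keytab, and the queue property are placeholders to adapt. The script itself just runs 'kinit -kt' with the keytab and principal, then calls 'hadoop jar' on SmsNeWest.jar with the same two path arguments as before.

    <!-- Sketch of the shell action that replaced the java action; the script,
         keytab, and queue settings below are placeholders, not my exact workflow -->
    <action name="replace-data">
        <shell xmlns="uri:oozie:shell-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapreduce.job.queuename</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <exec>run-replacer.sh</exec>
            <argument>${inputHDFSDir}/${partDate}</argument>
            <argument>${outputHDFSDir}/data_date=${partDate}</argument>
            <file>run-replacer.sh</file>
            <!-- ship the keytab with the workflow unless it is already on the nodes -->
            <file>user.keytab</file>
            <file>lib/SmsNeWest.jar</file>
        </shell>
        <ok to="load-parts-to-hive-recover"/>
        <error to="fail"/>
    </action>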

I suppose that @Pranshu Pranshu's scenario could be made to work too, but in my situation the shell-action solution is also suitable. If anybody knows how to replace the old tokens with the new ones inside a java action, please show me an example.

Thank you very much for helping!


6 REPLIES

Since Hadoop gives precedence to delegation tokens, we must make sure we log in as a different user, get new tokens, and replace the old ones in the current user's credentials cache; otherwise we won't be able to obtain new ones. This may help.
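
For illustration, one way this is often attempted is to log in from a keytab and run the client code under the new UGI, so it authenticates with a Kerberos TGT instead of the inherited tokens. This is only a sketch with a hypothetical principal and keytab path, and I have not verified it inside an Oozie java action:

    import java.security.PrivilegedExceptionAction;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KeytabRelogin {
        public static void main(String[] args) throws Exception {
            final Configuration conf = new Configuration();
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);

            // Log in from a keytab instead of relying on the delegation tokens
            // that the Oozie launcher put into the container (placeholders below).
            UserGroupInformation ugi = UserGroupInformation
                    .loginUserFromKeytabAndReturnUGI("user@EXAMPLE.COM", "/path/to/user.keytab");

            // Run the HDFS/Tez client code as the freshly logged-in user so it
            // authenticates with the new TGT and can request its own tokens.
            ugi.doAs(new PrivilegedExceptionAction<Void>() {
                @Override
                public Void run() throws Exception {
                    FileSystem fs = FileSystem.get(conf);
                    // ... build and submit the Tez DAG here ...
                    System.out.println("HDFS reachable: " + fs.exists(new Path("/")));
                    return null;
                }
            });
        }
    }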

Explorer

Thank you for your reply!

I can get new credentials by using UserGroupInformation.loginUserFromKeytab (I tried it, but it had no effect). Could you please tell me how to replace the old tokens with the new ones?

I tried to google it, but I didn't find any useful information.

@Roman Boyko Can you try a quick fix described here, setting

set hive.server2.enable.doAs=false;

before running the query. It's also the mode required for Ranger to manage permissions in Hive.

Explorer

Predrag, thank you very much for your answer!

I tried this solution, but it did not help with my problem.


@Neeraj Sabharwal Can you please help us out here by providing an example, if this lies within your scope.
