
Problem running a jar in kerberized cluster

Rising Star

Hi,

I have a problem running a jar via an Oozie shell action on a kerberized cluster.

My jar has the following code for authentication:

		import java.io.IOException;
		import org.apache.hadoop.conf.Configuration;
		import org.apache.hadoop.security.UserGroupInformation;

		// Force Kerberos authentication and log in from the keytab
		Configuration conf = new Configuration();
		conf.set("hadoop.security.authentication", "kerberos");
		UserGroupInformation.setConfiguration(conf);
		try {
			UserGroupInformation.loginUserFromKeytab(principal, keytabPath);
		} catch (IOException e) {
			e.printStackTrace();
		}

My workflow.xml is as follows:

<shell xmlns="uri:oozie:shell-action:0.1">
    <job-tracker>${resourceManager}</job-tracker>
    <name-node>${nameNode}</name-node>
    <configuration>
        <property>
            <name>mapred.job.queue.name</name>
            <value>${queueName}</value>
        </property>
    </configuration>
    <exec>hadoop</exec>
    <argument>jar</argument>
    <argument>jarfile</argument>
    <argument>x.x.x.x.UnzipFile</argument>
    <argument>keytab</argument>
    <argument>${kerberosPrincipal}</argument>
    <argument>${nameNode}</argument>
    <argument>${zipFilePath}</argument>
    <argument>${unzippingDir}</argument>
    <env-var>HADOOP_USER_NAME=${wf:user()}</env-var>
    <file>${workdir}/lib/[keytabFileName]#keytab</file>
    <file>${workdir}/lib/[JarFileName]#jarfile</file>
</shell>

The jar file and the keytab are stored in HDFS, in the lib/ subdirectory of the directory that contains the workflow.xml.

The problem is that across several identical runs of the Oozie workflow, I sometimes get this error:

java.io.IOException: Incomplete HDFS URI, no host: hdfs://[name_node_URI]:8020keytab
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:154)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2795)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2829)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2811)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:390)
    at x.x.x.x.CompressedFilesUtilities.unzip(CompressedFilesUtilities.java:54)
    at x.x.x.x.UnzipFile.main(UnzipFile.java:13)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
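For what it's worth, one common way to end up with exactly this malformed URI (the file name fused onto the port, so the parser finds no host) is building the HDFS path by string concatenation without a "/" separator, e.g. nameNode + "keytab". A minimal, self-contained sketch of that failure mode using only java.net.URI (the class name and values are hypothetical, not the poster's actual code):

```java
import java.net.URI;

// Demonstrates how concatenating a base URI and a file name without a
// '/' separator yields a URI whose authority ("nn:8020keytab") has no
// parseable host -- matching the "Incomplete HDFS URI, no host" error.
public class IncompleteUriDemo {
    static String hostOf(String s) {
        return URI.create(s).getHost();
    }

    public static void main(String[] args) {
        String nameNode = "hdfs://namenode.example.com:8020"; // placeholder value
        // Bug pattern: missing '/' between authority and path;
        // the port "8020keytab" is not numeric, so no host is parsed.
        System.out.println(hostOf(nameNode + "keytab"));   // prints: null
        // Correct: keep the path separator.
        System.out.println(hostOf(nameNode + "/keytab"));  // prints: namenode.example.com
    }
}
```

If CompressedFilesUtilities.unzip builds its hdfs:// path this way, a relative argument like "keytab" (the symlinked file name) would intermittently reproduce this error depending on how the base URI is supplied.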
1 ACCEPTED SOLUTION

Rising Star

Thank you @Matt Andruff for your reply.
I resolved the issue: there was another .jar in the /lib directory containing the same code under a different file name. I'm not sure how it affected the execution of the job, but after removing it everything works fine, for now at least.


2 REPLIES

Expert Contributor

@Zaher - without more details it sounds environmental. You left out the Java code that constructs keytabPath, which might help diagnose the issue. Have you considered adding logging to the exception handler to show what value keytabPath is set to when the job fails? It might help you track down the problem.
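The extra logging suggested above could look something like this: a minimal sketch (the helper name and message format are hypothetical) that reports what keytabPath resolved to and whether the file is actually present, printed just before the Kerberos login is attempted:

```java
import java.io.File;

// Hypothetical diagnostic helper: describe the resolved keytab path so a
// failing run shows whether the symlinked keytab file was really there.
public class KeytabCheck {
    static String describeKeytab(String keytabPath) {
        File keytab = new File(keytabPath);
        return "keytabPath=" + keytab.getAbsolutePath()
                + " exists=" + keytab.exists()
                + " readable=" + keytab.canRead();
    }

    public static void main(String[] args) {
        // Call this right before UserGroupInformation.loginUserFromKeytab(...)
        System.err.println(describeKeytab("keytab"));
    }
}
```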
