
How to resolve ${nameservice2} in oozie.launcher.mapreduce.job.hdfs-servers without changing the Oozie Server configuration

Super Collaborator

With two Kerberized clusters in the same REALM, each with NameNode HA enabled, how can ${nameservice2} be resolved in oozie.launcher.mapreduce.job.hdfs-servers without changing the Oozie Server configuration?

The workflow runs against ${defaultFS1}.

Workflow:

<workflow-app xmlns='uri:oozie:workflow:0.3' name='shell-wf'>
	<start to="copyHFilesToRemoteClusterAction"/> 
	<action name="copyHFilesToRemoteClusterAction">
		<distcp xmlns="uri:oozie:distcp-action:0.1">
			<job-tracker>${jobTracker}</job-tracker>
			<name-node>${defaultFS1}</name-node>
			<configuration>
			<property>
				<name>dfs.nameservices</name>
				<value>${nameService1},${nameService2}</value>
			</property>
			<property>
				<name>dfs.ha.namenodes.${nameService2}</name>
				<value>${nn21},${nn22}</value>
			</property>
			<property>
				<name>dfs.namenode.rpc-address.${nameService2}.${nn21}</name>
				<value>${nn21_fqdn}:8020</value>
			</property>
			<property>
				<name>dfs.namenode.rpc-address.${nameService2}.${nn22}</name>
				<value>${nn22_fqdn}:8020</value>
			</property>
			<property>
				<name>dfs.client.failover.proxy.provider.${nameService2}</name>
				<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
			</property>
			<property>
				<name>oozie.launcher.mapreduce.job.hdfs-servers</name>
				<value>${defaultFS1},${defaultFS2}</value>
			</property>
			</configuration>
			<arg>${defaultFS1}/${workDir}</arg>
			<arg>${defaultFS2}/${workDir}</arg>
		</distcp>
		<ok to="end"/>
		<error to="fail"/>
	</action>
	<kill name="fail">
		<message>Map/Reduce failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
	</kill>
	<end name="end"/>
</workflow-app>

Error:

2016-02-16 15:37:30,276 WARN ActionStartXCommand:546 - USER[bborah1] GROUP[-] TOKEN[] APP[shell-wf] JOB[0000xyz-000000000000000-oozie-oozi-W] ACTION[0000xyz-000000000000000-oozie-oozi-W@copyHFilesToRemoteClusterAction] Error starting action [copyHFilesToRemoteClusterAction].
 ErrorType [TRANSIENT], ErrorCode [JA001], Message [JA001: ${nameservice2}]

org.apache.oozie.action.ActionExecutorException: JA001: ${nameservice2}
at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:412)
at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:392)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:980)
at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1135)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:228)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
at org.apache.oozie.command.XCommand.call(XCommand.java:281)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:323)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:252)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:174)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.UnknownHostException: ${nameservice2}
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:312)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:178)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:665)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:601)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:148)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:97)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at org.apache.hadoop.mapreduce.JobSubmitter.populateTokenCache(JobSubmitter.java:725)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:462)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:965)
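The trace shows where this fails: during launcher job submission, TokenCache.obtainTokensForNamenodes requests a delegation token for every filesystem listed in mapreduce.job.hdfs-servers, and because the submitting configuration carries no HA client settings for the second nameservice, its logical name cannot be mapped to NameNode addresses and is treated as a plain hostname. A quick sanity check from the submitting host (the nameservice name below is a placeholder, not from this thread):

	# Is the logical nameservice known to the client configuration?
	hdfs getconf -confKey dfs.ha.namenodes.nameservice2

	# Can the logical URI actually be reached with that configuration?
	hadoop fs -ls hdfs://nameservice2/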
1 ACCEPTED SOLUTION

Super Collaborator

This can be solved by prefixing the properties with "oozie.launcher.".

<configuration>
	<property>
		<name>oozie.launcher.dfs.nameservices</name>				
		<value>${nameService1},${nameService2}</value>			
	</property>
	<property>
		<name>oozie.launcher.dfs.ha.namenodes.${nameService2}</name>
		<value>${nn21},${nn22}</value>
	</property>
	<property>
		<name>oozie.launcher.dfs.namenode.rpc-address.${nameService2}.${nn21}</name>		
		<value>${nn21_fqdn}:8020</value>
	</property>
	<property>
		<name>oozie.launcher.dfs.namenode.rpc-address.${nameService2}.${nn22}</name>
		<value>${nn22_fqdn}:8020</value>			
	</property>
	<property>
		<name>oozie.launcher.dfs.client.failover.proxy.provider.${nameService2}</name>
		<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
	</property>
	<property>
		<name>oozie.launcher.mapreduce.job.hdfs-servers</name>
		<value>${defaultFS1},${defaultFS2}</value>
	</property>
</configuration>
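
For reference, a job.properties along these lines would supply the variables used above; every host, port, and path here is a placeholder rather than a value from this thread:

	# Hypothetical job.properties for the workflow above; all names are placeholders.
	nameService1=nameservice1
	nameService2=nameservice2
	nn21=nn1
	nn22=nn2
	nn21_fqdn=nn1.cluster2.example.com
	nn22_fqdn=nn2.cluster2.example.com
	defaultFS1=hdfs://nameservice1
	defaultFS2=hdfs://nameservice2
	jobTracker=rm.cluster1.example.com:8050
	workDir=tmp/hfiles
	oozie.use.system.libpath=true
	oozie.wf.application.path=${defaultFS1}/user/${user.name}/shell-wf

The oozie.launcher. prefix tells Oozie to copy each setting into the launcher job's configuration, which is why the launcher JVM can then resolve the remote logical nameservice when it requests delegation tokens.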


4 REPLIES

Master Mentor

@Saumil Mayani you can try overriding the property in workflow.xml or job.properties; it will take precedence and not affect the whole Oozie Server configuration. An example follows.
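
For instance, the override can live in the action's <configuration> block, as the question's workflow already does:

	<property>
		<name>oozie.launcher.mapreduce.job.hdfs-servers</name>
		<value>${defaultFS1},${defaultFS2}</value>
	</property>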

Super Collaborator

@Artem Ervits how would oozie.launcher.mapreduce.job.hdfs-servers resolve ${defaultFS2}?

From job.properties, I am passing e.g. hdfs://xyz, where dfs.nameservices = xyz. This xyz is not a host name, hence the error "Caused by: java.net.UnknownHostException: xyz". Apologies for the confusion. I do have all the variables in job.properties replaced to match the cluster.
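
In other words, hdfs://xyz is a logical HA URI: it resolves only when the client configuration defines the dfs.* keys for the nameservice xyz. A quick illustration (using the same placeholder xyz):

	# Succeeds only if dfs.nameservices, dfs.ha.namenodes.xyz, and the
	# rpc-address/failover-proxy-provider keys for xyz are in the client config:
	hadoop fs -ls hdfs://xyz/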

Master Mentor
@Saumil Mayani

Do you have the defaultFS2 property set in job.properties?

defaultFS2=hdfs://...

Super Collaborator

@Artem Ervits yes, defaultFS2=hdfs://... is set in job.properties.
