
Oozie Spark - AbstractMethodError: org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider


Hi All,

 

We are using HDP 3.1.0 and trying to run a Spark 2.3.2 job through Oozie, but it fails with the error below. Please let us know what we are missing.
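For context, we submit the workflow with the Oozie CLI and a job.properties that defines the parameters referenced in the workflow. The values below are anonymized placeholders, not our real hosts or paths:

nameNode=hdfs://mycluster
resourceManager=resourcemanager-host:8050
baseDir=/user/abc
sparkMaster=yarn
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}${baseDir}/workflows/spark
hiveConnectionString=jdbc:hive2://hiveserver-host:10000/default
hiveDbName=abc_db
hiveDbUser=abc_user
hiveDbPassword=********

oozie job -oozie http://oozie-host:11000/oozie -config job.properties -run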

 

Our workflow.xml:

 

<workflow-app name="ABC" xmlns="uri:oozie:workflow:0.4">
  <parameters>
    <property>
      <name>sparkMaster</name>
      <value>yarn</value>
    </property>
    <property>
      <name>oozie.use.system.libpath</name>
      <value>true</value>
    </property>
  </parameters>
  <start to="Demand_History_Rollup"/>
  <action name="Demand_History_Rollup">
    <spark xmlns="uri:oozie:spark-action:0.1">
      <job-tracker>${resourceManager}</job-tracker>
      <name-node>${nameNode}</name-node>
      <configuration>
        <property>
          <name>mapred.job.queue.name</name>
          <value>default</value>
        </property>
        <property>
          <name>oozie.use.system.libpath</name>
          <value>true</value>
        </property>
        <property>
          <name>oozie.libpath</name>
          <value>${nameNode}/user/oozie/share/lib*</value>
        </property>
        <property>
          <name>oozie.action.sharelib.for.spark</name>
          <value>spark,hive2</value>
        </property>
      </configuration>
      <master>yarn</master>
      <mode>cluster</mode>
      <name>Demand_History_Rollup</name>
      <class>com.xxx.ABC</class>
      <jar>${nameNode}${baseDir}/workflows/spark/abc.jar</jar>
      <spark-opts>--files ${nameNode}${baseDir}/workflows/hive-site.xml,${nameNode}${baseDir}/workflows/core-site.xml,${nameNode}${baseDir}/workflows/hdfs-site.xml</spark-opts>
      <arg>HIVE_JDBC_STRING=${hiveConnectionString}</arg>
      <arg>HIVE_DB_NAME=${hiveDbName}</arg>
      <arg>HIVE_DB_USER=${hiveDbUser}</arg>
      <arg>HIVE_DB_PASSWORD=${hiveDbPassword}</arg>
      <arg>HADOOP_USER_NAME=${hiveDbUser}</arg>
      <arg>SPARK_MASTER=${sparkMaster}</arg>
    </spark>
    <ok to="end"/>
    <error to="kill"/>
  </action>

  <kill name="kill">
    <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
  </kill>
  <end name="end"/>
</workflow-app>
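Since the action pulls spark and hive2 from the system sharelib, the sharelib contents can be checked like this (the Oozie URL is a placeholder for our server):

oozie admin -oozie http://oozie-host:11000/oozie -shareliblist spark
hdfs dfs -ls /user/oozie/share/lib/lib_*/spark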

The job fails in the Spark driver (YARN ApplicationMaster) with the following error:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/hadoop/yarn/local/filecache/83112/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/hadoop/yarn/local/filecache/83012/hive-warehouse-connector-assembly-1.0.0.3.1.0.0-78.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/hadoop/yarn/local/filecache/83123/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" java.lang.AbstractMethodError: org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy()Lorg/apache/hadoop/io/retry/FailoverProxyProvider$ProxyInfo;
	at org.apache.hadoop.io.retry.RetryInvocationHandler$ProxyDescriptor.<init>(RetryInvocationHandler.java:197)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:328)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:322)
	at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:59)
	at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:147)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:510)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:453)
	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$7$$anonfun$apply$3.apply(ApplicationMaster.scala:234)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$7$$anonfun$apply$3.apply(ApplicationMaster.scala:232)
	at scala.Option.foreach(Option.scala:257)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$7.apply(ApplicationMaster.scala:232)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$7.apply(ApplicationMaster.scala:197)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$5.run(ApplicationMaster.scala:815)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
	at org.apache.spark.deploy.yarn.ApplicationMaster.doAsUser(ApplicationMaster.scala:814)
	at org.apache.spark.deploy.yarn.ApplicationMaster.<init>(ApplicationMaster.scala:197)
	at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:838)
	at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
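From what we have read, an AbstractMethodError here means the JVM loaded a version of ConfiguredFailoverProxyProvider that does not implement getProxy() with the signature RetryInvocationHandler expects, which points to mixed Hadoop client versions on the classpath (for example, older Hadoop classes bundled into our assembly jar, or pulled in by a sharelib jar such as the hive-warehouse-connector assembly). These are the checks we plan to run; the paths are examples from our cluster layout and may differ:

# look for stray Hadoop jars in the Spark sharelib
hdfs dfs -ls /user/oozie/share/lib/lib_*/spark | grep -i hadoop

# confirm which getProxy() signature the cluster's HDFS client classes expose
javap -cp '/usr/hdp/3.1.0.0-78/hadoop-hdfs/*' \
  org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider | grep getProxy

# check whether our fat jar bundles its own copy of the class
jar tf abc.jar | grep ConfiguredFailoverProxyProvider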
