Exception submitting spark job in HDP 3.0


Hi,

I am getting the following exception on one node when submitting a Spark job; the same job runs fine when submitted from any other node. Any help would be highly appreciated.

19/07/31 15:56:19 INFO Executor: Starting executor ID driver on host localhost
19/07/31 15:56:19 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35492.
19/07/31 15:56:19 INFO NettyBlockTransferService: Server created on hdata5.dom.local:35492
19/07/31 15:56:19 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/07/31 15:56:19 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, hdata5.dom.local, 35492, None)
19/07/31 15:56:19 INFO BlockManagerMasterEndpoint: Registering block manager hdata5.dom.local:35492 with 366.3 MB RAM, BlockManagerId(driver, hdata5.dom.local, 35492, None)
19/07/31 15:56:19 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, hdata5.dom.local, 35492, None)
19/07/31 15:56:19 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, hdata5.dom.local, 35492, None)
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;ZLjava/lang/String;Ljava/lang/String;Ljava/lang/Class;)Lorg/apache/hadoop/io/retry/RetryPolicy;
        at org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:318)
        at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:235)
        at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:139)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:510)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:453)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3371)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:477)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:468)
        at org.apache.spark.util.Utils$.getHadoopFileSystem(Utils.scala:1897)
        at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:74)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:520)
        at com.dom.pipeline.spark.MergeDom$.main(MergeDom.scala:100)
        at com.dom.pipeline.spark.MergeDom.main(MergeDom.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:904)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/07/31 15:56:20 INFO DiskBlockManager: Shutdown hook called
19/07/31 15:56:20 INFO ShutdownHookManager: Shutdown hook called
19/07/31 15:56:20 INFO ShutdownHookManager: Deleting directory /tmp/spark-9dc84539-36a1-40b3-b991-ccf28473cea1
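In case it helps with diagnosis: since the NoSuchMethodError names org.apache.hadoop.io.retry.RetryUtils, here is a minimal check (a sketch, run from spark-shell or a tiny driver on the failing node) to see which jar that class is actually loaded from, so it can be compared against a working node:

// Sketch: locate the jar providing the class from the NoSuchMethodError.
// On a healthy HDP 3.0 node this should resolve to the HDP client libraries;
// any other path would indicate a conflicting/stale Hadoop jar on this node.
val cls = Class.forName("org.apache.hadoop.io.retry.RetryUtils")
println(cls.getProtectionDomain.getCodeSource.getLocation)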