Spark (Standalone) error local class incompatible: stream classdesc serialVersionUID

Explorer

I'm trying to use Spark (standalone) to load data into Hive tables. The Avro schema is loaded successfully, and I can see on the Spark UI page that my applications finish running; however, the applications end up in the KILLED state.

This is the stderr.log from the Spark Web UI page, via Cloudera Manager:

15/03/25 06:15:58 ERROR Executor: Exception in task 1.3 in stage 2.0 (TID 10)
java.io.InvalidClassException: org.apache.spark.rdd.PairRDDFunctions; local class incompatible: stream classdesc serialVersionUID = 8789839749593513237, local class serialVersionUID = -4145741279224749316
    at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:617)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
15/03/25 06:15:59 ERROR CoarseGrainedExecutorBackend: Driver Disassociated [akka.tcp://sparkExecutor@HadoopNode01.local:48707] -> [akka.tcp://sparkDriver@HadoopNode02.local:54550] disassociated! Shutting down.

Any help will be greatly appreciated.


Thanks

22 Replies

Explorer

<<Sorry duplicate post>>

Master Collaborator

This generally means you're mixing two versions of Spark somehow. Are you sure your app isn't also trying to bundle Spark? Are you using the CDH Spark, and not your own compiled version?

Explorer

I am using Spark (Standalone) from the latest Cloudera CDH version. Once my cluster is up and running, I select the "Add a Service" option in Cloudera Manager and add the Spark (Standalone) service. Could you please clarify what you mean by "my app trying to bundle Spark"? My application depends on Spark as installed by Cloudera CDH; the application does not come with Spark.

Thanks

Master Collaborator

I mean: do you build your app with a dependency on Spark, and if so, what version? And have you marked it as 'provided' so that it is not included in the JAR you submit?

Explorer

Yes, my application is installed with a dependency on Spark; if Spark (Standalone) is not present, my app fails to install. I do not specify any Spark version; it takes whatever version is available from Cloudera Manager.

Where do I mark it as 'provided'?

How do I check the Spark version in Cloudera Manager?

I do not submit any JARs for Spark through my application.

The application I'm trying to install on CDH is Oracle Big Data Discovery; it is tightly coupled with Cloudera CDH and depends on Spark for data processing.

Master Collaborator

Hm, how do you compile your app? Usually you create a Maven or SBT project to declare its dependencies, which should include a "provided" dependency on the same version of Spark that is on your cluster. How do you submit your application? spark-submit? You are submitting a JAR to run your app, right?
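
For reference, here is a minimal build.sbt sketch of what a 'provided' Spark dependency looks like. The project name, Scala version, and Spark version below are placeholders, not taken from this thread; match them to whatever the CDH parcel actually ships (e.g. Spark 1.2.0 on CDH 5.3.x):

// build.sbt -- minimal sketch; names and versions are examples only
name := "my-spark-app"                  // hypothetical project name

scalaVersion := "2.10.4"                // Spark 1.x is built against Scala 2.10

// 'provided' keeps spark-core out of the JAR you submit, so the executors
// use the cluster's own Spark classes instead of a second, bundled copy
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.0" % "provided"

With Maven, the equivalent is setting <scope>provided</scope> on the spark-core dependency in your pom.xml.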

Explorer

My application comes prepackaged from Oracle; I don't find any 'provided' dependency, though I'm still checking. Where can I find the Spark version that is installed via Cloudera? Is there a way to make it upward/downward compatible with other versions? My application uses not just Spark; it also uses Oozie, HDFS, Hive, and YARN.

Explorer

OK, so I deleted my entire cluster, Hadoop, and my application, and reinstalled everything. Now I don't see a version-mismatch error, but I have a different Spark-related error. I have one Spark master node and one Spark worker node. Please find the errors below.

MASTER NODE ERROR (hadoop01.mycompany.local)

2015-03-30 04:22:52,919 INFO org.apache.spark.deploy.master.Master: akka.tcp://sparkDriver@hadoop02.mycompany.local:55921 got disassociated, removing it.
2015-03-30 04:22:52,922 INFO org.apache.spark.deploy.master.Master: akka.tcp://sparkDriver@hadoop02.mycompany.local:55921 got disassociated, removing it.
2015-03-30 04:22:52,926 ERROR akka.remote.EndpointWriter: AssociationError [akka.tcp://sparkMaster@hadoop01.mycompany.local:7077] -> [akka.tcp://sparkDriver@hadoop02.mycompany.local:55921]: Error [Association failed with [akka.tcp://sparkDriver@hadoop02.mycompany.local:55921]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkDriver@hadoop02.mycompany.local:55921]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: hadoop02.mycompany.local/192.168.209.172:55921
]

*******************************************************************************************************************

WORKER NODE ERROR (hadoop02.mycompany.local)

2015-03-30 04:22:42,840 INFO org.apache.spark.deploy.worker.Worker: Asked to launch executor app-20150330042242-0000/0 for EDP
2015-03-30 04:22:42,892 INFO org.apache.spark.deploy.worker.ExecutorRunner: Launch command: "/usr/java/jdk1.7.0_67-cloudera/bin/java" "-cp" "::/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/conf:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/lib/spark-assembly.jar:/var/run/cloudera-scm-agent/process/76-spark-SPARK_WORKER/hadoop-conf:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hadoop/client/*:/var/run/cloudera-scm-agent/process/76-spark-SPARK_WORKER/hadoop-conf:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hadoop/libexec/../../hadoop/lib/*:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hadoop/libexec/../../hadoop/.//*:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hadoop/../hadoop-hdfs/./:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hadoop/../hadoop-hdfs/lib/*:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hadoop/../hadoop-hdfs/.//*:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hadoop/../hadoop-yarn/lib/*:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hadoop/../hadoop-yarn/.//*:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hadoop/../hadoop-mapreduce/lib/*:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hadoop/../hadoop-mapreduce/.//*:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/lib/scala-library.jar:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/lib/scala-compiler.jar:/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/spark/lib/jline.jar" "-XX:MaxPermSize=128m" "-Dspark.driver.port=55921" "-Xms20480M" "-Xmx20480M" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "akka.tcp://sparkDriver@hadoop02.mycompany.local:55921/user/CoarseGrainedScheduler" "0" "hadoop02.mycompany.local" "1" "app-20150330042242-0000" "akka.tcp://sparkWorker@hadoop02.mycompany.local:7078/user/Worker"
2015-03-30 04:22:53,338 INFO org.apache.spark.deploy.worker.Worker: Asked to kill executor app-20150330042242-0000/0
2015-03-30 04:22:53,338 INFO org.apache.spark.deploy.worker.ExecutorRunner: Runner thread for executor app-20150330042242-0000/0 interrupted
2015-03-30 04:22:53,339 INFO org.apache.spark.deploy.worker.ExecutorRunner: Killing process!
2015-03-30 04:22:53,596 INFO org.apache.spark.deploy.worker.Worker: Executor app-20150330042242-0000/0 finished with state KILLED exitStatus 1
2015-03-30 04:22:53,603 INFO akka.actor.LocalActorRef: Message [akka.remote.transport.ActorTransportAdapter$DisassociateUnderlying] from Actor[akka://sparkWorker/deadLetters] to Actor[akka://sparkWorker/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsparkWorker%40192.168.209.172%3A54963-2#1273102661] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
2015-03-30 04:22:53,612 ERROR akka.remote.EndpointWriter: AssociationError [akka.tcp://sparkWorker@hadoop02.mycompany.local:7078] -> [akka.tcp://sparkExecutor@hadoop02.mycompany.local:37271]: Error [Association failed with [akka.tcp://sparkExecutor@hadoop02.mycompany.local:37271]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkExecutor@hadoop02.mycompany.local:37271]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: hadoop02.mycompany.local/192.168.209.172:37271
]

Thanks!

Master Collaborator

It sounds like a network config problem:

Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: hadoop02.mycompany.local/192.168.209.172:37271