New Contributor
Posts: 3
Registered: ‎05-11-2016
Accepted Solution

Spark 2.0 App not working on cluster

Hi all


We have Spark 2.0 (*) installed from the Cloudera parcel on our cluster (CDH 5.9.0).

When running a fairly simple app that just reads in some CSV files and does a groupBy, I always receive errors.

The App is submitted with:

spark2-submit --class my_class myapp-1.0-SNAPSHOT.jar

And I receive the following error message: org.apache.commons.lang3.time.FastDateFormat; local class incompatible: stream classdesc serialVersionUID = 2, local class serialVersionUID = 1
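(Background, not from the original post:) that message comes from Java serialization. Every Serializable class carries a serialVersionUID, and deserialization fails when the ID recorded in the byte stream differs from the ID of the locally loaded class, which is what happens when two different builds of commons-lang3 are involved. A minimal sketch with two made-up stand-in classes instead of FastDateFormat itself:

```scala
import java.io.ObjectStreamClass

// Two stand-in classes with deliberately different serialVersionUIDs,
// mimicking two incompatible builds of the same library class.
@SerialVersionUID(1L)
class FormatV1 extends Serializable

@SerialVersionUID(2L)
class FormatV2 extends Serializable

object SuidDemo {
  def main(args: Array[String]): Unit = {
    // ObjectStreamClass reports the ID that serialization compares.
    val v1 = ObjectStreamClass.lookup(classOf[FormatV1]).getSerialVersionUID
    val v2 = ObjectStreamClass.lookup(classOf[FormatV2]).getSerialVersionUID
    println(s"v1 = $v1, v2 = $v2") // prints v1 = 1, v2 = 2
  }
}
```

If an object of FormatV2 were serialized and then deserialized against FormatV1, the JVM would raise exactly this kind of "local class incompatible" error.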

I figured out that multiple versions of commons-lang3 ship with the Cloudera release, so I modified the spark2-submit call to:

spark2-submit --conf spark.driver.userClassPathFirst=true --conf spark.executor.userClassPathFirst=true --jars /var/opt/teradata/cloudera/parcels/CDH/jars/commons-lang3-3.3.2.jar --class my_class myapp-1.0-SNAPSHOT.jar
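(Not from the original thread:) userClassPathFirst is known to be fragile for classes that Spark itself references. A variant that is sometimes tried instead is to prepend the single matching jar to the driver and executor classpaths and leave userClassPathFirst off. The jar path below is the one from the post; whether this resolves the mismatch depends on which lang3 version this Spark 2 build expects, so treat it as a sketch:

```shell
# Sketch of an alternative: put one matching commons-lang3 jar at the
# front of both classpaths instead of flipping userClassPathFirst.
spark2-submit \
  --conf spark.driver.extraClassPath=/var/opt/teradata/cloudera/parcels/CDH/jars/commons-lang3-3.3.2.jar \
  --conf spark.executor.extraClassPath=/var/opt/teradata/cloudera/parcels/CDH/jars/commons-lang3-3.3.2.jar \
  --class my_class myapp-1.0-SNAPSHOT.jar
```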

This way I could get rid of the first error message, but now I get:

java.lang.ClassCastException: cannot assign instance of org.apache.commons.lang3.time.FastDateFormat to field org.apache.spark.sql.execution.datasources.csv.CSVOptions.dateFormat of type org.apache.commons.lang3.time.FastDateFormat in instance of org.apache.spark.sql.execution.datasources.csv.CSVOptions
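(Background, not from the post:) this second error is the classic signature of one class being loaded by two different classloaders. With userClassPathFirst=true, the supplied jar defines its own copy of FastDateFormat, while Spark's CSVOptions field was resolved against Spark's copy; the JVM treats the two as unrelated types even if the bytes are identical. A minimal sketch of the underlying JVM rule, using scala.Option as an arbitrary stand-in class:

```scala
import java.io.File
import java.net.URLClassLoader

// Sketch: the same class, loaded by two independent classloaders,
// yields two distinct Class objects; a cast between instances of
// them fails just like the CSVOptions error above.
object LoaderDemo {
  def main(args: Array[String]): Unit = {
    val urls = System.getProperty("java.class.path")
      .split(File.pathSeparator)
      .map(p => new File(p).toURI.toURL)
    // parent = null, so each loader defines scala.Option for itself
    val loaderA = new URLClassLoader(urls, null)
    val loaderB = new URLClassLoader(urls, null)
    val a = Class.forName("scala.Option", false, loaderA)
    val b = Class.forName("scala.Option", false, loaderB)
    println(a.getName == b.getName) // true: same fully-qualified name
    println(a == b)                 // false: different defining loaders
  }
}
```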

The App was written in Scala and compiled using Maven. The source code (**) and the maven pom file (***) are attached at the bottom of this post.
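(Not part of the original post:) when the conflicting dependency is only needed by your own code, a common Maven-side alternative is to shade and relocate it inside the application jar so it can never collide with the cluster's copy. A sketch using the standard maven-shade-plugin; the relocated package name is made up. Note this would not fix a mismatch inside Spark's own CSVOptions, since that class lives in Spark; it only removes your copy from the conflict:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <!-- hypothetical relocation: move our lang3 copy out of the way -->
            <pattern>org.apache.commons.lang3</pattern>
            <shadedPattern>myapp.shaded.commons.lang3</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```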

Does anybody have an idea on solving this issue?

Any help is highly appreciated!


Thanks a lot in advance!

Kind Regards



$ spark2-submit --version
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.0.0.cloudera1

Branch HEAD
Compiled by user jenkins on 2016-12-06T18:34:13Z
Revision 2389f44e0185f33969d782ed09b41ae45fe30324


import org.apache.spark.sql.SparkSession

object my_class {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().getOrCreate()

    val csv ="header", value = false).csv("/path/to/folder/with/some/csv/files/")

    val pivot = csv.groupBy("_c0").count()

    // an action must have followed here for the error to be triggered, e.g.:
  }
}



(pom.xml attachment truncated)






Cloudera Employee
Posts: 399
Registered: ‎08-11-2014

Re: Spark 2.0 App not working on cluster

This is generally due to a difference between the version of commons-lang3 your application uses and the one Spark uses. See, for example, the Zeppelin issue that hit the same problem.

I believe you'll find that it's resolved in the latest Spark 2 release for CDH.

New Contributor
Posts: 3
Registered: ‎05-11-2016

Re: Spark 2.0 App not working on cluster

Thanks a lot.

With the workaround given at the end of the Zeppelin issue, it works for me now.