"ClassNotFoundException: kafka.DefaultSource" with Spark 2.1 Structured Streaming and Kafka 2.1

Rising Star

Hi,

I use CDH 5.10.1, Spark 2.1, and Kafka 2.1.

When I try a simple ETL program:

val mystream = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "mybroker:9092")
  .option("subscribe", "mytopic")
  .load()
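
For reference, load() only defines the source; the query needs a sink before it actually runs. A minimal sketch using a console sink for testing (the sink choice is an assumption, not part of the original snippet):

// Kafka keys/values arrive as binary; cast to strings for readable output.
val query = mystream
  .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .writeStream
  .format("console")
  .start()

query.awaitTermination()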

With these dependencies in build.sbt:

scalaVersion := "2.11.8"

lazy val spark = Seq(
  "spark-core",
  "spark-hive",
  "spark-streaming",
  "spark-sql",
  "spark-streaming-kafka-0-10"
).map("org.apache.spark" %% _ % "2.1.0.cloudera1" % "provided")

libraryDependencies += "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.1.0.cloudera1"
libraryDependencies += "org.apache.kafka" % "kafka-clients" % "0.10.0-kafka-2.1.0"
libraryDependencies += "org.apache.kafka" %% "kafka" % "0.10.0-kafka-2.1.0"
libraryDependencies ++= spark

resolvers ++= Seq(
  "Cloudera Repository" at "https://repository.cloudera.com/artifactory/cloudera-repos/"
)

When submitting my app, I get this error:
Exception in thread "main" java.lang.ClassNotFoundException: Failed to find data source: kafka. Please find packages at http://spark.apache.org/third-party-projects.html
        at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:594)
        at org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:86)
        at org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:86)
        at org.apache.spark.sql.execution.datasources.DataSource.sourceSchema(DataSource.scala:197)
        at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo$lzycompute(DataSource.scala:87)
        at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo(DataSource.scala:87)
        at org.apache.spark.sql.execution.streaming.StreamingRelation$.apply(StreamingRelation.scala:30)
        at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:124)
        at myetl.Main$.main(Main.scala:20)
        at myetl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: kafka.DefaultSource
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25$$anonfun$apply$13.apply(DataSource.scala:579)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25$$anonfun$apply$13.apply(DataSource.scala:579)
        at scala.util.Try$.apply(Try.scala:192)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25.apply(DataSource.scala:579)
        at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$25.apply(DataSource.scala:579)
        at scala.util.Try.orElse(Try.scala:84)
        at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:579)
        ... 18 more

Please help...
2 Replies

Rising Star

OK, the problem is that spark-sql-kafka-0-10_2.11-2.1.0.cloudera1.jar is not loaded for some reason, so lookupDataSource does not see KafkaSourceProvider among the classes extending the trait DataSourceRegister, and therefore never finds its

override def shortName(): String = "kafka"

When no registered provider matches the name, Spark falls back to appending ".DefaultSource" to the given provider name and loading that class reflectively, which is why I got ClassNotFoundException for kafka.DefaultSource.
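A rough sketch of that lookup logic (paraphrased for illustration, not the actual Spark source): providers are discovered through java.util.ServiceLoader and matched on shortName(); when nothing matches, Spark tries "<provider>.DefaultSource" as a class name.

import java.util.ServiceLoader
import scala.collection.JavaConverters._
import org.apache.spark.sql.sources.DataSourceRegister

// Paraphrase of DataSource.lookupDataSource: a registered provider whose
// shortName() matches wins; otherwise fall back to "<provider>.DefaultSource",
// which is exactly where the kafka.DefaultSource in the stack trace comes from.
def lookupDataSourceSketch(provider: String): Class[_] = {
  val loader = Thread.currentThread().getContextClassLoader
  ServiceLoader.load(classOf[DataSourceRegister], loader).asScala
    .find(_.shortName().equalsIgnoreCase(provider))
    .map(_.getClass)
    .getOrElse(Class.forName(s"$provider.DefaultSource", true, loader))
}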

 

The solution is to:

1. Get the jar spark-sql-kafka-0-10_2.11-2.1.0.cloudera1.jar, either:

a) manually download it from the Cloudera repo: https://repository.cloudera.com/cloudera/cloudera-repos/org/apache/spark/spark-sql-kafka-0-10_2.11/2...

or

b) run spark2-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.1.0 (which fetches it into the local Ivy cache), as Jacek Laskowski explains here:
https://github.com/jaceklaskowski/spark-structured-streaming-book/blob/master/spark-sql-streaming-Ka...

and then

2. Run your app via spark2-submit --jars ~/.ivy2/cache/org.apache.spark/spark-sql-kafka-0-10_2.11/jars/spark-sql-kafka-0-10_2.11-2.1.0.cloudera1.jar (a full command sketch follows below).
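
Putting step 2 together into a full command: the main class myetl.Main comes from the stack trace above, but the application jar name here is only an assumed example, not from the original post:

spark2-submit \
  --class myetl.Main \
  --jars ~/.ivy2/cache/org.apache.spark/spark-sql-kafka-0-10_2.11/jars/spark-sql-kafka-0-10_2.11-2.1.0.cloudera1.jar \
  target/scala-2.11/myetl_2.11-1.0.jar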

 

 

Very important: remember to set the proper Kafka version (in this case 0.10), either via an environment variable or in the Spark2 service configuration in Cloudera Manager:

https://www.cloudera.com/documentation/spark2/latest/topics/spark2_kafka.html
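
For example, before submitting (the SPARK_KAFKA_VERSION environment variable is the mechanism the linked Cloudera doc describes):

export SPARK_KAFKA_VERSION=0.10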

New Contributor

Thanks for the answer! I was also stuck at this point; it worked for me after downloading the jar.