Support Questions
Find answers, ask questions, and share your expertise

Hit "Exception in thread “main” java.lang.NoClassDefFoundError: org/apache/spark/Logging"

New Contributor

I'm new to Spark. I attempted to run a Spark app (.jar) on CDH 5.8.0-0 on Oracle VirtualBox 5.1.4r110228 that uses Spark Streaming to perform sentiment analysis on Twitter. I created my Twitter account and generated all four required tokens, but I was blocked by the NoClassDefFoundError exception.


I've been googling around for a couple of days. The best advice I found so far is at the URL below, but apparently my environment is still missing something.


What does it mean when a library is present at compile time but missing at runtime? How can we fix this?


What is this Logging library? I came across an article stating that Logging is subject to deprecation. Besides that, I do see log4j in my environment.


In my CDH 5.8, I'm running these versions of software:

 Spark-2.0.0-bin-hadoop2.7 / spark-core_2.10-2.0.0

 jdk-8u101-linux-x64 / jre-8u101-linux-x64


I appended the details of the exception at the end. Here is the procedure I performed to execute the app, and some verification I did after hitting the exception:


  1. Unzip (the Spark app)
  2. cd twitter-streaming
  3. run ./sbt/sbt assembly
  4. Update with your Twitter account


$ cat

export SPARK_HOME=/home/cloudera/spark-2.0.0-bin-hadoop2.7


export CONSUMER_KEY=<my_consumer_key>

export CONSUMER_SECRET=<my_consumer_secret>

export ACCESS_TOKEN=<my_twitterapp_access_token>

export ACCESS_TOKEN_SECRET=<my_twitterapp_access_token>


The script wraps the spark-submit command together with the required credential info in


$ cat



source ./


$SPARK_HOME/bin/spark-submit --class "TwitterStreamingApp" --master local[*] ./target/scala-2.10/twitter-streaming-assembly-1.0.jar $CONSUMER_KEY $CONSUMER_SECRET $ACCESS_TOKEN $ACCESS_TOKEN_SECRET


The log of the assembly process:

[cloudera@quickstart twitter-streaming]$ ./sbt/sbt assembly

Launching sbt from sbt/sbt-launch-0.13.7.jar

[info] Loading project definition from /home/cloudera/workspace/twitter-streaming/project

[info] Set current project to twitter-streaming (in build file:/home/cloudera/workspace/twitter-streaming/)

[info] Including: twitter4j-stream-3.0.3.jar

[info] Including: twitter4j-core-3.0.3.jar

[info] Including: scala-library.jar

[info] Including: unused-1.0.0.jar

[info] Including: spark-streaming-twitter_2.10-1.4.1.jar

[info] Checking every *.class/*.jar file's SHA-1.

[info] Merging files...

[warn] Merging 'META-INF/LICENSE.txt' with strategy 'first'

[warn] Merging 'META-INF/MANIFEST.MF' with strategy 'discard'

[warn] Merging 'META-INF/maven/org.spark-project.spark/unused/' with strategy 'first'

[warn] Merging 'META-INF/maven/org.spark-project.spark/unused/pom.xml' with strategy 'first'

[warn] Merging '' with strategy 'discard'

[warn] Merging 'org/apache/spark/unused/UnusedStubClass.class' with strategy 'first'

[warn] Strategy 'discard' was applied to 2 files

[warn] Strategy 'first' was applied to 4 files

[info] SHA-1: 69146d6fdecc2a97e346d36fafc86c2819d5bd8f

[info] Packaging /home/cloudera/workspace/twitter-streaming/target/scala-2.10/twitter-streaming-assembly-1.0.jar ...

[info] Done packaging.

[success] Total time: 6 s, completed Aug 27, 2016 11:58:03 AM


I'm not sure exactly what it means, but everything looked good when I ran the Hadoop native library check:


$ hadoop checknative -a

16/08/27 13:27:22 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native

16/08/27 13:27:22 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library

Native library checking:

hadoop:  true /usr/lib/hadoop/lib/native/

zlib:    true /lib64/

snappy:  true /usr/lib/hadoop/lib/native/

lz4:     true revision:10301

bzip2:   true /lib64/

openssl: true /usr/lib64/


Here is the console log of my exception:

$ ./

Using Spark's default log4j profile: org/apache/spark/

16/08/28 20:13:23 INFO SparkContext: Running Spark version 2.0.0

16/08/28 20:13:24 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

16/08/28 20:13:24 WARN Utils: Your hostname, quickstart.cloudera resolves to a loopback address:; using instead (on interface eth0)

16/08/28 20:13:24 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address

16/08/28 20:13:24 INFO SecurityManager: Changing view acls to: cloudera

16/08/28 20:13:24 INFO SecurityManager: Changing modify acls to: cloudera

16/08/28 20:13:24 INFO SecurityManager: Changing view acls groups to:

16/08/28 20:13:24 INFO SecurityManager: Changing modify acls groups to:

16/08/28 20:13:24 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(cloudera); groups with view permissions: Set(); users  with modify permissions: Set(cloudera); groups with modify permissions: Set()

16/08/28 20:13:25 INFO Utils: Successfully started service 'sparkDriver' on port 37550.

16/08/28 20:13:25 INFO SparkEnv: Registering MapOutputTracker

16/08/28 20:13:25 INFO SparkEnv: Registering BlockManagerMaster

16/08/28 20:13:25 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-37a0492e-67e3-4ad5-ac38-40448c25d523

16/08/28 20:13:25 INFO MemoryStore: MemoryStore started with capacity 366.3 MB

16/08/28 20:13:25 INFO SparkEnv: Registering OutputCommitCoordinator

16/08/28 20:13:25 INFO Utils: Successfully started service 'SparkUI' on port 4040.

16/08/28 20:13:25 INFO SparkUI: Bound SparkUI to, and started at

16/08/28 20:13:25 INFO SparkContext: Added JAR file:/home/cloudera/workspace/twitter-streaming/target/scala-2.10/twitter-streaming-assembly-1.1.jar at spark:// with timestamp 1472440405882

16/08/28 20:13:26 INFO Executor: Starting executor ID driver on host localhost

16/08/28 20:13:26 INFO Utils: Successfully started service '' on port 41264.

16/08/28 20:13:26 INFO NettyBlockTransferService: Server created on

16/08/28 20:13:26 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver,, 41264)

16/08/28 20:13:26 INFO BlockManagerMasterEndpoint: Registering block manager with 366.3 MB RAM, BlockManagerId(driver,, 41264)

16/08/28 20:13:26 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver,, 41264)

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/Logging

at java.lang.ClassLoader.defineClass1(Native Method)

at java.lang.ClassLoader.defineClass(






at Method)


at java.lang.ClassLoader.loadClass(

at java.lang.ClassLoader.loadClass(

at org.apache.spark.streaming.twitter.TwitterUtils$.createStream(TwitterUtils.scala:44)

at TwitterStreamingApp$.main(TwitterStreamingApp.scala:42)

at TwitterStreamingApp.main(TwitterStreamingApp.scala)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(

at sun.reflect.DelegatingMethodAccessorImpl.invoke(

at java.lang.reflect.Method.invoke(

at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729)

at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)

at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)

at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)

at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Caused by: java.lang.ClassNotFoundException: org.apache.spark.Logging


at java.lang.ClassLoader.loadClass(

at java.lang.ClassLoader.loadClass(

... 23 more

16/08/28 20:13:26 INFO SparkContext: Invoking stop() from shutdown hook

16/08/28 20:13:26 INFO SparkUI: Stopped Spark web UI at

16/08/28 20:13:26 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!

16/08/28 20:13:26 INFO MemoryStore: MemoryStore cleared

16/08/28 20:13:26 INFO BlockManager: BlockManager stopped

16/08/28 20:13:26 INFO BlockManagerMaster: BlockManagerMaster stopped

16/08/28 20:13:26 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!

16/08/28 20:13:26 INFO SparkContext: Successfully stopped SparkContext

16/08/28 20:13:26 INFO ShutdownHookManager: Shutdown hook called

16/08/28 20:13:26 INFO ShutdownHookManager: Deleting directory /tmp/spark-5e29c3b2-74c2-4d89-970f-5be89d176b26


I understand my post is lengthy. I also posted my question on another site:


Your advice or insights are highly appreciated!!




Master Collaborator

You shouldn't use org.apache.spark.Logging in your app at all. That's likely the problem and solution.
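If your app needs logging of its own, a safer pattern is to go through SLF4J directly rather than extending Spark's internal Logging trait (which is private to Spark and no longer on the public classpath in 2.0). A minimal sketch — the object name TwitterStreamingApp is taken from the stack trace above, the body is illustrative:

```scala
import org.slf4j.LoggerFactory

object TwitterStreamingApp {
  // Obtain a logger via SLF4J instead of extending org.apache.spark.Logging;
  // SLF4J is already on the classpath of any Spark application.
  private val log = LoggerFactory.getLogger(getClass)

  def main(args: Array[String]): Unit = {
    log.info("Starting Twitter streaming app with {} arguments", args.length)
    // ... build the StreamingContext and call TwitterUtils.createStream here ...
  }
}
```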

New Contributor
Is it still supported? What jar file should I include to mitigate the dependency? Thanks for your quick response!!

Master Collaborator

Is what supported? Spark has never supported using org.apache.spark.Logging.
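To expand on that: the assembly log above shows spark-streaming-twitter_2.10-1.4.1 being bundled. That artifact was compiled against Spark 1.x, where org.apache.spark.Logging still existed; running it on Spark 2.0.0, where the class was removed, produces exactly this NoClassDefFoundError. One way to fix it is to align every Spark-related dependency on the Spark 2.x line — the Twitter receiver left the Spark distribution in 2.0 and is now published by Apache Bahir. A build.sbt sketch (versions are illustrative; check the Bahir/Spark compatibility matrix for your setup):

```scala
// build.sbt -- keep all Spark artifacts on one Spark version.
scalaVersion := "2.11.8"   // Spark 2.0.0 is built for Scala 2.11 by default

libraryDependencies ++= Seq(
  // "provided": spark-submit supplies these at runtime, so they are
  // compiled against but kept out of the assembly jar.
  "org.apache.spark" %% "spark-core"      % "2.0.0" % "provided",
  "org.apache.spark" %% "spark-streaming" % "2.0.0" % "provided",
  // The Twitter connector moved out of Spark in 2.0; Apache Bahir publishes it.
  "org.apache.bahir" %% "spark-streaming-twitter" % "2.0.0"
)
```

After rebuilding the assembly with matching versions, the jar should no longer reference the removed Logging class.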

New Contributor

I looked at Cloudera's release documentation.


I understand the NoClassDefFoundError exception now.