Spark-submit to deploy the jar to Spark - java.lang.NoSuchMethodError

Hi, I'm working through the following tutorial:

https://es.hortonworks.com/tutorial/deploying-machine-learning-models-using-spark-structured-streami...

The HDP version is HDP-2.5.0.0-1245, the Spark version is 1.6.2, and the Scala version is 2.10.5.

I have reached this step of the tutorial: "Then use spark-submit to deploy the jar to Spark."

I am trying to submit the job, a jar file located in target/main/scala, with the following command:

/usr/hdp/current/spark2-client/bin/spark-submit --class "main.scala.Collect" --master local[4] ./SentimentAnalysis-assembly-2.0.0.jar
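
For reference, the class being submitted is the tutorial's main.scala.Collect. This is a minimal sketch of its entry point, reconstructed from the stack trace below (the body is abbreviated and details may differ from the actual tutorial file; only the SparkSession setup is shown, since that is where the failure occurs):

package main.scala

import org.apache.spark.sql.SparkSession

object Collect {
  def main(args: Array[String]): Unit = {
    // The trace below points at SparkSession$Builder.getOrCreate (Collect.scala:39),
    // so the crash happens while the session and its web UI are being created,
    // before any of the tutorial's streaming logic runs.
    val spark = SparkSession.builder
      .appName("Collect")
      .getOrCreate()

    // ... Kafka structured-streaming logic from the tutorial ...

    spark.stop()
  }
}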

Everything goes well except for the following error:

Exception in thread "main" java.lang.NoSuchMethodError: scala.Predef$.$scope()Lscala/xml/TopScope$;
        at org.apache.spark.ui.jobs.AllJobsPage.<init>(AllJobsPage.scala:39)
        at org.apache.spark.ui.jobs.JobsTab.<init>(JobsTab.scala:38)
        at org.apache.spark.ui.SparkUI.initialize(SparkUI.scala:65)
        at org.apache.spark.ui.SparkUI.<init>(SparkUI.scala:82)
        at org.apache.spark.ui.SparkUI$.create(SparkUI.scala:220)
        at org.apache.spark.ui.SparkUI$.createLiveUI(SparkUI.scala:162)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:452)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2313)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
        at main.scala.Collect$.main(Collect.scala:39)
        at main.scala.Collect.main(Collect.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/05/23 20:47:25 INFO DiskBlockManager: Shutdown hook called
18/05/23 20:47:25 INFO ShutdownHookManager: Shutdown hook called
18/05/23 20:47:25 INFO ShutdownHookManager: Deleting directory /tmp/spark-c691ba05-469e-441b-bb91-61f50d071df3/userFiles-1973ccc1-1ba9-4ae6-aa0d-4a529878f7f5
18/05/23 20:47:25 INFO ShutdownHookManager: Deleting directory /tmp/spark-c691ba05-469e-441b-bb91-61f50d071df3
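
As far as I can tell, scala.Predef.$scope existed (deprecated) in Scala 2.10 and was removed in Scala 2.11, so this NoSuchMethodError usually means that classes compiled against one Scala binary version are running on the scala-library of the other. One way to check whether the assembly itself bundles a Scala runtime (a diagnostic sketch; the jar path is the one from the spark-submit command above):

jar tf ./SentimentAnalysis-assembly-2.0.0.jar | grep 'scala/Predef'

If that prints scala/Predef classes, the fat jar carries its own Scala library in addition to the one shipped with spark2-client.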

My build.sbt file is:

name := "SentimentAnalysis"

version := "2.0.0"

scalaVersion := "2.10.5"//"2.10.4"//

libraryDependencies ++= {
  val sparkVer = "2.1.0"//"1.6.1"//
  Seq(
    "org.apache.spark"     %% "spark-core"              % sparkVer % "provided" withSources(),
    "org.apache.spark"     %% "spark-mllib"             % sparkVer % "provided" withSources(),
    "org.apache.spark"     %% "spark-sql"               % sparkVer % "provided" withSources(),
    "org.apache.spark"     %% "spark-streaming"         % sparkVer % "provided" withSources(),
    "org.apache.spark"     %% "spark-streaming-kafka-0-10" % sparkVer withSources(),
    "org.apache.spark"     %% "spark-sql-kafka-0-10" % sparkVer withSources(),
    "org.apache.kafka"     %% "kafka" % "0.10.0" withSources(),
    "com.typesafe" % "config" % "1.3.1",
    "com.google.code.gson" % "gson" % "2.8.0"
  )
}


assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  // specific first-wins rules, then a catch-all default
  case PathList("org", "apache", xs @ _*) => MergeStrategy.first
  case PathList("javax", "xml", xs @ _*) => MergeStrategy.first
  case PathList("com", "esotericsoftware", xs @ _*) => MergeStrategy.first
  case PathList("com", "google", xs @ _*) => MergeStrategy.first
  case x => MergeStrategy.first
}
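
For clarity, sbt's %% operator appends the Scala binary version from scalaVersion to the artifact name, so with scalaVersion := "2.10.5" the dependencies above resolve to the Scala 2.10 builds, for example:

"org.apache.spark" %% "spark-core" % "2.1.0"  // fetches spark-core_2.10-2.1.0.jar

In other words, the assembly is compiled against the Scala 2.10 artifacts of Spark 2.1.0.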

My spark-defaults.conf file is:

# Generated by Apache Ambari. Mon May 14 19:23:44 2018

spark.driver.extraLibraryPath /usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
spark.eventLog.dir hdfs:///spark2-history/
spark.eventLog.enabled true
spark.executor.extraLibraryPath /usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
spark.history.fs.logDirectory hdfs:///spark2-history/
spark.history.kerberos.enabled true
spark.history.kerberos.keytab /etc/security/keytabs/spark.headless.keytab
spark.history.kerberos.principal sparkuserqPS5joyO2hFxQ0sFUR0Cg@HWQE.HORTONWORKS.COM
spark.history.provider org.apache.spark.deploy.history.FsHistoryProvider
spark.history.ui.port 18081
spark.yarn.historyServer.address sandbox.hortonworks.com:18081
spark.yarn.queue default
spark.driver.extraClassPath /opt/spark-receiver/nifi-spark-receiver-1.0.0.jar:/opt/spark-receiver/nifi-site-to-site-client-1.0.0.jar:/opt/HDF-2.0.0.0-579/lib/nifi-api-1.0.0.2.0.0.0-579.jar:/opt/HDF-2.0.0.0-579/lib/nifi-framework-api-1.0.0.2.0.0.0-579.jar:/opt/HDF-2.0.0.0-579/lib/bootstrap/nifi-utils-1.0.0.2.0.0.0-579.jar:/opt/HDF-2.0.0.0-579/work/nar/framework/nifi-framework-nar-1.0.0.2.0.0.0-579.nar-unpacked/META-INF/bundled-dependencies/nifi-client-dto-1.0.0.2.0.0.0-579.jar:/root/.ivy2/cache/org.apache.spark/spark-streaming_2.10/jars/spark-streaming_2.10-2.1.0.jar:/root/.ivy2/cache/org.apache.spark/spark-sql_2.10/jars/spark-sql_2.10-2.1.0.jar:/root/.ivy2/cache/org.apache.spark/spark-core_2.10/jars/spark-core_2.10-2.1.0.jar:/root/.ivy2/cache/org.apache.spark/spark-core_2.10/jars/spark-core_2.10-1.6.2.jar
spark.driver.allowMultipleContexts = true
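
Note that spark.driver.extraClassPath above also pins Scala 2.10 builds of spark-core (both 1.6.2 and 2.1.0) from the local Ivy cache ahead of whatever spark2-client ships. The Spark and Scala versions that the spark2-client installation actually uses can be confirmed with (a diagnostic sketch; the jars/ directory is assumed, as that is the usual layout for Spark 2.x):

/usr/hdp/current/spark2-client/bin/spark-submit --version
ls /usr/hdp/current/spark2-client/jars/ | grep scala-library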

If anyone can help me, thank you very much.
