Run Spark job and get "Found both spark.driver.extraClassPath and SPARK_CLASSPATH. Use only the former." Error

New Contributor

I am using the Spark service in HDP 2.5.0.0. I need to set `SPARK_CLASSPATH` in spark-env to include a secondary fs client jar file, but I also need to specify `--driver-class-path` in the spark-submit command. So I get the following error in the output. Do you have any idea how to solve this issue?

```
2017-03-16 11:16:11,731|INFO|MainThread|machine.py:142 - run()|This is deprecated in Spark 1.0+.
2017-03-16 11:16:11,732|INFO|MainThread|machine.py:142 - run()|
2017-03-16 11:16:11,732|INFO|MainThread|machine.py:142 - run()|Please instead use:
2017-03-16 11:16:11,732|INFO|MainThread|machine.py:142 - run()|- ./spark-submit with --driver-class-path to augment the driver classpath
2017-03-16 11:16:11,732|INFO|MainThread|machine.py:142 - run()|- spark.executor.extraClassPath to augment the executor classpath
2017-03-16 11:16:11,733|INFO|MainThread|machine.py:142 - run()|
2017-03-16 11:16:11,733|INFO|MainThread|machine.py:142 - run()|17/03/16 11:16:11 WARN SparkConf: Setting 'spark.executor.extraClassPath' to '/usr/hdp/2.5.0.0-1245/hadoop/lib/viprfs-client-3.1.0.0-hadoop-2.7.jar:/usr/hdp/2.5.0.0-1245/hadoop/lib/guava-11.0.2.jar:/usr/hdp/current/spark-client/lib/spark-examples-1.6.2.2.5.0.0-1245-hadoop2.7.3.2.5.0.0-1245.jar' as a work-around.
2017-03-16 11:16:11,738|INFO|MainThread|machine.py:142 - run()|17/03/16 11:16:11 ERROR SparkContext: Error initializing SparkContext.
2017-03-16 11:16:11,739|INFO|MainThread|machine.py:142 - run()|org.apache.spark.SparkException: Found both spark.driver.extraClassPath and SPARK_CLASSPATH. Use only the former.
2017-03-16 11:16:11,739|INFO|MainThread|machine.py:142 - run()|at org.apache.spark.SparkConf$anonfun$validateSettings$7$anonfun$apply$8.apply(SparkConf.scala:492)
2017-03-16 11:16:11,739|INFO|MainThread|machine.py:142 - run()|at org.apache.spark.SparkConf$anonfun$validateSettings$7$anonfun$apply$8.apply(SparkConf.scala:490)
2017-03-16 11:16:11,739|INFO|MainThread|machine.py:142 - run()|at scala.collection.immutable.List.foreach(List.scala:318)
2017-03-16 11:16:11,740|INFO|MainThread|machine.py:142 - run()|at org.apache.spark.SparkConf$anonfun$validateSettings$7.apply(SparkConf.scala:490)
2017-03-16 11:16:11,740|INFO|MainThread|machine.py:142 - run()|at org.apache.spark.SparkConf$anonfun$validateSettings$7.apply(SparkConf.scala:478)
2017-03-16 11:16:11,740|INFO|MainThread|machine.py:142 - run()|at scala.Option.foreach(Option.scala:236)
2017-03-16 11:16:11,741|INFO|MainThread|machine.py:142 - run()|at org.apache.spark.SparkConf.validateSettings(SparkConf.scala:478)
2017-03-16 11:16:11,741|INFO|MainThread|machine.py:142 - run()|at org.apache.spark.SparkContext.<init>(SparkContext.scala:398)
2017-03-16 11:16:11,741|INFO|MainThread|machine.py:142 - run()|at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
2017-03-16 11:16:11,741|INFO|MainThread|machine.py:142 - run()|at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
2017-03-16 11:16:11,742|INFO|MainThread|machine.py:142 - run()|at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
```
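For context, the stack trace shows `SparkConf.validateSettings` rejecting the combination of the `SPARK_CLASSPATH` environment variable and `spark.driver.extraClassPath` (which `--driver-class-path` sets). A minimal sketch of a setup that triggers this, using the viprfs jar path from the log; the application class and jar names are hypothetical placeholders:

```bash
# In spark-env.sh (or exported in the shell) -- the deprecated mechanism:
export SPARK_CLASSPATH=/usr/hdp/2.5.0.0-1245/hadoop/lib/viprfs-client-3.1.0.0-hadoop-2.7.jar

# Passing --driver-class-path as well sets spark.driver.extraClassPath,
# and SparkContext then fails validation with the error above.
# com.example.MyApp and my-app.jar are placeholders.
spark-submit \
  --class com.example.MyApp \
  --driver-class-path /usr/hdp/2.5.0.0-1245/hadoop/lib/viprfs-client-3.1.0.0-hadoop-2.7.jar \
  my-app.jar
```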

3 Replies

Expert Contributor

What about putting the secondary fs client jar file into the `--jars` option?
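For example (a sketch: the viprfs jar path is taken from the log above; the application class, master, and jar are placeholders), `--jars` ships the listed jars to the cluster and adds them to both the driver and executor classpaths, so neither `SPARK_CLASSPATH` nor `--driver-class-path` is needed for that jar:

```bash
# Hypothetical submit command; --jars distributes the listed jars and
# puts them on both the driver and executor classpaths.
spark-submit \
  --class com.example.MyApp \
  --master yarn \
  --jars /usr/hdp/2.5.0.0-1245/hadoop/lib/viprfs-client-3.1.0.0-hadoop-2.7.jar \
  my-app.jar
```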

New Contributor

I already put the jar on the classpath, but if I don't export `SPARK_CLASSPATH` in spark-env, Spark goes down.
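As the deprecation notice in the log suggests, the usual replacement for `SPARK_CLASSPATH` is the `extraClassPath` settings. A sketch of moving the same entries into spark-defaults.conf, with the jar paths taken from the WARN line above; the conf path assumes a plain HDP client layout, and on an Ambari-managed cluster you would set the same keys through the Spark configs UI instead:

```bash
# Sketch: replace SPARK_CLASSPATH with the equivalent extraClassPath
# settings (jar paths copied from the error output above).
cat >> /usr/hdp/current/spark-client/conf/spark-defaults.conf <<'EOF'
spark.driver.extraClassPath /usr/hdp/2.5.0.0-1245/hadoop/lib/viprfs-client-3.1.0.0-hadoop-2.7.jar:/usr/hdp/2.5.0.0-1245/hadoop/lib/guava-11.0.2.jar
spark.executor.extraClassPath /usr/hdp/2.5.0.0-1245/hadoop/lib/viprfs-client-3.1.0.0-hadoop-2.7.jar:/usr/hdp/2.5.0.0-1245/hadoop/lib/guava-11.0.2.jar
EOF
```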

Expert Contributor

It seems that your environment sets `SPARK_CLASSPATH` somewhere. Could you run `echo $SPARK_CLASSPATH`?
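A quick way to track it down (a sketch; the spark-env.sh path assumes a standard HDP client layout):

```bash
# Show the value in the current shell:
echo "$SPARK_CLASSPATH"

# Find where it is exported -- spark-env.sh is the usual place on HDP:
grep -n SPARK_CLASSPATH /usr/hdp/current/spark-client/conf/spark-env.sh

# Unset it for this session to confirm the error disappears:
unset SPARK_CLASSPATH
```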
