07-21-2016 11:50 AM
I am using CDH 5.7.1 with Spark 1.6.0.
I have a Spark Streaming application that reads from Kafka and does some processing.
The issue is that when starting the application in cluster mode, I want to pass a custom log4j.properties file to both the driver and the executors.
I run the below command:
--class xyx.search.spark.Boot \
--conf "spark.cores.max=6" \
--conf "spark.eventLog.enabled=true" \
--conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=file:/some/path/search-spark-service-log4j-Driver.properties" \
--conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:/some/path/search-spark-service-log4j-Executor.properties" \
--deploy-mode "cluster" \
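(A side note on the log4j part of the command: in cluster deploy mode the driver itself runs on a cluster node, so a file: URL must point to a path that exists on every node. A common alternative is to ship the properties files with --files and reference them by bare file name. The sketch below assumes hypothetical local paths and an app.jar name; it is not the exact command from this thread.)

```shell
spark-submit \
  --class xyx.search.spark.Boot \
  --deploy-mode cluster \
  --files /local/path/driver-log4j.properties,/local/path/executor-log4j.properties \
  --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=driver-log4j.properties" \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=executor-log4j.properties" \
  app.jar
```

With --files, Spark copies the listed files into each container's working directory, so the -Dlog4j.configuration value can be a plain file name rather than an absolute path.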
But it fails with the below exception:
SPARK_JAVA_OPTS was detected (set to '-XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh ').
This is deprecated in Spark 1.0+.
Please instead use:
- ./spark-submit with conf/spark-defaults.conf to set defaults for an application
- ./spark-submit with --driver-java-options to set -X options for a driver
- spark.executor.extraJavaOptions to set -X options for executors
- SPARK_DAEMON_JAVA_OPTS to set java options for standalone daemons (master or worker)
2016-07-21 12:59:41 ERROR SparkContext:95 - Error initializing SparkContext.
org.apache.spark.SparkException: Found both spark.executor.extraJavaOptions and SPARK_JAVA_OPTS. Use only the former.
Please note that the same command works with CDH 5.4 and Spark 1.3.0.
07-21-2016 12:01 PM
You've got SPARK_JAVA_OPTS set somewhere, as the error says, and you should remove it. It looks like you may have old or outdated spark-env.sh configuration somewhere in your cluster. I'd make sure every node is up to date.
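One quick way to hunt it down is to grep the usual config locations and clear the variable from the submitting shell. This is a sketch; the paths below are the standard CDH defaults and may differ in your layout:

```shell
# Search the standard CDH config locations for the deprecated variable
# (paths are the usual defaults; adjust for your layout).
for d in /etc/spark/conf /opt/cloudera/parcels/CDH/lib/spark/conf; do
  grep -Hn "SPARK_JAVA_OPTS" "$d"/spark-env.sh 2>/dev/null
done
# Clear it from the submitting shell too, in case it is exported there:
unset SPARK_JAVA_OPTS
echo "SPARK_JAVA_OPTS is now unset"
```

Run this on the node you submit from first, since spark-submit picks up the environment of that shell.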
07-21-2016 12:29 PM
I checked /opt/cloudera/parcels/CDH/lib/spark/conf/spark-env.sh on all nodes. None of them exports SPARK_JAVA_OPTS.
This is a brand-new cluster; initially I was working with 1.3.0, and the same command worked fine.
Is there any specific issue reported with CDH 5.7.1 + Spark 1.6.0 regarding extraJavaOptions?
Thanks for your reply.
07-21-2016 12:58 PM
I have 5.7.1 and do not observe this. Something is putting that config in your env. It looks like it might be something that CM sets for the Spark History Server, but you would not have that config in your environment by default. Something's cross-wired or sourcing the wrong thing, or old config is lying around.
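Since spark-env.sh looks clean, the variable may be exported from an ordinary shell-startup file instead. A sketch for scanning the usual suspects on one node; the file list is illustrative, extend it for your environment:

```shell
# Scan common shell-startup files on this node for a stray export
# (the file list is illustrative; add any site-specific profile scripts).
for f in /etc/profile /etc/bashrc "$HOME/.bashrc" "$HOME/.bash_profile"; do
  if [ -f "$f" ]; then
    grep -Hn "SPARK_JAVA_OPTS" "$f"
  fi
done
echo "scan complete"
```

Repeat on each node (for example via ssh in a loop), since a single node with stale config is enough to trip the check.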
06-15-2017 01:43 AM
@sam1988 Curious to know if you found any fix for this issue. I am landing on the same error, and the History Server won't even start; it shows as down. Any information is highly appreciated.