Ambari does not save proper configuration

Expert Contributor

Dear community,

I've updated Spark to 1.6 on HDP 2.2. However, when I try to apply a new configuration by changing spark-defaults.conf to:

spark.yarn.historyServer.address c6401.ambari.apache.org:18080
spark.history.ui.port 18080
spark.eventLog.dir hdfs://my-hdfs-server/spark-history
spark.eventLog.enabled true
spark.history.fs.logDirectory hdfs://my-hdfs-server/spark-history
spark.history.provider org.apache.spark.deploy.history.FsHistoryProvider

Ambari restarts Spark and gives it a config file without the specified changes and with wrong parameters.

When I execute /var/lib/ambari-server/resources/scripts/configs.sh get test.server.my testcluster spark-defaults

########## Performing 'GET' on (Site:spark-defaults, Tag:version1470753804174)
"properties" : {
"spark.driver.extraJavaOptions" : "",
"spark.eventLog.dir" : "hdfs://my-hdfs-server/spark-history",
"spark.eventLog.enabled" : "true",
"spark.history.fs.logDirectory" : "hdfs://my-hdfs-server/spark-history",
"spark.history.kerberos.keytab" : "none",
"spark.history.kerberos.principal" : "none",
"spark.history.provider" : "org.apache.spark.deploy.yarn.history.YarnHistoryProvider",
"spark.history.ui.port" : "18080",
"spark.yarn.am.extraJavaOptions" : "",
"spark.yarn.applicationMaster.waitTries" : "10",
"spark.yarn.containerLauncherMaxThreads" : "25",
"spark.yarn.driver.memoryOverhead" : "384",
"spark.yarn.executor.memoryOverhead" : "384",
"spark.yarn.max.executor.failures" : "3",
"spark.yarn.preserve.staging.files" : "false",
"spark.yarn.queue" : "default",
"spark.yarn.scheduler.heartbeat.interval-ms" : "5000",
"spark.yarn.submit.file.replication" : "3"
}

it returns the right config.
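For reference, the same configs.sh script can also write a property back. A minimal sketch, assuming the same Ambari host and cluster name as in the GET call above:

```shell
# Sketch: set one property through Ambari's configs.sh.
# Argument order: <action> <ambari-host> <cluster> <config-type> [key] [value]
/var/lib/ambari-server/resources/scripts/configs.sh set test.server.my testcluster \
  spark-defaults "spark.history.provider" \
  "org.apache.spark.deploy.history.FsHistoryProvider"
# Ambari records this as a new config version; Spark must then be restarted
# from Ambari for the change to be pushed out to /etc/spark/conf.
```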

However, when I look into spark-defaults.conf in the Spark conf folder, the file seems to have been modified, but the values are wrong:

cat /etc/spark/conf/spark-defaults.conf
spark.yarn.max_executor.failures 	 3
spark.yarn.applicationMaster.waitTries 	 10
spark.history.fs.logDirectory 	 hdfs://my-hdfs-server/spark-history
spark.yarn.submit.file.replication 	 3
spark.history.kerberos.principal 	 none
spark.yarn.historyServer.address 	 test.server.my:18080
spark.yarn.queue 	 default
spark.yarn.scheduler.heartbeat.interval-ms 	 5000
spark.history.kerberos.keytab 	 none
spark.yarn.services 	 org.apache.spark.deploy.yarn.history.YarnHistoryService
spark.driver.extraJavaOptions 	  -Dhdp.version=2.2.4.2-2
spark.history.provider 	 org.apache.spark.deploy.yarn.history.YarnHistoryProvider
spark.eventLog.dir 	 hdfs://my-hdfs-server/spark-history
spark.history.ui.port 	 18080
spark.yarn.preserve.staging.files 	 False
spark.yarn.driver.memoryOverhead 	 384
spark.yarn.containerLauncherMaxThreads 	 25
spark.eventLog.enabled 	 true
spark.yarn.max.executor.failures 	 3
spark.yarn.am.extraJavaOptions 	  -Dhdp.version=2.2.4.2-2
spark.yarn.executor.memoryOverhead 	 384
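A quick way to see exactly which keys diverge is to normalize both outputs to sorted "key value" lines and compare them. A minimal sketch using abbreviated samples of the two outputs above (the /tmp file names are placeholders):

```shell
# Desired state, abbreviated from the configs.sh GET output (sorted by key).
cat > /tmp/desired.txt <<'EOF'
spark.eventLog.enabled true
spark.history.provider org.apache.spark.deploy.history.FsHistoryProvider
spark.history.ui.port 18080
EOF

# Actual state, abbreviated from /etc/spark/conf/spark-defaults.conf,
# with the tab separators collapsed to a single space (sorted by key).
cat > /tmp/actual.txt <<'EOF'
spark.eventLog.enabled true
spark.history.provider org.apache.spark.deploy.yarn.history.YarnHistoryProvider
spark.history.ui.port 18080
EOF

# comm -23 prints lines present only in the first (sorted) file, i.e. every
# desired "key value" pair that did not land on disk exactly as stored.
comm -23 /tmp/desired.txt /tmp/actual.txt
```

Here the only line printed is the spark.history.provider pair, matching the mismatch visible above.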
1 Reply

Re: Ambari does not save proper configuration

Mentor

Spark 1.6 is not supported on HDP 2.2. The certified combination for Spark 1.6 is HDP 2.4 with Ambari 2.2.x.
