<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: &quot;bad substitution&quot; error running Spark on Yarn in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/quot-bad-substitution-quot-error-running-Spark-on-Yarn/m-p/135400#M98059</link>
    <description>&lt;P&gt;Background:&lt;/P&gt;&lt;P&gt;Starting with HDP 2.2, which is based on Hadoop 2.6, Hortonworks added support for rolling upgrades (detailed description available here: &lt;A href="http://hortonworks.com/blog/introducing-rolling-upgrades-downgrades-apache-hadoop-yarn-cluster/"&gt;http://hortonworks.com/blog/introducing-rolling-upgrades-downgrades-apache-hadoop-yarn-cluster/&lt;/A&gt;). A fundamental assumption of rolling upgrades is that jobs must not rely implicitly on the currently installed version of artefacts such as jar files and native libraries, since these can change while a job is running in the middle of a rolling upgrade. Instead, the system is configured to require a concrete value for hdp.version at job-submission time; when none is supplied, the unexpanded ${hdp.version} placeholder ends up in the container launch command and the shell reports the "bad substitution" error.&lt;/P&gt;&lt;P&gt;Solution:&lt;/P&gt;&lt;P&gt;1. One option is to edit mapred-site.xml and replace the hdp.version property with the correct value for your cluster. CAUTION: if you modify mapred-site.xml on a cluster node, you will break rolling upgrades in scenarios where a program such as Oozie submits jobs from that node, because those jobs will use the hardcoded version instead of the version specified by the client.&lt;/P&gt;&lt;P&gt;2. The better option is to:&lt;/P&gt;&lt;P&gt;a) create a file called java-opts in the Spark client conf directory containing -Dhdp.version=2.3.4.0-3485. You can also specify the same value via SPARK_JAVA_OPTS, i.e. export SPARK_JAVA_OPTS="-Dhdp.version=2.3.4.0-3485"&lt;/P&gt;&lt;P&gt;b) modify /usr/hdp/current/spark-client/conf/spark-defaults.conf and add the lines below:&lt;/P&gt;&lt;PRE&gt;spark.driver.extraJavaOptions    -Dhdp.version=2.3.4.0-3485
spark.yarn.am.extraJavaOptions   -Dhdp.version=2.3.4.0-3485&lt;/PRE&gt;</description>
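The two files from option 2 can be sketched as a shell snippet. This is an illustration under assumptions, not part of the original answer: it writes into a scratch directory created with mktemp rather than the real /usr/hdp/current/spark-client/conf, and it reuses the 2.3.4.0-3485 build number from the answer above, which you would replace with your own cluster's version.

```shell
# Illustration only: use a scratch copy of the Spark client conf directory.
# On a real HDP node the target would be /usr/hdp/current/spark-client/conf,
# and the version string must match your cluster, not necessarily 2.3.4.0-3485.
CONF_DIR=$(mktemp -d)

# (a) java-opts: extra JVM options picked up when the Spark client launches
echo "-Dhdp.version=2.3.4.0-3485" > "$CONF_DIR/java-opts"

# (b) spark-defaults.conf: pass the same property to both the driver JVM
#     and the YARN Application Master
cat >> "$CONF_DIR/spark-defaults.conf" <<'EOF'
spark.driver.extraJavaOptions    -Dhdp.version=2.3.4.0-3485
spark.yarn.am.extraJavaOptions   -Dhdp.version=2.3.4.0-3485
EOF

# Show the resulting config
cat "$CONF_DIR/spark-defaults.conf"
```

Both settings are needed because in yarn-cluster mode the driver runs inside the Application Master, while in yarn-client mode they are separate JVMs.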
    <pubDate>Sat, 19 Mar 2016 06:45:15 GMT</pubDate>
    <dc:creator>abajwa</dc:creator>
    <dc:date>2016-03-19T06:45:15Z</dc:date>
  </channel>
</rss>

