<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: spark thrift server  not started in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/spark-thrift-server-not-started/m-p/210269#M172211</link>
    <description>&lt;P&gt;Hi, &lt;A rel="user" href="https://community.cloudera.com/users/26229/uribarih.html" nodeid="26229"&gt;@Michael Bronson&lt;/A&gt;&lt;/P&gt;&lt;P&gt;`spark.executor.memory` seems to be set to 10240 (MB).&lt;/P&gt;&lt;P&gt;Please change it in Ambari under `spark-thrift-conf`.&lt;/P&gt;</description>
    <pubDate>Tue, 06 Feb 2018 03:08:41 GMT</pubDate>
    <dc:creator>dhyun</dc:creator>
    <dc:date>2018-02-06T03:08:41Z</dc:date>
    <item>
      <title>spark thrift server  not started</title>
      <link>https://community.cloudera.com/t5/Support-Questions/spark-thrift-server-not-started/m-p/210268#M172210</link>
      <description>&lt;P&gt;We have an Ambari cluster with 3 worker machines (each worker has 8G of memory).&lt;/P&gt;&lt;P&gt;When we start the Spark Thrift Server on the master01/03 machines, we get the following errors:&lt;/P&gt;&lt;PRE&gt;Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.
18/02/05 18:12:52 WARN Utils: spark.executor.instances less than spark.dynamicAllocation.minExecutors is invalid, ignoring its setting, please update your configs.
18/02/05 18:12:53 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: Required executor memory (10240+1024 MB) is above the max threshold (6144 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.&lt;/PRE&gt;&lt;P&gt;Please advise - what do these errors mean?&lt;/P&gt;&lt;P&gt;Regarding "Required executor memory (10240+1024 MB)": what are these memory values, and which parameters should we set in Spark (or elsewhere) to resolve this issue?&lt;/P&gt;</description>
      <pubDate>Tue, 06 Feb 2018 02:55:29 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/spark-thrift-server-not-started/m-p/210268#M172210</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2018-02-06T02:55:29Z</dc:date>
    </item>
    <item>
      <title>Re: spark thrift server  not started</title>
      <link>https://community.cloudera.com/t5/Support-Questions/spark-thrift-server-not-started/m-p/210269#M172211</link>
      <description>&lt;P&gt;Hi, &lt;A rel="user" href="https://community.cloudera.com/users/26229/uribarih.html" nodeid="26229"&gt;@Michael Bronson&lt;/A&gt;&lt;/P&gt;&lt;P&gt;`spark.executor.memory` seems to be set to 10240 (MB).&lt;/P&gt;&lt;P&gt;Please change it in Ambari under `spark-thrift-conf`.&lt;/P&gt;</description>
      <pubDate>Tue, 06 Feb 2018 03:08:41 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/spark-thrift-server-not-started/m-p/210269#M172211</guid>
      <dc:creator>dhyun</dc:creator>
      <dc:date>2018-02-06T03:08:41Z</dc:date>
    </item>
    <item>
      <title>Re: spark thrift server  not started</title>
      <link>https://community.cloudera.com/t5/Support-Questions/spark-thrift-server-not-started/m-p/210270#M172212</link>
      <description>&lt;P&gt;Thank you. Could you please explain this variable and who is responsible for setting it? (Is this value set by the cluster itself?) At first I thought the workers did not have enough memory, so I increased the worker memory from 8G to 32G.&lt;/P&gt;</description>
      <pubDate>Tue, 06 Feb 2018 03:15:37 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/spark-thrift-server-not-started/m-p/210270#M172212</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2018-02-06T03:15:37Z</dc:date>
    </item>
    <item>
      <title>Re: spark thrift server  not started</title>
      <link>https://community.cloudera.com/t5/Support-Questions/spark-thrift-server-not-started/m-p/210271#M172213</link>
      <description>&lt;P&gt;It's the memory size for a Spark executor (worker), and each executor also carries additional overhead. You need to set a proper value yourself. In a YARN environment, the executor memory (plus overhead) must be smaller than the YARN container limit, which is why Spark shows you this error message.&lt;/P&gt;&lt;P&gt;It's an application property. For normal Spark jobs, users are responsible, because each application can set its own `spark.executor.memory` with `spark-submit`. For Spark Thrift Server, admins should manage it properly when they adjust the YARN configuration.&lt;/P&gt;&lt;P&gt;For more information, please see &lt;A href="http://spark.apache.org/docs/latest/configuration.html#application-properties" target="_blank"&gt;http://spark.apache.org/docs/latest/configuration.html#application-properties&lt;/A&gt;.&lt;/P&gt;</description>
      <pubDate>Wed, 07 Feb 2018 01:22:33 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/spark-thrift-server-not-started/m-p/210271#M172213</guid>
      <dc:creator>dhyun</dc:creator>
      <dc:date>2018-02-07T01:22:33Z</dc:date>
    </item>
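The "Required executor memory (10240+1024 MB)" figure in the error above is the executor heap plus Spark's on-YARN memory overhead, which by default is the larger of 384 MB or 10% of the executor memory. A minimal sketch of that arithmetic, assuming those default overhead rules (the function name is illustrative, not a Spark API):

```python
def required_executor_memory_mb(executor_memory_mb: int,
                                overhead_fraction: float = 0.10,
                                min_overhead_mb: int = 384) -> int:
    """Total YARN container size Spark requests for one executor:
    the executor heap plus the off-heap overhead (the larger of
    384 MB or 10% of the heap, the Spark-on-YARN defaults)."""
    overhead = max(min_overhead_mb, int(executor_memory_mb * overhead_fraction))
    return executor_memory_mb + overhead

# The failing request from the log: 10240 MB heap + 1024 MB overhead.
print(required_executor_memory_mb(10240))  # 11264 -> above the 6144 MB threshold

# A request that would fit under yarn.scheduler.maximum-allocation-mb = 6144.
print(required_executor_memory_mb(4096))   # 4505
```

So with `spark.executor.memory` at 10240 MB, YARN is asked for an 11264 MB container, well above the cluster's 6144 MB maximum; lowering `spark.executor.memory` (or raising the YARN limits) resolves the mismatch.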
  </channel>
</rss>

