<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Configuring/Adding workers in Spark in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Configuring-Adding-workers-in-Spark/m-p/26196#M5510</link>
    <description>&lt;P&gt;Are you trying to manually set up standalone Master / Workers? You should use CM to do this.&lt;/P&gt;</description>
    <pubDate>Fri, 03 Apr 2015 14:06:20 GMT</pubDate>
    <dc:creator>srowen</dc:creator>
    <dc:date>2015-04-03T14:06:20Z</dc:date>
    <item>
      <title>Configuring/Adding workers in Spark</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Configuring-Adding-workers-in-Spark/m-p/26195#M5509</link>
      <description>&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have the Cloudera CDH 5.3 QuickStart running on a VM. I am having problems running Spark. I have gone through the steps at&amp;nbsp;&lt;A href="http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_ig_spark_configure.html" target="_blank"&gt;http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_ig_spark_configure.html&lt;/A&gt; and ran the word count example, and it worked. But when I go to the master UI (quickstart.cloudera:18080) there are no workers: cores=0, memory=0. When I go to (quickstart.cloudera:18081) there is a worker. My question is: how do I add workers? And what should I enter in&amp;nbsp;export STANDALONE_SPARK_MASTER_HOST?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This is the spark-env.sh:&lt;/P&gt;&lt;P&gt;### Change the following to specify a real cluster's Master host&lt;BR /&gt;###&lt;BR /&gt;export STANDALONE_SPARK_MASTER_HOST=worker-20150402201049-10.0.2.15-7078&lt;/P&gt;&lt;P&gt;export SPARK_MASTER_IP=$STANDALONE_SPARK_MASTER_HOST&lt;/P&gt;&lt;P&gt;### Let's run everything with JVM runtime, instead of Scala&lt;BR /&gt;export SPARK_LAUNCH_WITH_SCALA=0&lt;BR /&gt;export SPARK_LIBRARY_PATH=${SPARK_HOME}/lib&lt;BR /&gt;export SCALA_LIBRARY_PATH=${SPARK_HOME}/lib&lt;BR /&gt;export SPARK_MASTER_WEBUI_PORT=18080&lt;BR /&gt;export SPARK_MASTER_PORT=7077&lt;BR /&gt;export SPARK_WORKER_PORT=7078&lt;BR /&gt;export SPARK_WORKER_WEBUI_PORT=18081&lt;BR /&gt;export SPARK_WORKER_DIR=/var/run/spark/work&lt;BR /&gt;export SPARK_LOG_DIR=/var/log/spark&lt;BR /&gt;export SPARK_PID_DIR='/var/run/spark/'&lt;/P&gt;&lt;P&gt;if [ -n "$HADOOP_HOME" ]; then&lt;BR /&gt;export LD_LIBRARY_PATH=:/lib/native&lt;BR /&gt;fi&lt;/P&gt;&lt;P&gt;export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-etc/hadoop/conf}&lt;/P&gt;&lt;P&gt;### Comment above 2 lines and uncomment the following if&lt;BR /&gt;### you want to run with the Scala version that is included with the package&lt;BR /&gt;#export SCALA_HOME=${SCALA_HOME:-/usr/lib/spark/scala}&lt;BR /&gt;#export PATH=$PATH:$SCALA_HOME/bin&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you,&lt;/P&gt;&lt;P&gt;Amr&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 09:25:58 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Configuring-Adding-workers-in-Spark/m-p/26195#M5509</guid>
      <dc:creator>amoriam</dc:creator>
      <dc:date>2022-09-16T09:25:58Z</dc:date>
    </item>
    <item>
      <title>Re: Configuring/Adding workers in Spark</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Configuring-Adding-workers-in-Spark/m-p/26196#M5510</link>
      <description>&lt;P&gt;Are you trying to manually set up standalone Master / Workers? You should use CM to do this.&lt;/P&gt;</description>
      <pubDate>Fri, 03 Apr 2015 14:06:20 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Configuring-Adding-workers-in-Spark/m-p/26196#M5510</guid>
      <dc:creator>srowen</dc:creator>
      <dc:date>2015-04-03T14:06:20Z</dc:date>
    </item>
    <item>
      <title>Re: Configuring/Adding workers in Spark</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Configuring-Adding-workers-in-Spark/m-p/26197#M5511</link>
      <description>&lt;P&gt;Yes, I am using the Cloudera QuickStart VM, which runs on CentOS, to run Spark in standalone mode.&lt;/P&gt;</description>
      <pubDate>Fri, 03 Apr 2015 14:10:56 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Configuring-Adding-workers-in-Spark/m-p/26197#M5511</guid>
      <dc:creator>amoriam</dc:creator>
      <dc:date>2015-04-03T14:10:56Z</dc:date>
    </item>
    <item>
      <title>Re: Configuring/Adding workers in Spark</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Configuring-Adding-workers-in-Spark/m-p/26201#M5512</link>
      <description>&lt;P&gt;Is there any way to do that on a single machine running the Cloudera QuickStart? Or maybe it's called adding executors on the same machine; I'm new to Spark, so I'm not sure what it's actually called.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Like this tutorial, which adds workers on the same machine, but on CentOS running the CDH 5.3 QuickStart:&amp;nbsp;&lt;A href="http://mbonaci.github.io/mbo-spark/" target="_blank"&gt;http://mbonaci.github.io/mbo-spark/&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you,&lt;/P&gt;&lt;P&gt;Amr&lt;/P&gt;</description>
      <pubDate>Fri, 03 Apr 2015 19:25:51 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Configuring-Adding-workers-in-Spark/m-p/26201#M5512</guid>
      <dc:creator>amoriam</dc:creator>
      <dc:date>2015-04-03T19:25:51Z</dc:date>
    </item>
    <item>
      <title>Re: Configuring/Adding workers in Spark</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Configuring-Adding-workers-in-Spark/m-p/26252#M5513</link>
      <description>&lt;P&gt;I got the answer:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;"Add export STANDALONE_SPARK_MASTER_HOST=10.0.2.15 to your spark-env.sh so both master and worker agree on the same host address."&lt;/P&gt;</description>
      <pubDate>Mon, 06 Apr 2015 13:57:41 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Configuring-Adding-workers-in-Spark/m-p/26252#M5513</guid>
      <dc:creator>amoriam</dc:creator>
      <dc:date>2015-04-06T13:57:41Z</dc:date>
    </item>
  </channel>
</rss>