04-03-2015 06:47 AM
I have the Cloudera CDH 5.3 QuickStart VM running. I am having problems running Spark. I have gone through the steps at http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_ig_spark_configure.... and run the word count example, and it worked. But when I go to the master UI (quickstart.cloudera:18080) it shows no workers: cores = 0, memory = 0. When I go to quickstart.cloudera:18081, there is a worker. My question is: how do I add workers? And what should I enter in export STANDALONE_SPARK_MASTER_HOST?
This is the spark-env.sh (some lines got lost when I pasted it; the export lines below are restored from the stock CDH template, so they may not match my file exactly):

### Change the following to specify a real cluster's Master host
export STANDALONE_SPARK_MASTER_HOST=`hostname`

### Let's run everything with JVM runtime, instead of Scala
export SPARK_LAUNCH_WITH_SCALA=0

if [ -n "$HADOOP_HOME" ]; then
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native
fi

### Comment above 2 lines and uncomment the following if
### you want to run with scala version, that is included with the package
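For what it's worth, my understanding from the Spark standalone docs is that workers register with the master over its spark:// URL (port 7077 by default), not over the web UI ports 18080/18081, so I tried something along these lines (the host and port here are my guesses at the defaults, not verified on my VM):

```shell
# 7077 is the standalone master's default registration port; the web UIs on
# 18080 (master) and 18081 (worker) are only for viewing status.
MASTER_URL="spark://quickstart.cloudera:7077"

# Start an extra worker process that registers with the master.
# (Commented out: SPARK_HOME and the port are my guesses at the CDH defaults.)
# $SPARK_HOME/bin/spark-class org.apache.spark.deploy.worker.Worker "$MASTER_URL"
```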
04-03-2015 12:25 PM
Is there any way to do that on a single machine running the Cloudera QuickStart VM? Or maybe it's called adding executors on the same machine; I'm new to Spark, so I'm not sure what it's actually called.
Something like this tutorial, which adds workers on the same machine, but on CentOS running the CDH 5.3 QuickStart VM: http://mbonaci.github.io/mbo-spark/
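For reference, the part of that tutorial I'm trying to reproduce sets worker options in conf/spark-env.sh, roughly like this (the numbers are just example values, not from my VM):

```shell
# conf/spark-env.sh fragment: run two worker processes on this one machine
export SPARK_WORKER_INSTANCES=2
# Resources each worker offers to the master (example values):
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=1g
```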