Created 05-25-2016 09:54 AM
I'm executing a query on Spark and it works; I'm getting the result. I did not configure any cluster, so Spark should be using its own cluster manager.
But on the Spark master page (master:8080) I see this:
Alive Workers: 2
Cores in use: 4 Total, 0 Used
Memory in use: 6.0 GB Total, 0.0 B Used
Applications: 0 Running, 0 Completed
Drivers: 0 Running, 0 Completed
Status: ALIVE
But while the query is executing, I get the same result as I refresh the page:
Alive Workers: 2
Cores in use: 4 Total, 0 Used
Memory in use: 6.0 GB Total, 0.0 B Used
Applications: 0 Running, 0 Completed
Drivers: 0 Running, 0 Completed
Status: ALIVE
And after the query finishes, it is the same again... Do you know why? It's very strange: it looks as if Spark is executing the query without using any hardware, which is not possible. Why is this info not updating?
Created 05-25-2016 10:06 AM
How are you submitting the job? If you do not specify --master "spark://masterip:7077" when running spark-shell, it will run in local mode.
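To illustrate, the two launch modes differ only in the --master flag; `masterip` here is a placeholder for the actual master host:

```shell
# Local mode: driver and executors run in a single JVM on this machine.
# The standalone master at master:8080 never sees the application,
# which is why its counters stay at 0.
spark-shell

# Standalone cluster mode: the shell registers with the master, and the
# application appears under "Running Applications" on master:8080.
spark-shell --master spark://masterip:7077
```
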
Created 05-26-2016 01:11 AM
Hi, thanks for your answer, but I'm not understanding. I think the answer that I accepted fixed the issue, because after starting the shell with spark-shell --master spark://masterhost:7077, on port 8080 I get:
So it seems that it is already working when starting spark-shell that way, right? But are you suggesting that it should be spark-shell --master "local" spark:///mastehost:7077?
Created 05-25-2016 01:46 PM
Was there anything on the Spark history server or in the logs?
Created 05-25-2016 10:24 PM
Spark supports the following cluster modes: standalone, YARN, and Mesos.
We don't support Spark on Mesos.
For Spark on YARN, specify the mode by adding --master yarn-client or --master yarn-cluster to your spark-submit command on a per-job basis, or configure it in spark-defaults.conf for all jobs submitted from that node.
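As a sketch of both options (the application class and jar names below are hypothetical placeholders):

```shell
# Per-job: choose the YARN deploy mode on the spark-submit command line.
spark-submit --master yarn-client  --class com.example.App app.jar   # driver runs on the local node
spark-submit --master yarn-cluster --class com.example.App app.jar   # driver runs inside the YARN cluster

# Per-node default: set it once in $SPARK_HOME/conf/spark-defaults.conf
# so every job submitted from this node uses YARN:
#   spark.master   yarn-client
```
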
--master "spark://masterip:7077" indicates Spark standalone mode.
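For completeness, a minimal standalone setup matching that URL might look like the following; `masterhost` is a placeholder, and start-slave.sh is the worker launch script in Spark 1.x:

```shell
# On the master node: start the standalone master
# (web UI on port 8080, RPC endpoint on port 7077).
$SPARK_HOME/sbin/start-master.sh

# On each worker node: register a worker with that master.
$SPARK_HOME/sbin/start-slave.sh spark://masterhost:7077

# Submit against the standalone master so the application shows up
# in the master UI instead of running in local mode.
spark-shell --master spark://masterhost:7077
```
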