I am tuning a Spark application and noticed discrepancies between the job's metrics shown in the Spark History Server UI and in YARN's ResourceManager UI.
I've specified the following properties in my Zeppelin notebook's Spark interpreter:
| Property | Value |
| --- | --- |
| master | yarn-client |
| spark.app.name | Zeppelin |
| spark.cores.max | |
| spark.driver.memory | 3g |
| spark.executor.cores | 3 |
| spark.executor.instances | 2 |
| spark.executor.memory | 4g |
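As a sanity check, I can read the resolved configuration back from inside the notebook, which at least confirms the interpreter passed these values to Spark. A minimal sketch, assuming the `sc` SparkContext variable that Zeppelin injects into `%spark` paragraphs:

```scala
// Run in a Zeppelin %spark paragraph; `sc` is the SparkContext Zeppelin provides.
// Reading the values back confirms the interpreter settings reached Spark itself.
println(sc.getConf.get("spark.executor.cores"))     // expect "3"
println(sc.getConf.get("spark.executor.instances")) // expect "2"
println(sc.getConf.get("spark.executor.memory"))    // expect "4g"
```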
When I look at the YARN ResourceManager UI, I see no evidence that the executors' containers are getting 3 cores each: each container shows only 1 vCore.
Yet when I check the Spark History Server, it shows each running executor as having 3 cores and reflects all of the properties I've specified.
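Spark's monitoring REST API also reports a `totalCores` field per executor, which matches what the UI renders. A quick way to dump it from the same notebook, as a sketch that assumes the live driver UI is reachable on its default port 4040 (for a finished app, point it at the History Server, default port 18080, instead):

```scala
import scala.io.Source

// Spark's monitoring REST API exposes per-executor details, including
// `totalCores`, at /api/v1/applications/<app-id>/executors.
// Host and port are assumptions: the live driver UI defaults to 4040.
val url = s"http://localhost:4040/api/v1/applications/${sc.applicationId}/executors"
println(Source.fromURL(url).mkString)
```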
What's up with this? Which of these should I be looking at?
Versions: YARN 3.1.0, Zeppelin 0.8.0, Spark2 2.3.1