Container metrics do not match between Spark UI and YARN ResourceManager UI

New Contributor

I am tuning a Spark application and noticed discrepancies between the job's metrics shown in the Spark History Server UI and in the YARN ResourceManager UI.

I've specified the following properties on my Zeppelin notebook's Spark interpreter:

master                      yarn-client
spark.app.name              Zeppelin
spark.cores.max
spark.driver.memory         3g
spark.executor.cores        3
spark.executor.instances    2
spark.executor.memory       4g
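
For reference, one quick way to double-check which of these the running driver actually picked up is to print the SparkConf from a %spark paragraph; the key filter below is just for readability:

    // Dump the executor/driver settings the SparkContext is actually using.
    sc.getConf.getAll
      .filter { case (k, _) => k.startsWith("spark.executor") || k.startsWith("spark.driver") }
      .sorted
      .foreach { case (k, v) => println(s"$k = $v") }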

When I look at the YARN ResourceManager UI, I don't see any evidence that the executor containers are getting 3 cores each; the UI shows each container using only 1 vcore.
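
To cross-check what YARN itself has accounted for (beyond the UI), something like the following sketch against the ResourceManager REST API should work; the host, port, and application id here are placeholders:

    // Fetch the application report from the ResourceManager REST API and print
    // the raw JSON; the allocatedVCores and runningContainers fields show what
    // YARN has actually recorded for the job.
    import scala.io.Source
    val rmUrl = "http://resourcemanager-host:8088"     // placeholder RM address
    val appId = "application_0000000000000_0000"       // placeholder application id
    println(Source.fromURL(s"$rmUrl/ws/v1/cluster/apps/$appId").mkString)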

Yet when I check the Spark History Server, it shows each running executor with 3 cores and reflects all of the properties I specified.

What's up with this? Which of these should I be looking at?

YARN 3.1.0, Zeppelin 0.8.0, Spark2 2.3.1

ACCEPTED SOLUTION


@Matt Krueger You should look at the Spark History Server / Spark UI to see the settings that are actually in effect. Setting spark.executor.cores to 3 means each executor runs 3 concurrent task threads; AFAIK that is not necessarily the same thing as YARN's vcore accounting. In particular, when the CapacityScheduler uses the default DefaultResourceCalculator, containers are allocated by memory only and the ResourceManager UI reports 1 vcore per container regardless of what was requested. If you want YARN to account for (and display) the requested cores, set yarn.scheduler.capacity.resource-calculator to org.apache.hadoop.yarn.util.resource.DominantResourceCalculator in capacity-scheduler.xml.
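
As a quick sanity check from the Spark side (a sketch, assuming spark.default.parallelism has not been set elsewhere), you can confirm from a Zeppelin paragraph that 3 cores per executor really do become task slots:

    // With spark.executor.instances=2 and spark.executor.cores=3 this should
    // report 6 task slots (2 executors x 3 threads each) once the executors
    // have registered, even if the ResourceManager UI shows 1 vcore per container.
    println(s"spark.executor.cores = ${sc.getConf.get("spark.executor.cores", "not set")}")
    println(s"total task slots     = ${sc.defaultParallelism}")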

HTH
