Support Questions
Find answers, ask questions, and share your expertise

What is the best amount of memory for YARN AppMaster?

Expert Contributor

I've seen 12GB of RAM, and sometimes 9GB, allocated to the YARN AppMaster. The AppMaster just manages the lifecycle of the containers, so I don't think it needs more than 1GB of RAM.

2 REPLIES

I'd say it depends on the kind of applications you are running over YARN. If I am not mistaken, when you run Spark over YARN in cluster mode, the Spark driver runs inside the YARN application master container. With some machine learning algorithms, for example, a lot of data must be sent back to the driver. That could explain, in some cases, the need to allocate a large amount of memory to the application master.

Hi @rgarcia, I agree that for a typical MR or Tez job, 10G sounds like too much for the AM. And it's also true that you can set it per job, on the command line. However, the request is still rounded up to the minimum container size (yarn.scheduler.minimum-allocation-mb) you set for YARN. If you have nodes with, say, 512G of RAM and 36 cores, and you want to run at most 40 containers per node, your memory per container comes out to about 10G (leaving some headroom for the OS and daemons). So even if you request only 1G for your AM, YARN will allocate 10G. By the way, the properties specifying the defaults are:

(1) yarn.app.mapreduce.am.resource.mb, for MR
(2) tez.am.resource.memory.mb, for Tez
(3) spark.yarn.am.memory, for Spark in client mode
(4) spark.yarn.am.memoryOverhead; the AM container for client mode will be (3)+(4)

For (1) and (2), the defaults are usually set to k*yarn.scheduler.minimum-allocation-mb for some integer k = 1, 2, ..., but usually k = 1.
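To make the rounding behavior concrete, here is a minimal sketch of how a container request (including the AM's) gets rounded up to a multiple of the minimum allocation. The function name and the 10G minimum are hypothetical, chosen to match the example above; the actual scheduler logic lives inside YARN.

```python
# Hypothetical illustration: YARN rounds every container request,
# including the AM's, up to a multiple of the minimum allocation.
MIN_ALLOC_MB = 10240  # example yarn.scheduler.minimum-allocation-mb (~10G)

def container_size_mb(requested_mb, min_alloc_mb=MIN_ALLOC_MB):
    """Round a memory request up to the next multiple of min_alloc_mb."""
    increments = -(-requested_mb // min_alloc_mb)  # ceiling division
    return increments * min_alloc_mb

print(container_size_mb(1024))  # a 1G AM request still yields a 10G container
```

This is why lowering the AM request below the cluster's minimum allocation has no effect on the actual container size.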

For (3), the Spark default is 512M, and for (4) it is max(384M, 10% of (3)).
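Putting (3) and (4) together, here is a small sketch of the client-mode AM container size under the max(384M, 10%) overhead rule described above. The function name is made up for illustration, and YARN would still round the result up to the minimum allocation as discussed earlier.

```python
# Sketch of client-mode Spark AM container sizing, assuming the
# overhead rule max(384M, 10% of spark.yarn.am.memory).
def spark_am_container_mb(am_memory_mb=512):
    """Return AM memory plus its overhead, in MB."""
    overhead_mb = max(384, int(0.10 * am_memory_mb))
    return am_memory_mb + overhead_mb

print(spark_am_container_mb())      # defaults: 512 + 384 = 896
print(spark_am_container_mb(4096))  # 4096 + 409 = 4505
```

Note that with the 512M default, the 384M floor dominates the 10% term, so the overhead is a substantial fraction of the total container.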