Support Questions
Find answers, ask questions, and share your expertise

YARN seems not to allocate memory as configured, why?

Rising Star

HDP 2.5

My NodeManager is deployed on a host with 1TB of memory,

and

yarn.nodemanager.resource.memory-mb=973824

yarn.scheduler.minimum-allocation-mb=8192

yarn.scheduler.maximum-allocation-mb=973824
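As a quick sanity check of these settings (a sketch in plain arithmetic, not a YARN API), the number of minimum-size containers the scheduler can fit on this NodeManager follows directly from the values above:

```python
# Sketch: estimate how many minimum-size containers fit on this NodeManager.
# Values are taken from the properties above; both are in MB.
node_memory_mb = 973824      # yarn.nodemanager.resource.memory-mb
min_allocation_mb = 8192     # yarn.scheduler.minimum-allocation-mb

# YARN hands out containers in multiples of the minimum allocation,
# so this integer division bounds the container count per node.
max_containers = node_memory_mb // min_allocation_mb
print(max_containers)  # 118
```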

while YARN memory is at 100%:


From CentOS, free memory is still more than 300GB. Why?


4 REPLIES 4

New Contributor

Hello! The YARN memory allocated is based on what the ApplicationMaster has requested. The OS free memory reflects what's actually being used.

For example, MR can ask for a 4GB container for a mapper that actually uses only 2GB. In that case the allocated memory will be 4GB, while the OS will show only 2GB used.
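The gap between the two metrics can be sketched numerically (hypothetical numbers, matching the 4GB/2GB example above):

```python
# Sketch: YARN accounts for what containers *request*; the OS accounts for
# what processes actually *touch*. Hypothetical 4GB-request / 2GB-resident case.
requested_mb = 4096   # container size the ApplicationMaster asked for
resident_mb = 2048    # physical memory the mapper process really uses

yarn_allocated_mb = requested_mb            # what the YARN UI reports as used
os_used_mb = resident_mb                    # what `free` reports as used
reserved_but_idle_mb = yarn_allocated_mb - os_used_mb
print(reserved_but_idle_mb)  # 2048
```

Scale that 2GB-per-container gap across a ~951GB node and a 300GB difference between YARN's view and the OS's view is plausible.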

Rising Star

Thanks for @vvasudev's quick response. So how can I make physical memory usage more efficient? It seems not all physical memory is used by the YARN containers. Is my YARN memory configuration reasonable?

New Contributor

Your memory settings are reasonable. The only ways to improve memory usage are:

1. Run more containers.

2. Do more work in a single container.

It's probably easier to do (1).
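For option (1), one common lever (a sketch, assuming MapReduce jobs; the 4096 value is a hypothetical example, not a tuned recommendation) is lowering the per-container request, e.g. via mapreduce.map.memory.mb, so more containers fit in the same 973824MB:

```python
# Sketch: on a fixed-size node, halving the per-container request roughly
# doubles how many containers can run at once.
node_memory_mb = 973824  # yarn.nodemanager.resource.memory-mb from the question

containers_at_8192 = node_memory_mb // 8192  # current minimum allocation
containers_at_4096 = node_memory_mb // 4096  # hypothetical smaller request
print(containers_at_8192, containers_at_4096)  # 118 237
```

Whether smaller containers are safe depends on the jobs; containers that exceed their request get killed for overrunning physical memory limits.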

Rising Star

How can I run more containers? It's already 100% used.