Created 05-08-2019 10:02 PM
Is there any way to control memory utilization when it exceeds the maximum memory allocated in the YARN ResourceManager?
The configuration I have placed in YARN is:
yarn.scheduler.minimum-allocation-mb = 1024
yarn.scheduler.maximum-allocation-mb = 4096
yarn.scheduler.minimum-allocation-vcores = 3
yarn.scheduler.maximum-allocation-vcores = 3
The problem:
The cluster is ignoring the static limits placed in the YARN configuration and allocating more memory than expected. When I look at the YARN UI, I see these figures:
Running containers: 72
Allocated CPU vcores: 72
Allocated memory MB: 120034
Please help me understand how to cap usage at the maximum level set in the YARN configuration.
Thanks a lot
Created 05-10-2019 07:32 PM
Hi @Vasanth Reddy,
If I understood you correctly, I think you should be checking this:
These are per-container properties:
yarn.scheduler.minimum-allocation-mb = 1024
yarn.scheduler.maximum-allocation-mb = 4096
yarn.scheduler.minimum-allocation-vcores = 3
yarn.scheduler.maximum-allocation-vcores = 3
While what you are looking at are cluster-wide metrics:
Running containers: 72
Allocated CPU vcores: 72
Allocated memory MB: 120034
With the above settings, at some point you could see the same 72 containers with Allocated memory MB at 294912 (72 containers × 4096 MB maximum per container). The per-container limits only bound each individual container, not the cluster total; a sketch of where these properties live follows below.
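For reference, a minimal yarn-site.xml sketch with the per-container limits above (the property names are the standard YARN ones; the values are just the settings from your post):

<!-- yarn-site.xml: per-container allocation limits (sketch; values from this thread) -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value> <!-- smallest container YARN will grant, in MB; requests are rounded up to a multiple of this -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value> <!-- largest single container, NOT a cluster-wide cap -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-vcores</name>
  <value>3</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>3</value> <!-- again per container, not per cluster -->
</property>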
Let me know if I misunderstood your question.
BR,
David Bompart
Created on 05-17-2019 11:33 AM - edited 08-17-2019 03:34 PM
Hi @dbompart,
Yes, the logic you mentioned is exactly right.
I have some more questions regarding containers for MapReduce and Spark.
With MapReduce I am running a Sqoop import.
With Spark I am running the PySpark shell on top of YARN.
Now the configuration:
MapReduce:
yarn.scheduler.maximum-allocation-mb = 36864 (× 2 = 73728)
But my concern now is how I can limit the running containers on a per-user basis (I cannot set up different queues in the Capacity Scheduler, as mentioned above). The kind of per-user knobs I have been looking at are sketched below.
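For context, these are the per-user Capacity Scheduler properties I found for limiting a single user inside one queue, here applied to the default queue (a sketch only; I have not confirmed these values are right for my cluster):

<!-- capacity-scheduler.xml: per-user limits within the default queue (sketch; values are illustrative) -->
<property>
  <name>yarn.scheduler.capacity.root.default.minimum-user-limit-percent</name>
  <value>25</value> <!-- when several users are active, each is limited toward 25% of the queue -->
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
  <value>1</value> <!-- a single user can consume at most 1x the queue's configured capacity -->
</property>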
-> Whenever I run the Spark application (which also runs on top of YARN), I see:
Running containers: 3
Allocated CPU vcores: 3
Total memory allocated: 5120 MB
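My own guess at where these numbers come from (assuming the PySpark shell defaults of 2 executors at 1024 MB each plus roughly 384 MB of overhead per executor, which I have not verified on my cluster): 1024 + 384 = 1408 MB per executor, which YARN rounds up to the next multiple of yarn.scheduler.minimum-allocation-mb (1024), i.e. 2048 MB each; the ApplicationMaster takes one more 1024 MB container. That would give 3 containers in total and 2 × 2048 + 1024 = 5120 MB.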
Could you help me understand the logic behind these numbers?
Thanks a lot