Support Questions

Find answers, ask questions, and share your expertise

YARN Memory

When a certain amount of memory is given to the ResourceManager (the memory allocated for all YARN containers on a node), is it reserved immediately, or used progressively on an as-needed basis until that capacity is reached?

1 ACCEPTED SOLUTION

Master Mentor

@bsaini@hortonworks.com

Continuing the explanation above of container expiry — there is a very good explanation in this blog:

"With YARN and MapReduce 2, there are no longer pre-configured static slots for Map and Reduce tasks. The entire cluster is available for dynamic resource allocation of Maps and Reduces as needed by the job. In our example cluster, with the above configurations, YARN will be able to allocate on each node up to 10 mappers (40/4) or 5 reducers (40/8), or a permutation within that."
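The arithmetic in the quoted example can be checked with a short sketch. Note the 40 GB per-node capacity and the 4 GB / 8 GB container sizes are the blog's example values, not YARN defaults:

```python
def max_containers(node_mem_gb, container_mem_gb):
    """How many containers of a given size fit on one node."""
    return node_mem_gb // container_mem_gb

NODE_MEM_GB = 40   # per-node YARN container memory, from the blog's example
MAP_MEM_GB = 4     # memory per map container, from the blog's example
REDUCE_MEM_GB = 8  # memory per reduce container, from the blog's example

print(max_containers(NODE_MEM_GB, MAP_MEM_GB))     # up to 10 mappers
print(max_containers(NODE_MEM_GB, REDUCE_MEM_GB))  # up to 5 reducers
```

Any mix in between also works, as long as the total requested memory on the node stays within the 40 GB budget.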


3 REPLIES

Master Mentor

@bsaini@hortonworks.com

This may help: link

  • ContainerAllocationExpirer: This component is in charge of ensuring that all allocated containers are used by AMs and subsequently launched on the corresponding NMs. AMs run as untrusted user code and can potentially hold on to allocations without using them, and as such can cause cluster under-utilization. To address this, the ContainerAllocationExpirer maintains the list of allocated containers that are still not used on the corresponding NMs. For any container, if the corresponding NM doesn't report to the RM that the container has started running within a configured interval of time (by default 10 minutes), the container is deemed dead and is expired by the RM.
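The expiry logic described above can be sketched as a toy model. This is illustrative only — the class name mirrors the real RM component, but the implementation is a simplification, and the 10-minute default corresponds to the `yarn.resourcemanager.rm.container-allocation.expiry-interval-ms` setting:

```python
import time

EXPIRY_INTERVAL_S = 600  # default 10 minutes


class ContainerAllocationExpirer:
    """Toy model: track allocated-but-unlaunched containers, expire stale ones."""

    def __init__(self, interval_s=EXPIRY_INTERVAL_S):
        self.interval_s = interval_s
        self.pending = {}  # container_id -> allocation timestamp

    def register(self, container_id, now=None):
        # RM allocated a container to an AM; start the expiry clock.
        self.pending[container_id] = time.time() if now is None else now

    def container_started(self, container_id):
        # NM reported the container launched; stop tracking it.
        self.pending.pop(container_id, None)

    def expire_stale(self, now=None):
        # Any container not launched within the interval is reclaimed.
        now = time.time() if now is None else now
        stale = [c for c, t in self.pending.items() if now - t > self.interval_s]
        for c in stale:
            del self.pending[c]
        return stale
```

For example, a container registered but never launched within the interval is returned by `expire_stale()`, while one the NM reported as started is not.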


Master Mentor

@bsaini are you still having issues with this? Can you accept the best answer or provide your own solution?