
Application master gets stuck when it gets more than half the available memory

New Contributor

Hi,

 

I run the simple Sleep MapReduce job provided in HUE as a job designer example, and I execute it from HUE.

 

I have a single-node pseudo-distributed cluster in Docker. Docker is running on a machine with 12288 MB of RAM.

 

I have:

yarn.nodemanager.resource.memory-mb=12288

yarn.scheduler.minimum-allocation-mb=256

yarn.scheduler.increment-allocation-mb=256

yarn.scheduler.maximum-allocation-mb=12288

 

My job has only an application master and one mapper. I tried different resource allocations for those two containers, but in one case the job gets stuck and I have no explanation why. Here are the cases:

 

mapreduce.map.memory.mb=6144 and yarn.app.mapreduce.am.resource.mb=6144 runs OK

mapreduce.map.memory.mb=10240 and yarn.app.mapreduce.am.resource.mb=512 runs OK

mapreduce.map.memory.mb=512 and yarn.app.mapreduce.am.resource.mb=6145 gets STUCK IN ACCEPTED

 

I've tried different cluster setups and discovered that whenever yarn.app.mapreduce.am.resource.mb is set to more than half of yarn.nodemanager.resource.memory-mb, the application gets stuck.
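
For completeness, the stuck case can also be reproduced outside HUE with the standard Sleep job from the jobclient tests jar (the jar path below is from a CDH parcel install and may differ on your system):

hadoop jar /opt/cloudera/parcels/CDH/jars/hadoop-mapreduce-client-jobclient-*-tests.jar sleep \
    -Dmapreduce.map.memory.mb=512 \
    -Dyarn.app.mapreduce.am.resource.mb=6145 \
    -m 1 -r 0 -mt 30000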

 

I'm running Cloudera 5.4.5.

 

Any idea why I see this behaviour? 

1 ACCEPTED SOLUTION

Super Collaborator

What have you set for the maxAMShare on the queue or in the scheduler default?

There is a setting called queueMaxAMShareDefault; it defaults to 50% (0.5f), which means that a queue cannot assign more than 50% of its resources to AM container(s).

 

Wilfred
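
For reference, that cap matches the numbers above: with 12288 MB on the node and everything in one queue, a 0.5 AM share limits AM containers to 6144 MB, which is why the 6144 MB AM ran while the 6145 MB one sat in ACCEPTED forever. If the cap needs raising, a minimal sketch of the Fair Scheduler allocation file (fair-scheduler.xml); the 0.8 value and the queue name are illustrative:

<?xml version="1.0"?>
<allocations>
  <!-- Raise the default AM share for every queue from 0.5 to 0.8. -->
  <queueMaxAMShareDefault>0.8</queueMaxAMShareDefault>

  <!-- Or override per queue; -1.0 disables the AM share check entirely. -->
  <queue name="root.default">
    <maxAMShare>-1.0</maxAMShare>
  </queue>
</allocations>

On CDH the allocation file is usually generated by Cloudera Manager (Dynamic Resource Pools), so the change is best made there rather than by editing the file by hand.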



New Contributor

Thank you, that was it. I missed this property.