Created on 08-31-2015 08:39 AM - edited 09-16-2022 02:39 AM
Hi,
I'm running the simple Sleep MapReduce job provided in Hue as a Job Designer example, and I launch it from Hue.
I have a single-node pseudo-distributed cluster in Docker, running on a machine with 12288 MB of RAM.
I have:
yarn.nodemanager.resource.memory-mb=12288
yarn.scheduler.minimum-allocation-mb=256
yarn.scheduler.increment-allocation-mb=256
yarn.scheduler.maximum-allocation-mb=12288
My job has only an application master and one mapper. I tried different resource allocations for those two containers, but in one case the job gets stuck and I have no explanation why. Here are the cases:
mapreduce.map.memory.mb=6144 and yarn.app.mapreduce.am.resource.mb=6144 runs OK
mapreduce.map.memory.mb=10240 and yarn.app.mapreduce.am.resource.mb=512 runs OK
mapreduce.map.memory.mb=512 and yarn.app.mapreduce.am.resource.mb=6145 gets STUCK IN ACCEPTED
I've tried different cluster setups and discovered that whenever I set yarn.app.mapreduce.am.resource.mb to more than half of yarn.nodemanager.resource.memory-mb, the application gets stuck.
I'm running Cloudera 5.4.5.
Any idea why I see this behaviour?
Created 09-01-2015 05:18 AM
What have you set for maxAMShare on the queue, or for the scheduler-wide default?
There is a setting called queueMaxAMShareDefault; it defaults to 50% (0.5f), which means a queue cannot assign more than 50% of its resources to AM containers. With your single queue owning all 12288 MB, 50% is 6144 MB: an AM of 6144 MB fits exactly, while 6145 MB exceeds the share, so the application waits in ACCEPTED forever.
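These limits live in the Fair Scheduler allocation file (fair-scheduler.xml). As a minimal sketch, assuming a queue named root.default and that you want to raise the limit rather than disable it:

  <?xml version="1.0"?>
  <allocations>
    <!-- Cluster-wide default: AMs may use at most 50% of a queue's resources. -->
    <queueMaxAMShareDefault>0.5</queueMaxAMShareDefault>
    <queue name="default">
      <!-- Per-queue override; a value of -1.0 disables the AM share check. -->
      <maxAMShare>0.8</maxAMShare>
    </queue>
  </allocations>

The Fair Scheduler re-reads this file periodically, so no restart is needed; on a cluster managed by Cloudera Manager the same values are set through the Dynamic Resource Pools page.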
Wilfred
Created 09-01-2015 11:34 AM
Thank you, that was it. I missed this property.