New Contributor
Posts: 1
Registered: ‎06-16-2016

Hive query failing because container is using memory beyond limits


Hi,

 

I am running a Hive query and it is failing with the error below. The YARN configuration details are also included. The error says the physical memory limit is 1 GB, but we have set mapreduce.map.memory.mb = 4 GB. I am not sure where it picked up the 1 GB value from. Can anyone please help?

 

Diagnostics: Container [pid=62593,containerID=container_1465308864800_1323_02_000001] is running beyond physical memory limits.
Current usage: 1.0 GB of 1 GB physical memory used; 1.9 GB of 2.1 GB virtual memory used. Killing container.

 

 

Container Memory Maximum
yarn.scheduler.minimum-allocation-mb = 1 GB
yarn.scheduler.maximum-allocation-mb = 8 GB

Map Task Memory
mapreduce.map.memory.mb = 4 GB

Reduce Task Memory
mapreduce.reduce.memory.mb = 8 GB

Map Task Maximum Heap Size
mapreduce.map.java.opts.max.heap = 3 GB

Reduce Task Maximum Heap Size
mapreduce.reduce.java.opts.max.heap = 6 GB

ApplicationMaster Memory
yarn.app.mapreduce.am.resource.mb = 1 GB
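
One way to confirm which values the job actually picks up is to print the effective settings from the Hive session before running the query. This is a sketch using standard Hive SET commands, shown as an illustration rather than output from this cluster:

```sql
-- In the Hive CLI / Beeline session, print the effective values
-- that will be passed to the MapReduce job:
SET mapreduce.map.memory.mb;
SET mapreduce.reduce.memory.mb;
SET yarn.app.mapreduce.am.resource.mb;
```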

 


Expert Contributor
Posts: 82
Registered: ‎02-24-2016

Re: Hive query failing because container is using memory beyond limits

I have faced this type of problem several times. I tried the same settings as you, but the problem was not resolved. Then I changed the properties below:

mapreduce.map.memory.mb = 0
mapreduce.reduce.memory.mb = 0

Now it is working fine for me. Please try the above and post the result.

New Contributor
Posts: 1
Registered: ‎12-06-2017

Re: Hive query failing because container is using memory beyond limits

Hi Chaitanya, is there any reason why we need to set these to "0"? If yes, please provide some justification.

 

mapreduce.map.memory.mb = 0
mapreduce.reduce.memory.mb = 0

Cloudera Employee
Posts: 251
Registered: ‎01-16-2014

Re: Hive query failing because container is using memory beyond limits

Setting the memory to 0 means that you are no longer scheduling on memory, and it also turns off the container size checks. This is not the right way to fix the issue; it could cause all kinds of problems on the NodeManagers.

Your ApplicationMaster is using more memory than its container allows, so increase the setting

yarn.app.mapreduce.am.resource.mb from 1 GB to 1.5 GB or 2 GB. When you increase the container size, use increments of the scheduler's increment-allocation size, then run the application again.
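
A minimal sketch of that change from a Hive session. The values here are in MB, and the specific numbers (2 GB container, heap at roughly 80% of it) are assumptions to adjust to your cluster's scheduler increment:

```sql
-- Raise only the ApplicationMaster container size; values are in MB.
SET yarn.app.mapreduce.am.resource.mb=2048;
-- Optionally raise the AM heap as well, keeping it below the
-- container limit so the container check does not kill the AM:
SET yarn.app.mapreduce.am.command-opts=-Xmx1638m;
```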

 

Wilfred

 

 
