Hive query failing because container is using memory beyond limits

New Contributor

Hi,

I am running a Hive query and it is failing with the error below; the YARN config details are also included. The error says the physical memory limit is 1 GB, but we have set mapreduce.map.memory.mb = 4 GB. I am not sure where it picked up the 1 GB value. Can anyone please help?

Diagnostics: Container [pid=62593,containerID=container_1465308864800_1323_02_000001] is running beyond physical memory limits.
Current usage: 1.0 GB of 1 GB physical memory used; 1.9 GB of 2.1 GB virtual memory used. Killing container.

Container Memory Maximum
yarn.scheduler.minimum-allocation-mb = 1 GB
yarn.scheduler.maximum-allocation-mb = 8 GB

Map Task Memory
mapreduce.map.memory.mb = 4 GB

Reduce Task Memory
mapreduce.reduce.memory.mb = 8 GB

Map Task Maximum Heap Size
mapreduce.map.java.opts.max.heap = 3 GB

Reduce Task Maximum Heap Size
mapreduce.reduce.java.opts.max.heap = 6 GB

ApplicationMaster Memory
yarn.app.mapreduce.am.resource.mb = 1 GB

4 REPLIES

Expert Contributor

I have faced this type of problem several times. I tried the same settings as you, but the problem couldn't be resolved. Then I changed the properties below:

mapreduce.map.memory.mb = 0
mapreduce.reduce.memory.mb = 0

Now it's working fine for me. Please try the above and post the result.

New Contributor

Hi Chaitanya, is there any reason why we need to set these to "0"? If yes, please provide some justification:

mapreduce.map.memory.mb = 0
mapreduce.reduce.memory.mb = 0

Super Collaborator

Setting the memory to 0 means that you are not scheduling on memory any more, and it also turns off container size checks. This is not the right way to fix the issue; it could cause all kinds of problems on the NMs.

Your AM is using more than the container allows, so increase the setting yarn.app.mapreduce.am.resource.mb from 1 GB to 1.5 GB or 2 GB. When you increase the container size, use increments of whatever you have set the scheduler increment to, then run the application again.
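For example, one way to apply this for the Hive session before rerunning the query (a sketch, assuming a 2 GB target; values are in MB, and yarn.app.mapreduce.am.command-opts is the matching AM heap setting, conventionally kept around 80% of the container size):

SET yarn.app.mapreduce.am.resource.mb=2048;
SET yarn.app.mapreduce.am.command-opts=-Xmx1638m;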

Wilfred

New Contributor

Please follow the steps below.

Options for container size control

Now comes the complicated part: there are various overlapping and very poorly documented options for setting the size of Tez containers.

According to some links, the following options control how Tez jobs started by Hive behave:

  • hive.tez.container.size – value in megabytes
  • hive.tez.java.opts
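For example, a minimal sketch of setting these per session (the values are illustrative, not from the original post; hive.tez.container.size is in MB, and the -Xmx in hive.tez.java.opts is conventionally around 80% of the container size):

SET hive.tez.container.size=4096;
SET hive.tez.java.opts=-Xmx3276m;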