Hive query failing because container is using memory beyond limits
Labels: Apache Hive, Apache YARN, MapReduce
Created on 06-16-2016 09:55 AM - edited 09-16-2022 03:25 AM
Hi,
I am running a Hive query and it is failing with the error below. The YARN configuration details are also included. The error says the limit is 1 GB of physical memory, but we have set mapreduce.map.memory.mb = 4 GB. I am not sure where it picked up the 1 GB value from. Can anyone please help?
Diagnostics: Container [pid=62593,containerID=container_1465308864800_1323_02_000001] is running beyond physical memory limits.
Current usage: 1.0 GB of 1 GB physical memory used; 1.9 GB of 2.1 GB virtual memory used. Killing container.
Container Memory Maximum
yarn.scheduler.minimum-allocation-mb = 1 GB
yarn.scheduler.maximum-allocation-mb = 8 GB
Map Task Memory
mapreduce.map.memory.mb = 4 GB
Reduce Task Memory
mapreduce.reduce.memory.mb = 8 GB
Map Task Maximum Heap Size
mapreduce.map.java.opts.max.heap = 3 GB
Reduce Task Maximum Heap Size
mapreduce.reduce.java.opts.max.heap = 6 GB
ApplicationMaster Memory
yarn.app.mapreduce.am.resource.mb = 1 GB
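To see where the 1 GB limit comes from, it can help to print the effective values from the same Hive session that submits the query. This is only a suggested check: SET with no value prints the setting the job will actually pick up (properties absent from the client configuration are reported as undefined).
-- Run in the Hive CLI or Beeline session that launches the failing query.
SET yarn.app.mapreduce.am.resource.mb;      -- ApplicationMaster container size
SET mapreduce.map.memory.mb;                -- map task container size
SET mapreduce.reduce.memory.mb;             -- reduce task container size
SET yarn.scheduler.minimum-allocation-mb;   -- scheduler minimum allocation
SET yarn.scheduler.maximum-allocation-mb;   -- scheduler maximum allocation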
Created 06-19-2016 09:31 PM
I have faced this type of problem several times. I tried the same approach as you, but the problem was not resolved. Then I changed the properties below:
mapreduce.map.memory.mb = 0
mapreduce.reduce.memory.mb = 0
Now it is working fine for me. Please try the above and post the result.
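If anyone wants to try this, a minimal sketch of the change suggested in this reply, applied at the session level before running the query (a later reply in this thread explains why scheduling on zero memory is not recommended):
-- Session-level overrides as suggested in this reply; see the follow-up
-- reply below for why disabling memory-based scheduling can cause problems.
SET mapreduce.map.memory.mb=0;
SET mapreduce.reduce.memory.mb=0;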
Created 12-06-2017 01:33 PM
Hi Chaitanya, is there any reason why we need to set these to "0"? If yes, please provide some justification.
mapreduce.map.memory.mb = 0
mapreduce.reduce.memory.mb = 0
Created 12-08-2017 03:44 AM
Setting the memory to 0 means that you are no longer scheduling on memory, and it also turns off the container size checks. This is not the right way to fix the issue, and it could cause all kinds of problems on the NodeManagers.
Your ApplicationMaster is using more than its container allows, so increase yarn.app.mapreduce.am.resource.mb from 1 GB to 1.5 GB or 2 GB. Grow the container in increments of the scheduler increment size you have configured, then run the application again.
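As an illustration only, assuming a scheduler increment of 512 MB or 1024 MB, the change could be applied at the session level before resubmitting the query; the 2048 MB figure is just an example, not a recommendation for every cluster:
-- Give the MapReduce ApplicationMaster a larger container (illustrative value).
SET yarn.app.mapreduce.am.resource.mb=2048;
-- Keep the AM heap below the container size; roughly 80% is a common rule of thumb.
SET yarn.app.mapreduce.am.command-opts=-Xmx1638m;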
Wilfred
Created 09-13-2018 10:27 AM
Please follow the steps below for container size control.
Now comes the complicated part: there are various overlapping and poorly documented options for setting the size of Tez containers.
According to some references, the following options control how Tez jobs started by Hive behave (see the sketch after this list):
- hive.tez.container.size – value in megabytes
- hive.tez.java.opts
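A minimal sketch of how these might be set from a Hive session, assuming Tez is the execution engine; the sizes are illustrative and follow the same idea of keeping the Java heap below the container size:
-- Tez container size is given in megabytes (illustrative values).
SET hive.execution.engine=tez;
SET hive.tez.container.size=4096;
-- Keep the Tez task heap below the container size (roughly 80%).
SET hive.tez.java.opts=-Xmx3276m;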
