Impala - Memory limit exceeded
Labels: Apache Impala, HDFS
Created on 09-27-2018 10:58 PM - edited 09-16-2022 06:45 AM
I'm receiving the memory-limit error below even though a sufficient default memory limit is set on the resource pool, and the explain plan estimates only 288 MB per host on an 18-node cluster, which works out to 5184 MB of total memory consumption:
+-----------------------------------------------------------+
| Explain String |
+-----------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=288.00MB VCores=1 |
| |
| 01:EXCHANGE [UNPARTITIONED] |
| | limit: 1 |
| | |
| 00:SCAN HDFS [fenet5.hmig_os_changes_details_malicious] |
| partitions=1/25 files=3118 size=110.01GB |
| predicates: job_id = 55451 |
| limit: 1 |
+-----------------------------------------------------------+
WARNINGS:
Memory limit exceeded
HdfsParquetScanner::ReadDataPage() failed to allocate 269074889 bytes for dictionary.
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 257.23 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=20.00 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=20.00 GB
HDFS_SCAN_NODE (id=0): Consumption=20.00 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 255.63 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=20.00 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=20.00 GB
HDFS_SCAN_NODE (id=0): Consumption=20.00 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 255.27 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=20.00 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=20.00 GB
HDFS_SCAN_NODE (id=0): Consumption=20.00 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 255.39 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=20.00 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=20.00 GB
HDFS_SCAN_NODE (id=0): Consumption=20.00 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 16.09 KB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.74 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.74 GB
HDFS_SCAN_NODE (id=0): Consumption=19.74 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 15.20 KB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.64 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.64 GB
HDFS_SCAN_NODE (id=0): Consumption=19.64 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 14.61 KB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.64 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.64 GB
HDFS_SCAN_NODE (id=0): Consumption=19.64 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 257.11 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.47 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.47 GB
HDFS_SCAN_NODE (id=0): Consumption=19.47 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 255.51 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.24 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.24 GB
HDFS_SCAN_NODE (id=0): Consumption=19.24 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 255.32 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.24 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.24 GB
HDFS_SCAN_NODE (id=0): Consumption=19.24 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 255.73 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.49 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.49 GB
HDFS_SCAN_NODE (id=0): Consumption=19.49 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 255.29 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.49 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.49 GB
HDFS_SCAN_NODE (id=0): Consumption=19.49 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 256.61 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: memory limit exceeded. Limit=20.00 GB Consumption=20.00 GB
Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=20.00 GB
HDFS_SCAN_NODE (id=0): Consumption=20.00 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 256.05 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.69 GB
Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=19.69 GB
HDFS_SCAN_NODE (id=0): Consumption=19.69 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 255.35 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=17.97 GB
Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=17.97 GB
HDFS_SCAN_NODE (id=0): Consumption=17.72 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 1.02 KB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=17.63 GB
Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=17.63 GB
HDFS_SCAN_NODE (id=0): Consumption=17.63 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 1.01 KB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=16.94 GB
Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=16.94 GB
HDFS_SCAN_NODE (id=0): Consumption=16.68 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 88.00 KB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=16.61 GB
Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=16.61 GB
HDFS_SCAN_NODE (id=0): Consumption=16.36 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 255.23 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=16.30 GB
Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=16.30 GB
HDFS_SCAN_NODE (id=0): Consumption=16.30 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=8.02 GB
Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=8.02 GB
HDFS_SCAN_NODE (id=0): Consumption=8.02 GB
Block Manager: Limit=16.00 GB Consumption=0
Created 10-09-2018 11:05 PM
SET MEM_LIMIT=100g;
for example.
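Applied to the query in this thread, that would look like the following (a sketch; the SELECT statement is reconstructed from the explain plan above, so treat it as illustrative):
SET MEM_LIMIT=100g;
-- raises the per-host memory cap for this session before re-running the scan
SELECT * FROM fenet5.hmig_os_changes_details_malicious WHERE job_id = 55451 LIMIT 1;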
Created 10-09-2018 11:45 PM
I had the impression that MEM_LIMIT only imposed a soft limit on memory and helped with attaining concurrency, but in reality it is a hard limit: the query fails as soon as it exceeds the MEM_LIMIT value. After setting the memory limit to a higher value, the query ran fine with a peak utilization of 27 GB.
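To illustrate the hard-limit behavior (a hypothetical session; the 27 GB peak is the figure reported above, and the statement is again reconstructed from the plan):
SET MEM_LIMIT=20g;
-- hard cap: the query is killed once per-host consumption reaches 20 GB
SELECT * FROM fenet5.hmig_os_changes_details_malicious WHERE job_id = 55451 LIMIT 1;
-- => fails with "Memory limit exceeded"
SET MEM_LIMIT=30g;
-- the cap now sits above the ~27 GB peak, so the same query completes
SELECT * FROM fenet5.hmig_os_changes_details_malicious WHERE job_id = 55451 LIMIT 1;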
Created 11-27-2018 07:58 AM
Can we do the equivalent of "SET MEM_LIMIT=100g;" in a cluster-wide config?
I.e., can we enforce this so that no single Impala query will suck up all the memory on the Impala service?
Created 11-27-2018 10:24 AM
@alexmc6 I'd recommend setting a default memory limit for all of your resource pools. See https://www.cloudera.com/documentation/enterprise/latest/topics/impala_howto_rm.html#concept_en4_3sy... for how to get to that page in CM.
Note that if you set "Max Memory", it will enable memory-based admission control, which is stricter - it won't admit queries if their memory limits add up to more than the available memory.
If you leave "Max Memory" unset, memory-based admission control remains disabled but the memory limit provides some protection against runaway queries, which I think is the incremental step you're looking for.
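If you manage admission control from config files rather than through CM, the same default can be sketched in llama-site.xml (the impala.admission-control.pool-default-query-options property is documented for Impala admission control; the pool name root.default and the 20g value are assumptions for illustration):
<property>
  <!-- default query options applied to every query admitted to this pool; -->
  <!-- mem_limit is the per-query, per-host hard cap discussed above -->
  <name>impala.admission-control.pool-default-query-options.root.default</name>
  <value>mem_limit=20g</value>
</property>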
Created 11-27-2018 10:26 AM
Also, a more general tip is that you can set a default value for *any* query option via the dynamic resource pool interface.
Created 11-28-2018 02:48 AM
Tim Armstrong wrote:
Also, a more general tip is that you can set a default value for *any* query option via the dynamic resource pool interface.
That is really helpful. Thanks!
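One way to confirm that a pool default actually reached your session (standard SET behavior in impala-shell):
SET;
-- with no arguments, SET lists every query option and its current value,
-- so you can check that MEM_LIMIT picked up the pool-level default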
Created 11-28-2018 12:43 AM
Setting max limits in resource pools is not an option for us. They are based on estimated memory consumption, and the estimates are sometimes wildly inaccurate. This has resulted in valid production queries being blocked from running.
Created 11-28-2018 10:31 AM
@alexmc6 I think there's (understandably) some misunderstanding of what the different mechanisms there do.
Memory estimates only play a role if you set "Max Memory" and leave "Default Query Memory Limit" unset or set to 0. I always recommend against that mode for exactly the reason you mentioned.
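As a worked example of the stricter mode (numbers assumed for illustration): with "Max Memory" set to 360 GB and "Default Query Memory Limit" set to 20 GB, admission control charges each query its 20 GB limit rather than its estimate, so at most 360 / 20 = 18 queries are admitted concurrently and later ones queue. Because the per-query limit, not the estimate, drives that math, wildly inaccurate estimates no longer block valid queries.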
