
Impala - Memory limit exceeded

Contributor

I'm receiving the memory-limit error below, even though a sufficient default memory limit is set on the resource pool, and the explain plan estimates only 288 MB per host on an 18-node cluster, which works out to 5,184 MB of total memory consumption:

 

+-----------------------------------------------------------+
| Explain String |
+-----------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=288.00MB VCores=1 |
| |
| 01:EXCHANGE [UNPARTITIONED] |
| | limit: 1 |
| | |
| 00:SCAN HDFS [fenet5.hmig_os_changes_details_malicious] |
| partitions=1/25 files=3118 size=110.01GB |
| predicates: job_id = 55451 |
| limit: 1 |
+-----------------------------------------------------------+

 

WARNINGS:
Memory limit exceeded
HdfsParquetScanner::ReadDataPage() failed to allocate 269074889 bytes for dictionary.

 

Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 257.23 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=20.00 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=20.00 GB
HDFS_SCAN_NODE (id=0): Consumption=20.00 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 255.63 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=20.00 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=20.00 GB
HDFS_SCAN_NODE (id=0): Consumption=20.00 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 255.27 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=20.00 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=20.00 GB
HDFS_SCAN_NODE (id=0): Consumption=20.00 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 255.39 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=20.00 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=20.00 GB
HDFS_SCAN_NODE (id=0): Consumption=20.00 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 16.09 KB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.74 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.74 GB
HDFS_SCAN_NODE (id=0): Consumption=19.74 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 15.20 KB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.64 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.64 GB
HDFS_SCAN_NODE (id=0): Consumption=19.64 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 14.61 KB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.64 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.64 GB
HDFS_SCAN_NODE (id=0): Consumption=19.64 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 257.11 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.47 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.47 GB
HDFS_SCAN_NODE (id=0): Consumption=19.47 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 255.51 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.24 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.24 GB
HDFS_SCAN_NODE (id=0): Consumption=19.24 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 255.32 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.24 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.24 GB
HDFS_SCAN_NODE (id=0): Consumption=19.24 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 255.73 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.49 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.49 GB
HDFS_SCAN_NODE (id=0): Consumption=19.49 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 255.29 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.49 GB
Fragment 294eb435fbf8fc63:f529602818758c8b: Consumption=19.49 GB
HDFS_SCAN_NODE (id=0): Consumption=19.49 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 256.61 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: memory limit exceeded. Limit=20.00 GB Consumption=20.00 GB
Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=20.00 GB
HDFS_SCAN_NODE (id=0): Consumption=20.00 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 256.05 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=19.69 GB
Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=19.69 GB
HDFS_SCAN_NODE (id=0): Consumption=19.69 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 255.35 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=17.97 GB
Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=17.97 GB
HDFS_SCAN_NODE (id=0): Consumption=17.72 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 1.02 KB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=17.63 GB
Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=17.63 GB
HDFS_SCAN_NODE (id=0): Consumption=17.63 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 1.01 KB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=16.94 GB
Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=16.94 GB
HDFS_SCAN_NODE (id=0): Consumption=16.68 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 88.00 KB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=16.61 GB
Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=16.61 GB
HDFS_SCAN_NODE (id=0): Consumption=16.36 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
HDFS_SCAN_NODE (id=0) could not allocate 255.23 MB without exceeding limit.
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=16.30 GB
Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=16.30 GB
HDFS_SCAN_NODE (id=0): Consumption=16.30 GB
DataStreamSender: Consumption=1.45 KB
Block Manager: Limit=16.00 GB Consumption=0
Memory Limit Exceeded
Query(294eb435fbf8fc63:f529602818758c80) Limit: Limit=20.00 GB Consumption=8.02 GB
Fragment 294eb435fbf8fc63:f529602818758c8e: Consumption=8.02 GB
HDFS_SCAN_NODE (id=0): Consumption=8.02 GB
Block Manager: Limit=16.00 GB Consumption=0


10 REPLIES

Super Guru
What's the MEM_LIMIT set for this query? Have you tried to increase the limit for this session to see if it helps?

SET MEM_LIMIT=100g;

for example.
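A hedged sketch of what that looks like in a session, using the table and predicate from the explain plan above (the SELECT list and the 30g value are assumptions; the limit just needs to cover the query's actual peak, not the planner's 288 MB estimate):

```sql
-- Raise the per-query memory limit for this session only (hypothetical value):
SET MEM_LIMIT=30g;

-- Re-run the query from the original post (SELECT list assumed):
SELECT * FROM fenet5.hmig_os_changes_details_malicious
WHERE job_id = 55451
LIMIT 1;
```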

Contributor

@EricL,

 

I had the impression that MEM_LIMIT only imposes a soft limit on memory and helps with concurrency,

 

but in reality it's a hard limit, and a query fails once it exceeds the MEM_LIMIT value. After setting the memory limit to a greater value, the query ran fine with a peak utilization of 27 GB.

 

Super Guru
Yes, it is a hard limit, and the query will fail if it goes over it.

Explorer
EricL:
Can we do the equivalent of "SET MEM_LIMIT=100g;" in a cluster-wide config?
I.e., can we enforce this so that no single Impala query can consume all the memory on the Impala service?


@alexmc6 I'd recommend setting a default memory limit for all of your resource pools. See https://www.cloudera.com/documentation/enterprise/latest/topics/impala_howto_rm.html#concept_en4_3sy... for how to get to that page in CM.

 

Note that if you set "Max Memory", it will enable memory-based admission control, which is stricter - it won't admit queries if their memory limits add up to more than the available memory.

 

If you leave "Max Memory" unset, memory-based admission control remains disabled but the memory limit provides some protection against runaway queries, which I think is the incremental step you're looking for.
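A hypothetical worked example of that admission arithmetic (the numbers are illustrative, not from this thread):

```
Pool settings (Dynamic Resource Pools in Cloudera Manager), hypothetical:
  Max Memory:                  360 GB   -- aggregate limit across the cluster
  Default Query Memory Limit:   20 GB   -- per host, per query

On an 18-node cluster, each query counts as 18 x 20 GB = 360 GB against
"Max Memory", so only one such query is admitted at a time; further
queries are queued rather than allowed to run unbounded.
```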

Also, a more general tip is that you can set a default value for *any* query option via the dynamic resource pool interface. You can have different values per pool and you can change the values without a cluster restart - you only need to change the config and hit "Refresh" to push out the changes for new queries.
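For instance, a hypothetical value for a pool's "Default Query Options" field (the field takes a comma-separated list of OPTION=value pairs; these particular options and values are illustrative):

```
MEM_LIMIT=20g,EXPLAIN_LEVEL=1
```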

Explorer

Tim Armstrong wrote:

Also, a more general tip is that you can set a default value for *any* query option via the dynamic resource pool interface. 

 

 

That is really helpful. Thanks!

Explorer
Sorry Tim.

Setting max limits in resource pools is not an option for us. They are based on the estimated memory consumption, and the estimates are sometimes wildly inaccurate. This has resulted in valid production queries being blocked from running.


@alexmc6 I think there's (understandably) some misunderstanding of what the different mechanisms do.


Memory estimates only play a role if you set "Max Memory" and leave "Default Query Memory Limit" unset or set to 0. I always recommend against that mode for exactly the reason you mentioned.