Impala memory limit exceeded
Labels: Apache Impala
Created on 01-12-2017 02:59 AM - edited 09-16-2022 03:54 AM
Hi,
When we execute a query with GROUP BY, HAVING, etc. clauses, Impala shows this error:
Memory limit exceeded The memory limit is set too low to initialize spilling operator (id=2). The minimum required memory to spill this operator is 272.00 MB.
How can we set the minimum required memory?
How can we solve this?
Thanks
Created 01-12-2017 03:07 PM
Hi efumas,
What version of Impala are you running? For more recent versions of Impala the query error log will include a more detailed dump of which query operators are using memory. It will also likely show up in the impalad* logs.
Generally this error means that you don't have enough memory to execute the query. The memory limits that can apply are the total process memory limit (set for an entire Impala daemon when it is started) or the query memory limit (set via the mem_limit query option).
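For example, here is a minimal sketch of raising the per-query limit from impala-shell before re-running the query (the 2g value and the table/column names below are purely illustrative, not a recommendation):

    -- Raise the query-level memory limit for this impala-shell session (example value only).
    SET MEM_LIMIT=2g;
    -- Then re-run the failing GROUP BY / HAVING query in the same session, e.g.:
    SELECT some_col, COUNT(*) AS cnt
    FROM some_table
    GROUP BY some_col
    HAVING cnt > 1;

Keep in mind this only changes the per-query limit for that session; the daemon's process memory limit still has to be large enough to accommodate it.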
- Tim
Created 01-13-2017 12:46 AM
Hi Tim,
We are using: version 2.7.0-cdh5-IMPALA_KUDU-cdh5 RELEASE
We are also using Cloudera Manager to configure all of the parameters. We have now set:
Impala Daemon Memory Limit [mem_limit] ----> 8GB
But the problem isn't solved.
"Memory limit exceeded The memory limit is set too low to initialize spilling operator (id=7). The minimum required memory to spill this operator is 264.00 MB."
Do you have any idea?
Thanks
Created on 01-13-2017 01:10 AM - edited 01-13-2017 01:10 AM
Hi Tim,
For more information, this is the query log:
- Query ID: 114889f08fc355ac:5d5b67a64160deb8
- Query type: QUERY
- Query state: CREATED
- Start time: 13-Jan-2017 9:06:04
- Duration: 9s
- Rows produced: 0
- Memory accrual: 65,536 byte seconds
- Admission Result: Admitted immediately
- Admission Wait Time: 0 ms
- Bytes streamed: 22.5 MiB
- Query status: Memory limit exceeded The memory limit is set too low to initialize spilling operator (id=7). The minimum required memory to spill this operator is 264.00 MB.
- Estimated per-node peak memory: 2.2 GiB
- Missing stats: true
- File formats:
- Session ID: d47b5662d669d90:4ef3cf9dc2eafbab
- Memory Spilled: 72.0 MiB
- Out of Memory: false
- Pool: root.default
- Planning wait time percentage: 1
- Client fetch wait time percentage: 0
- Threads: CPU time percentage: 76
- Threads: storage wait time percentage: 0
- Threads: network send wait time percentage: 22
- Threads: network receive wait time percentage: 2
- Threads: CPU time: 1.56s
- Threads: storage wait time: 0 ms
- Threads: network send wait time: 463 ms
- Threads: network receive wait time: 39 ms
- Threads: total time: 2.06s
- Work CPU time: 1.56s
- Planning wait time: 75 ms
- Client fetch wait time: 0 ms
- Session type: HIVESERVER2
- Aggregate peak memory usage: 223.8 MiB
- Per-node peak memory usage: 223.8 MiB
- Connected user: admin
- Impala version: impalad version 2.7.0-cdh5-IMPALA_KUDU-cdh5 RELEASE (build fc36c3c7fbbbdfb0e8b1b0e6ee7505531a384550)
Created 01-19-2017 01:32 PM
It looks like the query was only able to get 223MB of memory - perhaps there are other queries running at the same time?
Created 01-30-2017 08:02 AM
I am having the same issue.
I am using CDH 5.8.0 and CM 5.8.1.
WARNINGS:
Memory limit exceeded
The memory limit is set too low to initialize spilling operator (id=3). The minimum required memory to spill this operator is 264.00 MB.
Memory Limit Exceeded
Query(60409f68f36d7b3d:301437049bd7bba0) Limit: Consumption=160.58 MB
Fragment 60409f68f36d7b3d:301437049bd7bba2: Consumption=123.18 MB
AGGREGATION_NODE (id=3): Consumption=122.02 MB
EXCHANGE_NODE (id=2): Consumption=0
DataStreamRecvr: Consumption=1.16 MB
Fragment 60409f68f36d7b3d:301437049bd7bba5: Consumption=37.40 MB
AGGREGATION_NODE (id=1): Consumption=11.03 MB
HDFS_SCAN_NODE (id=0): Consumption=26.23 MB
DataStreamSender: Consumption=80.00 KB
Block Manager: Limit=156.00 MB Consumption=114.00 MB
Could not execute command: select isr, count(isr) as counts from aers.demo_drug_reac_combo_clean group by isr having counts > 1
Impala version: 2.6.0+cdh5.8.0+0
My query is ultra simple
select isr, count(isr) as counts from aers.demo_drug_reac_combo_clean group by isr having counts > 1
aers.demo_drug_reac_combo_clean contains only 10 million records and 9 cols
Metadata is as follows
| isr | drugname | pt | year | age | age_cod | age_norm | age_group |
| 3175747 | troglitazone | hepatotoxicity nos | 1999 | 68 | YR | 68 | 65-69 |
Hadoop Cluster Setup
====================
3 nodes (HP8300 Elite Desktops), 32GB RAM per node
Created 01-31-2017 05:30 PM
Hi Sanjumani,
My guess is that it wasn't able to get enough memory due to other concurrent queries. The query consumed only 160.58MB of memory and I think probably wasn't able to get more.
If you have access to the Impala debug web UI, you can look at http://hostname:25000/queries to see what other queries are running on that coordinator, and http://hostname:25000/memz?detailed=true to see what is consuming memory on each host.
It's also good to confirm Impala's memory limit setting: you can see "mem_limit" on http://hostname:25000/varz
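As a rough cross-check from impala-shell itself, you can also list the session's query options; note this shows the query-level mem_limit option, not the daemon's process limit:

    -- In impala-shell, SET with no arguments prints every query option and its current value.
    SET;

The process-wide limit of each impalad is still best confirmed on the /varz page above.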
- Tim
Created 01-31-2017 08:15 PM
Awesome, thanks Tim!
I did the memory checks, specifically
http://hostname:25000/memz?detailed=true
and realized the mem_limit was somehow 6GB for nodes 1 and 2 but only 256MB on node 3.
I changed all three to 6GB each and the query works now. Really appreciate your help, and my belief in Cloudera has only become tenfold stronger!
warmly and appreciatively
sanjay
Created 05-25-2017 09:38 AM
Hey guys, I am using CDH 5.10.1 and noticed the exact same error. In our case, the required mem_limit was 686 MB and we gave it 3 GB. At the time this query was running, there was no other query on the coordinator, so it's quite confusing that it gives this error.
Please let me know if any of you have figured out a solution to this problem.
