
Impala memory limit exceeded

Explorer

Hi, 

 

When we execute a query with GROUP BY, HAVING, etc. clauses, Impala shows this error:

 

Memory limit exceeded The memory limit is set too low to initialize spilling operator (id=2). The minimum required memory to spill this operator is 272.00 MB.

 

How can we set the minimum required memory?

How can we solve this?

 

Thanks



Hi efumas,

  What version of Impala are you running? In more recent versions of Impala, the query error log includes a more detailed dump of which query operators are using memory. The details will also likely show up in the impalad* logs.
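For example, something like this will usually surface the relevant messages on a node (the path assumes the CDH default log directory; adjust for your install):

$ grep -i "memory limit exceeded" /var/log/impalad/impalad.ERROR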

 

Generally this error means that you don't have enough memory to execute the query. Two memory limits can apply: the total process memory limit (set for an entire Impala daemon when it is started) and the per-query memory limit (set via the mem_limit query option).
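As a quick sketch (the host name and the 2g value are placeholders, not tuned recommendations), the per-query limit can be raised for a single impala-shell session before re-running the statement:

$ impala-shell -i impalad-host
> SET MEM_LIMIT=2g;   -- applies only to queries in this session
> SELECT ... ;        -- re-run the failing GROUP BY / HAVING query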

 

- Tim

Explorer

Hi Tim, 

 

We are using impalad version 2.7.0-cdh5-IMPALA_KUDU-cdh5 RELEASE.

 

We are also using Cloudera Manager to configure all of the parameters. We have now set:

 

Impala Daemon Memory Limit [mem_limit] -> 8 GB

 

But the problem is still not solved:

 

"Memory limit exceeded The memory limit is set too low to initialize spilling operator (id=7). The minimum required memory to spill this operator is 264.00 MB."

 

Do you have any idea?

 

Thanks

Explorer

Hi Tim, 

 

For more information, this is the query log: 

 

Query ID: 114889f08fc355ac:5d5b67a64160deb8
- Query Type: QUERY
- Query State: CREATED
- Start Time: 13-Jan-2017 9:06:04
- Duration: 9s
- Rows Produced: 0
- Memory Accrual: 65,536 byte seconds
- Admission Result: Admitted immediately
- Admission Wait Time: 0 ms
- Bytes Streamed: 22.5 MiB
- Query Status: Memory limit exceeded The memory limit is set too low to initialize spilling operator (id=7). The minimum required memory to spill this operator is 264.00 MB.
- Estimated per-Node Peak Memory: 2.2 GiB
- Missing Statistics: true
- File Formats:
- Session ID: d47b5662d669d90:4ef3cf9dc2eafbab
- Memory Spilled: 72.0 MiB
- Out of Memory: false
- Pool: root.default
- Planning Wait Time Percentage: 1
- Client Fetch Wait Time Percentage: 0
- Threads: CPU Time Percentage: 76
- Threads: Storage Wait Time Percentage: 0
- Threads: Network Send Wait Time Percentage: 22
- Threads: Network Receive Wait Time Percentage: 2
- Threads: CPU Time: 1.56s
- Threads: Storage Wait Time: 0 ms
- Threads: Network Send Wait Time: 463 ms
- Threads: Network Receive Wait Time: 39 ms
- Threads: Total Time: 2.06s
- Work CPU Time: 1.56s
- Planning Wait Time: 75 ms
- Client Fetch Wait Time: 0 ms
- Session Type: HIVESERVER2
- Aggregate Peak Memory Usage: 223.8 MiB
- Per-Node Peak Memory Usage: 223.8 MiB
- Connected User: admin
- Impala Version: impalad version 2.7.0-cdh5-IMPALA_KUDU-cdh5 RELEASE (build fc36c3c7fbbbdfb0e8b1b0e6ee7505531a384550)


It looks like the query was only able to get 223 MB of memory - perhaps there were other queries running at the same time?

Expert Contributor

I am having the same issue.

I am using CDH 5.8.0 and CM 5.8.1.

 

WARNINGS:
Memory limit exceeded
The memory limit is set too low to initialize spilling operator (id=3). The minimum required memory to spill this operator is 264.00 MB.

 

Memory Limit Exceeded
Query(60409f68f36d7b3d:301437049bd7bba0) Limit: Consumption=160.58 MB
  Fragment 60409f68f36d7b3d:301437049bd7bba2: Consumption=123.18 MB
    AGGREGATION_NODE (id=3): Consumption=122.02 MB
    EXCHANGE_NODE (id=2): Consumption=0
    DataStreamRecvr: Consumption=1.16 MB
  Fragment 60409f68f36d7b3d:301437049bd7bba5: Consumption=37.40 MB
    AGGREGATION_NODE (id=1): Consumption=11.03 MB
    HDFS_SCAN_NODE (id=0): Consumption=26.23 MB
    DataStreamSender: Consumption=80.00 KB
  Block Manager: Limit=156.00 MB Consumption=114.00 MB

Could not execute command: select isr, count(isr) as counts from aers.demo_drug_reac_combo_clean group by isr having counts > 1

 

Impala 2.6.0+cdh5.8.0+0

 

My query is ultra-simple:

select isr, count(isr) as counts from aers.demo_drug_reac_combo_clean group by isr having counts > 1

 

aers.demo_drug_reac_combo_clean contains only 10 million records and 9 columns.

The columns and a sample row are as follows:

| isr     | drugname     | pt                 | year | age | age_cod | age_norm | age_group |
| 3175747 | troglitazone | hepatotoxicity nos | 1999 | 68  | YR      | 68       | 65-69     |
 

Hadoop Cluster Setup

====================

3 nodes (HP8300 Elite desktops), 32 GB RAM per node


Hi Sanjumani,

  My guess is that it wasn't able to get enough memory due to other concurrent queries. The query consumed only 160.58 MB of memory and probably wasn't able to get more.

 

If you have access to the Impala debug web UI, you can look at http://hostname:25000/queries to see what other queries are running on that coordinator, and http://hostname:25000/memz?detailed=true to see what is consuming memory on each host.

 

It's also good to confirm Impala's memory limit setting: you can see "mem_limit" on http://hostname:25000/varz
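If it's easier to capture these from a terminal, the same debug pages can be fetched with curl (the host name is a placeholder; check each impalad in the cluster):

$ curl -s 'http://hostname:25000/queries'                 # running and recent queries
$ curl -s 'http://hostname:25000/memz?detailed=true'      # detailed per-process memory breakdown
$ curl -s 'http://hostname:25000/varz' | grep mem_limit   # startup flags, including mem_limit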

 

- Tim

Expert Contributor

Awesome, thanks Tim!

I did the memory checks, specifically:

http://hostname:25000/memz?detailed=true

and realized the mem_limit was somehow 6 GB for nodes 1 and 2 but 256 MB on node 3 😞

I changed all three to 6 GB each and the query works now. I really appreciate your help, and my belief in Cloudera has only become tenfold stronger!
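A quick way to confirm the setting now matches on every node is to check /varz on each one (the host names below are placeholders; substitute your own):

$ for h in node1 node2 node3; do echo "== $h =="; curl -s "http://$h:25000/varz" | grep mem_limit; done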

warmly and appreciatively

sanjay  


Rising Star

Hey Guys - I am using CDH 5.10.1 and noticed the exact same error. In our case, the required mem_limit was 686 MB and we gave it 3 GB. At the time this query was running, there was no other query on the coordinator, so it's quite confusing that it gives this error.

 

Please let me know if any of you have figured out a solution to this problem.