<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: impala  memory limit exceed in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/49413#M15617</link>
    <description>Cloudera Community Support Questions thread: impala memory limit exceed</description>
    <pubDate>Fri, 13 Jan 2017 09:10:53 GMT</pubDate>
    <dc:creator>efumas</dc:creator>
    <dc:date>2017-01-13T09:10:53Z</dc:date>
    <item>
      <title>impala  memory limit exceed</title>
      <link>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/49356#M15614</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hi,&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;When we execute a query with GROUP BY, HAVING, etc. clauses, Impala shows this error:&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Memory limit exceeded The memory limit is set too low to initialize spilling operator (id=2). The minimum required memory to spill this operator is 272.00 MB.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;How can we set the minimum required memory?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;How can we solve it?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Thanks&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 10:54:29 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/49356#M15614</guid>
      <dc:creator>efumas</dc:creator>
      <dc:date>2022-09-16T10:54:29Z</dc:date>
    </item>
    <item>
      <title>Re: impala  memory limit exceed</title>
      <link>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/49393#M15615</link>
      <description>&lt;P&gt;Hi efumas,&lt;/P&gt;&lt;P&gt;&amp;nbsp; What version of Impala are you running? For more recent versions of Impala the query error log will include a more detailed dump of which query operators are using memory. It will also likely show up in the impalad* logs.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Generally this error means that you don't have enough memory to execute the query. The memory limits that can apply are the total process memory limit (set for an entire Impala daemon when it is started) or the query memory limit (set via the mem_limit query option).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;- Tim&lt;/P&gt;</description>
      <pubDate>Thu, 12 Jan 2017 23:07:46 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/49393#M15615</guid>
      <dc:creator>Tim Armstrong</dc:creator>
      <dc:date>2017-01-12T23:07:46Z</dc:date>
    </item>
    <item>
      <title>Re: impala  memory limit exceed</title>
      <link>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/49412#M15616</link>
      <description>&lt;P&gt;Hi Tim,&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We are using version 2.7.0-cdh5-IMPALA_KUDU-cdh5 RELEASE.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We are also using Cloudera Manager to configure all of the parameters. We have now set:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Impala Daemon Memory Limit [mem_limit] ----&amp;gt; 8GB&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;But that didn't solve the problem:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;"Memory limit exceeded The memory limit is set too low to initialize spilling operator (id=7). The minimum required memory to spill this operator is 264.00 MB."&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Do you have any idea?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Thanks&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 13 Jan 2017 08:46:17 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/49412#M15616</guid>
      <dc:creator>efumas</dc:creator>
      <dc:date>2017-01-13T08:46:17Z</dc:date>
    </item>
    <item>
      <title>Re: impala  memory limit exceed</title>
      <link>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/49413#M15617</link>
      <description>&lt;P&gt;Hi Tim,&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For more information, this is the query log:&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;-&amp;nbsp;&lt;SPAN&gt;Query ID: &lt;STRONG&gt;114889f08fc355ac:5d5b67a64160deb8&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Query Type: &lt;STRONG&gt;QUERY&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Query State: &lt;STRONG&gt;CREATED&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Start Time: &lt;STRONG&gt;13-Jan-2017 9:06:04&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Duration: &lt;STRONG&gt;9s&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Rows Produced: &lt;STRONG&gt;0&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Memory Accrual: &lt;STRONG&gt;65,536 byte seconds&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;-&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN&gt;Admission Result: &lt;STRONG&gt;Admitted immediately&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Admission Wait Time: &lt;STRONG&gt;0 ms&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Bytes Streamed: &lt;STRONG&gt;22.5 MiB&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Query Status: &lt;STRONG&gt;Memory limit exceeded The memory limit is set too low to initialize spilling operator (id=7). The minimum required memory to spill this operator is 264.00 MB.&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Estimated per Node Peak Memory: &lt;STRONG&gt;2.2 GiB&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Missing Stats: &lt;STRONG&gt;true&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- File Formats:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Session ID: &lt;STRONG&gt;d47b5662d669d90:4ef3cf9dc2eafbab&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Memory Spilled: &lt;STRONG&gt;72.0 MiB&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Out of Memory: &lt;STRONG&gt;false&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Pool: &lt;STRONG&gt;root.default&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Planning Wait Time Percentage: &lt;STRONG&gt;1&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Client Fetch Wait Time Percentage: &lt;STRONG&gt;0&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Threads: CPU Time Percentage: &lt;STRONG&gt;76&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Threads: Storage Wait Time Percentage: &lt;STRONG&gt;0&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Threads: Network Send Wait Time Percentage: &lt;STRONG&gt;22&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Threads: Network Receive Wait Time Percentage: &lt;STRONG&gt;2&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Threads: CPU Time: &lt;STRONG&gt;1.56s&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Threads: Storage Wait Time: &lt;STRONG&gt;0 ms&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Threads: Network Send Wait Time: &lt;STRONG&gt;463 ms&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Threads: Network Receive Wait Time: &lt;STRONG&gt;39 ms&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Threads: Total Time: &lt;STRONG&gt;2.06s&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Work CPU Time: &lt;STRONG&gt;1.56s&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Planning Wait Time: &lt;STRONG&gt;75 ms&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Client Fetch Wait Time: &lt;STRONG&gt;0 ms&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Session Type: &lt;STRONG&gt;HIVESERVER2&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Aggregate Peak Memory Usage: &lt;STRONG&gt;223.8 MiB&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Per Node Peak Memory Usage: &lt;STRONG&gt;223.8 MiB&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Connected User: &lt;STRONG&gt;admin&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;- Impala Version: &lt;STRONG&gt;impalad version 2.7.0-cdh5-IMPALA_KUDU-cdh5 RELEASE (build fc36c3c7fbbbdfb0e8b1b0e6ee7505531a384550)&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 13 Jan 2017 09:10:53 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/49413#M15617</guid>
      <dc:creator>efumas</dc:creator>
      <dc:date>2017-01-13T09:10:53Z</dc:date>
    </item>
    <item>
      <title>Re: impala  memory limit exceed</title>
      <link>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/49735#M15618</link>
      <description>&lt;P&gt;It looks like the query was only able to get 223MB of memory - perhaps there are other queries running at the same time?&lt;/P&gt;</description>
      <pubDate>Thu, 19 Jan 2017 21:32:22 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/49735#M15618</guid>
      <dc:creator>Tim Armstrong</dc:creator>
      <dc:date>2017-01-19T21:32:22Z</dc:date>
    </item>
    <item>
      <title>Re: impala  memory limit exceed</title>
      <link>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/50151#M15619</link>
      <description>&lt;P&gt;&lt;FONT face="courier new,courier"&gt;I am having same issues. &lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;I use CDH 5.8.0 CM 5.8.1&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;WARNINGS:&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Memory limit exceeded&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;The memory limit is set too low to initialize spilling operator (id=3). The minimum required memory to spill this operator is 264.00 MB.&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;Memory Limit Exceeded&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Query(60409f68f36d7b3d:301437049bd7bba0) Limit: Consumption=160.58 MB&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Fragment 60409f68f36d7b3d:301437049bd7bba2: Consumption=123.18 MB&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;AGGREGATION_NODE (id=3): Consumption=122.02 MB&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;EXCHANGE_NODE (id=2): Consumption=0&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;DataStreamRecvr: Consumption=1.16 MB&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Fragment 60409f68f36d7b3d:301437049bd7bba5: Consumption=37.40 MB&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;AGGREGATION_NODE (id=1): Consumption=11.03 MB&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;HDFS_SCAN_NODE (id=0): Consumption=26.23 MB&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;DataStreamSender: Consumption=80.00 KB&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;Block Manager: Limit=156.00 MB Consumption=114.00 MB&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;Could not execute command: select isr, 
count(isr) as counts from aers.demo_drug_reac_combo_clean group by isr having counts &amp;gt; 1&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;TABLE&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;&lt;FONT face="courier new,courier"&gt;Impala&lt;/FONT&gt;&lt;/TD&gt;&lt;TD&gt;&lt;FONT face="courier new,courier"&gt;2.6.0+cdh5.8.0+0&lt;/FONT&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;My query is ultra simple&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;select isr, count(isr) as counts from aers.demo_drug_reac_combo_clean group by isr having counts &amp;gt; 1&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;aers.demo_drug_reac_combo_clean contains only 10 million records and 9 cols&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;Metadata is as follows&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;| isr | drugname | pt | year | age | age_cod | age_norm | age_group |&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;| 3175747 | troglitazone | hepatotoxicity nos | 1999 | 68 | YR | 68 | 65-69 |&lt;/FONT&gt;&lt;BR /&gt;&lt;FONT face="courier new,courier"&gt;&amp;nbsp;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;Hadoop Cluster Setup&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT face="courier new,courier"&gt;====================&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;3 nodes (HP8300 Elite Desktops) , 32GB RAM each node&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Jan 2017 16:02:14 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/50151#M15619</guid>
      <dc:creator>sanjaysubramani</dc:creator>
      <dc:date>2017-01-30T16:02:14Z</dc:date>
    </item>
    <item>
      <title>Re: impala  memory limit exceed</title>
      <link>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/50220#M15620</link>
      <description>&lt;P&gt;Hi Sanjay,&lt;/P&gt;&lt;P&gt;&amp;nbsp; My guess is that it wasn't able to get enough memory due to other concurrent queries. The query consumed only 160.58MB of memory and probably wasn't able to get more.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you have access to the Impala debug web UI, you can look at http://hostname:25000/queries to see what other queries are running on that coordinator, and http://hostname:25000/memz?detailed=true to see what is consuming memory on each host.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It's also good to confirm Impala's memory limit setting: you can see "mem_limit" on http://hostname:25000/varz&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;- Tim&lt;/P&gt;</description>
      <pubDate>Wed, 01 Feb 2017 01:30:22 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/50220#M15620</guid>
      <dc:creator>Tim Armstrong</dc:creator>
      <dc:date>2017-02-01T01:30:22Z</dc:date>
    </item>
    <item>
      <title>Re: impala  memory limit exceed</title>
      <link>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/50230#M15621</link>
      <description>&lt;P&gt;Awesome, thanks Tim!&lt;/P&gt;&lt;P&gt;I did the memory checks, specifically&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;http://hostname:25000/memz?detailed=true&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;and realized the mem_limit was somehow 6GB for nodes 1 and 2 but 256MB on node 3 &lt;span class="lia-unicode-emoji" title=":disappointed_face:"&gt;😞&lt;/span&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;I changed all three to 6GB each and the query works now. I really appreciate your help, and my belief in Cloudera only becomes tenfold stronger!&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;warmly and appreciatively,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;sanjay&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 01 Feb 2017 04:15:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/50230#M15621</guid>
      <dc:creator>sanjaysubramani</dc:creator>
      <dc:date>2017-02-01T04:15:18Z</dc:date>
    </item>
    <item>
      <title>Re: impala  memory limit exceed</title>
      <link>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/55059#M15623</link>
      <description>&lt;P&gt;Hey guys - I am using CDH 5.10.1 and noticed the exact same error. In our case, the required mem_limit was 686MB and we gave it 3GB. When this query was running, there was no other query on the coordinator, so it's quite confusing that it gives this error.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please let me know if any of you have figured out a solution to this problem.&lt;/P&gt;</description>
      <pubDate>Thu, 25 May 2017 16:38:17 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/55059#M15623</guid>
      <dc:creator>krishnat</dc:creator>
      <dc:date>2017-05-25T16:38:17Z</dc:date>
    </item>
    <item>
      <title>Re: impala  memory limit exceed</title>
      <link>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/55061#M15624</link>
      <description>&lt;P&gt;Hi Krishnat,&lt;/P&gt;&lt;P&gt;&amp;nbsp; It depends on the complexity of the query - that number is per-plan-node, not global. It may need more memory if there are a lot of operators in the plan. Hard to know without seeing the plan or execution summary (or both).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Historically there were various bugs that might result in this happening in certain cases, but I believe all the fixes landed in 5.10.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I agree that the message and behaviour could be a lot more helpful/actionable - improving spill-to-disk is actually my primary focus right now - I'm very excited about the changes we have in the pipeline.&lt;/P&gt;</description>
      <pubDate>Thu, 25 May 2017 17:09:33 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/55061#M15624</guid>
      <dc:creator>Tim Armstrong</dc:creator>
      <dc:date>2017-05-25T17:09:33Z</dc:date>
    </item>
    <item>
      <title>Re: impala  memory limit exceed</title>
      <link>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/55069#M15625</link>
      <description>Estimated Per-Host Requirements: Memory=628.99MB VCores=3&lt;BR /&gt;&lt;BR /&gt;PLAN-ROOT SINK&lt;BR /&gt;|&lt;BR /&gt;66:EXCHANGE [UNPARTITIONED]&lt;BR /&gt;| hosts=10 per-host-mem=unavailable&lt;BR /&gt;| tuple-ids=20,42N row-size=210B cardinality=26977&lt;BR /&gt;|&lt;BR /&gt;32:HASH JOIN [LEFT OUTER JOIN, PARTITIONED]&lt;BR /&gt;| hash predicates: campaign = campaign, carrier = carrier, market = market, sessiontype = sessiontype&lt;BR /&gt;| hosts=10 per-host-mem=292.69KB&lt;BR /&gt;| tuple-ids=20,42N row-size=210B cardinality=26977&lt;BR /&gt;|&lt;BR /&gt;|--65:EXCHANGE [HASH(campaign,market,carrier,sessiontype)]&lt;BR /&gt;| | hosts=10 per-host-mem=0B&lt;BR /&gt;| | tuple-ids=42 row-size=101B cardinality=26977&lt;BR /&gt;| |&lt;BR /&gt;| 64:AGGREGATE [FINALIZE]&lt;BR /&gt;| | output: sum:merge(CASE WHEN carrier_count = 2 THEN samples ELSE 0 END), sum:merge(CAST(samples AS FLOAT)), sum:merge(CASE WHEN carrier_count = 3 THEN samples ELSE 0 END), sum:merge(CASE WHEN carrier_count &amp;gt; 1 THEN samples ELSE 0 END), sum:merge(CASE WHEN carrier_count &amp;gt; 1 THEN sum_total_bandwidth ELSE 0 END)&lt;BR /&gt;| | group by: campaign, market, carrier, sessiontype&lt;BR /&gt;| | hosts=10 per-host-mem=10.00MB&lt;BR /&gt;| | tuple-ids=42 row-size=101B cardinality=26977&lt;BR /&gt;| |&lt;BR /&gt;| 63:EXCHANGE [HASH(campaign,market,carrier,sessiontype)]&lt;BR /&gt;| | hosts=10 per-host-mem=0B&lt;BR /&gt;| | tuple-ids=42 row-size=101B cardinality=26977&lt;BR /&gt;| |&lt;BR /&gt;| 31:AGGREGATE [STREAMING]&lt;BR /&gt;| | output: sum(CASE WHEN carrier_count = 2 THEN samples ELSE 0 END), sum(CAST(samples AS FLOAT)), sum(CASE WHEN carrier_count = 3 THEN samples ELSE 0 END), sum(CASE WHEN carrier_count &amp;gt; 1 THEN samples ELSE 0 END), sum(CASE WHEN carrier_count &amp;gt; 1 THEN sum_total_bandwidth ELSE 0 END)&lt;BR /&gt;| | group by: campaign, market, carrier, sessiontype&lt;BR /&gt;| | hosts=10 per-host-mem=10.00MB&lt;BR /&gt;| | tuple-ids=42 
row-size=101B cardinality=26977&lt;BR /&gt;| |&lt;BR /&gt;| 16:UNION&lt;BR /&gt;| | hosts=10 per-host-mem=0B&lt;BR /&gt;| | tuple-ids=40 row-size=78B cardinality=26977&lt;BR /&gt;| |&lt;BR /&gt;| |--62:AGGREGATE [FINALIZE]&lt;BR /&gt;| | | output: sum:merge(ltebwcum), count:merge(*)&lt;BR /&gt;| | | group by: campaign, market, carrier, sessiontype, carrier_count, tech_mode&lt;BR /&gt;| | | having: tech_mode = 'LTECA'&lt;BR /&gt;| | | hosts=10 per-host-mem=10.00MB&lt;BR /&gt;| | | tuple-ids=38 row-size=94B cardinality=13400&lt;BR /&gt;| | |&lt;BR /&gt;| | 61:EXCHANGE [HASH(campaign,market,carrier,sessiontype,carrier_count,tech_mode)]&lt;BR /&gt;| | | hosts=10 per-host-mem=0B&lt;BR /&gt;| | | tuple-ids=38 row-size=94B cardinality=13400&lt;BR /&gt;| | |&lt;BR /&gt;| | 30:AGGREGATE [STREAMING]&lt;BR /&gt;| | | output: sum(ltebwcum), count(*)&lt;BR /&gt;| | | group by: a.campaign, a.market, a.carrier, CASE WHEN SESSIONTYPE = 'HTTPDL_CAPACITY_L' THEN 'HTTPDL_CAPACITY' ELSE SESSIONTYPE END, CASE WHEN l_pdschbytes_scc3 &amp;gt; 0 THEN 4 WHEN l_pdschbytes_scc2 &amp;gt; 0 THEN 3 WHEN l_pdschbytes_scc1 &amp;gt; 0 THEN 2 WHEN L_pdschbytes &amp;gt; 0 THEN 1 ELSE 0 END, CASE WHEN l_pdschbytes_scc3 &amp;gt; 0 THEN 'LTECA' WHEN l_pdschbytes_scc2 &amp;gt; 0 THEN 'LTECA' WHEN l_pdschbytes_scc1 &amp;gt; 0 THEN 'LTECA' WHEN L_pdschbytes &amp;gt; 0 THEN 'LTE' ELSE NULL END&lt;BR /&gt;| | | hosts=10 per-host-mem=10.00MB&lt;BR /&gt;| | | tuple-ids=38 row-size=94B cardinality=13400&lt;BR /&gt;| | |&lt;BR /&gt;| | 29:HASH JOIN [INNER JOIN, PARTITIONED]&lt;BR /&gt;| | | hash predicates: a.campaign = campaign, a.carrier = carrier, a.market = a.market, a.filename = filename&lt;BR /&gt;| | | other predicates: unix_timestamp(udfs.totimestamp(a.time_stamp)) &amp;lt;= unix_timestamp(udfs.totimestamp(concat(task_date, ' ', timeend))), unix_timestamp(udfs.totimestamp(a.time_stamp)) &amp;gt;= unix_timestamp(udfs.totimestamp(concat(task_date, ' ', timeinit)))&lt;BR /&gt;| | | runtime filters: RF015 
&amp;lt;- campaign, RF017 &amp;lt;- a.market, RF016 &amp;lt;- carrier, RF018 &amp;lt;- filename&lt;BR /&gt;| | | hosts=10 per-host-mem=1018.71KB&lt;BR /&gt;| | | tuple-ids=31,32,34 row-size=468B cardinality=13400&lt;BR /&gt;| | |&lt;BR /&gt;| | |--60:EXCHANGE [HASH(campaign,carrier,a.market,filename)]&lt;BR /&gt;| | | | hosts=10 per-host-mem=0B&lt;BR /&gt;| | | | tuple-ids=32,34 row-size=281B cardinality=33804&lt;BR /&gt;| | | |&lt;BR /&gt;| | | 28:HASH JOIN [INNER JOIN, BROADCAST]&lt;BR /&gt;| | | | hash predicates: a.market = market&lt;BR /&gt;| | | | hosts=10 per-host-mem=17.62KB&lt;BR /&gt;| | | | tuple-ids=32,34 row-size=281B cardinality=33804&lt;BR /&gt;| | | |&lt;BR /&gt;| | | |--58:EXCHANGE [BROADCAST]&lt;BR /&gt;| | | | | hosts=5 per-host-mem=0B&lt;BR /&gt;| | | | | tuple-ids=34 row-size=29B cardinality=569&lt;BR /&gt;| | | | |&lt;BR /&gt;| | | | 57:AGGREGATE [FINALIZE]&lt;BR /&gt;| | | | | group by: market&lt;BR /&gt;| | | | | hosts=5 per-host-mem=10.00MB&lt;BR /&gt;| | | | | tuple-ids=34 row-size=29B cardinality=569&lt;BR /&gt;| | | | |&lt;BR /&gt;| | | | 56:EXCHANGE [HASH(market)]&lt;BR /&gt;| | | | | hosts=5 per-host-mem=0B&lt;BR /&gt;| | | | | tuple-ids=34 row-size=29B cardinality=569&lt;BR /&gt;| | | | |&lt;BR /&gt;| | | | 27:AGGREGATE [STREAMING]&lt;BR /&gt;| | | | | group by: market&lt;BR /&gt;| | | | | hosts=5 per-host-mem=10.00MB&lt;BR /&gt;| | | | | tuple-ids=34 row-size=29B cardinality=569&lt;BR /&gt;| | | | |&lt;BR /&gt;| | | | 26:SCAN HDFS [mobistat.allstats_packet, RANDOM]&lt;BR /&gt;| | | | partitions=1/1 files=6 size=9.32MB&lt;BR /&gt;| | | | predicates: bbdo_approved = 1, campaign = '17D1'&lt;BR /&gt;| | | | table stats: 51137 rows total&lt;BR /&gt;| | | | column stats: all&lt;BR /&gt;| | | | hosts=5 per-host-mem=32.00MB&lt;BR /&gt;| | | | tuple-ids=33 row-size=53B cardinality=1065&lt;BR /&gt;| | | |&lt;BR /&gt;| | | 25:SCAN HDFS [mobistat.cdr_packet a, RANDOM]&lt;BR /&gt;| | | partitions=4680/10328 files=4681 size=4.70GB&lt;BR /&gt;| | | 
predicates: regexp_like(calldirection, 'HTTPDL_CAPACITY') = TRUE, regexp_like(endresult, 'HTTP SUCCESS') = TRUE, (modpctlte + isnull(modpctlteca, 0)) &amp;gt; 0.999&lt;BR /&gt;| | | table stats: 58699689 rows total&lt;BR /&gt;| | | column stats: all&lt;BR /&gt;| | | hosts=10 per-host-mem=304.00MB&lt;BR /&gt;| | | tuple-ids=32 row-size=252B cardinality=2888511&lt;BR /&gt;| | |&lt;BR /&gt;| | 59:EXCHANGE [HASH(a.campaign,a.carrier,a.market,a.filename)]&lt;BR /&gt;| | | hosts=10 per-host-mem=0B&lt;BR /&gt;| | | tuple-ids=31 row-size=187B cardinality=23267425&lt;BR /&gt;| | |&lt;BR /&gt;| | 24:SCAN HDFS [mobistat.psr_packet_cdma a, RANDOM]&lt;BR /&gt;| | partitions=2332/3707 files=2332 size=45.71GB&lt;BR /&gt;| | predicates: regexp_like(SESSIONTYPE, 'HTTPDL_CAPACITY') = TRUE, CASE WHEN l_pdschbytes_scc3 &amp;gt; 0 THEN 'LTECA' WHEN l_pdschbytes_scc2 &amp;gt; 0 THEN 'LTECA' WHEN l_pdschbytes_scc1 &amp;gt; 0 THEN 'LTECA' WHEN L_pdschbytes &amp;gt; 0 THEN 'LTE' ELSE NULL END IS NOT NULL&lt;BR /&gt;| | runtime filters: RF015 -&amp;gt; a.campaign, RF017 -&amp;gt; a.market, RF016 -&amp;gt; a.carrier, RF018 -&amp;gt; a.filename&lt;BR /&gt;| | table stats: 358488531 rows total&lt;BR /&gt;| | column stats: all&lt;BR /&gt;| | hosts=10 per-host-mem=608.00MB&lt;BR /&gt;| | tuple-ids=31 row-size=187B cardinality=23267425&lt;BR /&gt;| |&lt;BR /&gt;| 55:AGGREGATE [FINALIZE]&lt;BR /&gt;| | output: sum:merge(ltebwcum), count:merge(*)&lt;BR /&gt;| | group by: campaign, market, carrier, sessiontype, carrier_count, tech_mode&lt;BR /&gt;| | having: tech_mode = 'LTECA'&lt;BR /&gt;| | hosts=10 per-host-mem=10.00MB&lt;BR /&gt;| | tuple-ids=29 row-size=94B cardinality=13577&lt;BR /&gt;| |&lt;BR /&gt;| 54:EXCHANGE [HASH(campaign,market,carrier,sessiontype,carrier_count,tech_mode)]&lt;BR /&gt;| | hosts=10 per-host-mem=0B&lt;BR /&gt;| | tuple-ids=29 row-size=94B cardinality=13577&lt;BR /&gt;| |&lt;BR /&gt;| 23:AGGREGATE [STREAMING]&lt;BR /&gt;| | output: sum(ltebwcum), count(*)&lt;BR /&gt;| | 
group by: a.campaign, a.market, a.carrier, CASE WHEN SESSIONTYPE = 'HTTPDL_CAPACITY_L' THEN 'HTTPDL_CAPACITY' ELSE SESSIONTYPE END, CASE WHEN l_pdschbytes_scc3 &amp;gt; 0 THEN 4 WHEN l_pdschbytes_scc2 &amp;gt; 0 THEN 3 WHEN l_pdschbytes_scc1 &amp;gt; 0 THEN 2 WHEN L_pdschbytes &amp;gt; 0 THEN 1 ELSE 0 END, CASE WHEN l_pdschbytes_scc3 &amp;gt; 0 THEN 'LTECA' WHEN l_pdschbytes_scc2 &amp;gt; 0 THEN 'LTECA' WHEN l_pdschbytes_scc1 &amp;gt; 0 THEN 'LTECA' WHEN L_pdschbytes &amp;gt; 0 THEN 'LTE' ELSE NULL END&lt;BR /&gt;| | hosts=10 per-host-mem=10.00MB&lt;BR /&gt;| | tuple-ids=29 row-size=94B cardinality=13577&lt;BR /&gt;| |&lt;BR /&gt;| 22:HASH JOIN [INNER JOIN, PARTITIONED]&lt;BR /&gt;| | hash predicates: a.campaign = campaign, a.carrier = carrier, a.market = a.market, a.filename = filename&lt;BR /&gt;| | other predicates: unix_timestamp(udfs.totimestamp(a.time_stamp)) &amp;lt;= unix_timestamp(udfs.totimestamp(concat(task_date, ' ', timeend))), unix_timestamp(udfs.totimestamp(a.time_stamp)) &amp;gt;= unix_timestamp(udfs.totimestamp(concat(task_date, ' ', timeinit)))&lt;BR /&gt;| | hosts=10 per-host-mem=1018.71KB&lt;BR /&gt;| | tuple-ids=22,23,25 row-size=468B cardinality=13577&lt;BR /&gt;| |&lt;BR /&gt;| |--53:EXCHANGE [HASH(campaign,carrier,a.market,filename)]&lt;BR /&gt;| | | hosts=10 per-host-mem=0B&lt;BR /&gt;| | | tuple-ids=23,25 row-size=281B cardinality=33804&lt;BR /&gt;| | |&lt;BR /&gt;| | 21:HASH JOIN [INNER JOIN, BROADCAST]&lt;BR /&gt;| | | hash predicates: a.market = market&lt;BR /&gt;| | | hosts=10 per-host-mem=17.62KB&lt;BR /&gt;| | | tuple-ids=23,25 row-size=281B cardinality=33804&lt;BR /&gt;| | |&lt;BR /&gt;| | |--51:EXCHANGE [BROADCAST]&lt;BR /&gt;| | | | hosts=5 per-host-mem=0B&lt;BR /&gt;| | | | tuple-ids=25 row-size=29B cardinality=569&lt;BR /&gt;| | | |&lt;BR /&gt;| | | 50:AGGREGATE [FINALIZE]&lt;BR /&gt;| | | | group by: market&lt;BR /&gt;| | | | hosts=5 per-host-mem=10.00MB&lt;BR /&gt;| | | | tuple-ids=25 row-size=29B cardinality=569&lt;BR /&gt;| 
| | |&lt;BR /&gt;| | | 49:EXCHANGE [HASH(market)]&lt;BR /&gt;| | | | hosts=5 per-host-mem=0B&lt;BR /&gt;| | | | tuple-ids=25 row-size=29B cardinality=569&lt;BR /&gt;| | | |&lt;BR /&gt;| | | 20:AGGREGATE [STREAMING]&lt;BR /&gt;| | | | group by: market&lt;BR /&gt;| | | | hosts=5 per-host-mem=10.00MB&lt;BR /&gt;| | | | tuple-ids=25 row-size=29B cardinality=569&lt;BR /&gt;| | | |&lt;BR /&gt;| | | 19:SCAN HDFS [mobistat.allstats_packet, RANDOM]&lt;BR /&gt;| | | partitions=1/1 files=6 size=9.32MB&lt;BR /&gt;| | | predicates: bbdo_approved = 1, campaign = '17D1'&lt;BR /&gt;| | | table stats: 51137 rows total&lt;BR /&gt;| | | column stats: all&lt;BR /&gt;| | | hosts=5 per-host-mem=32.00MB&lt;BR /&gt;| | | tuple-ids=24 row-size=53B cardinality=1065&lt;BR /&gt;| | |&lt;BR /&gt;| | 18:SCAN HDFS [mobistat.cdr_packet a, RANDOM]&lt;BR /&gt;| | partitions=4680/10328 files=4681 size=4.70GB&lt;BR /&gt;| | predicates: regexp_like(calldirection, 'HTTPDL_CAPACITY') = TRUE, regexp_like(endresult, 'HTTP SUCCESS') = TRUE, (modpctlte + isnull(modpctlteca, 0)) &amp;gt; 0.999&lt;BR /&gt;| | table stats: 58699689 rows total&lt;BR /&gt;| | column stats: all&lt;BR /&gt;| | hosts=10 per-host-mem=304.00MB&lt;BR /&gt;| | tuple-ids=23 row-size=252B cardinality=2888511&lt;BR /&gt;| |&lt;BR /&gt;| 52:EXCHANGE [HASH(a.campaign,a.carrier,a.market,a.filename)]&lt;BR /&gt;| | hosts=10 per-host-mem=0B&lt;BR /&gt;| | tuple-ids=22 row-size=187B cardinality=23574499&lt;BR /&gt;| |&lt;BR /&gt;| 17:SCAN HDFS [mobistat.psr_packet_gsm a, RANDOM]&lt;BR /&gt;| partitions=2336/6581 files=2336 size=48.59GB&lt;BR /&gt;| predicates: regexp_like(SESSIONTYPE, 'HTTPDL_CAPACITY') = TRUE, CASE WHEN l_pdschbytes_scc3 &amp;gt; 0 THEN 'LTECA' WHEN l_pdschbytes_scc2 &amp;gt; 0 THEN 'LTECA' WHEN l_pdschbytes_scc1 &amp;gt; 0 THEN 'LTECA' WHEN L_pdschbytes &amp;gt; 0 THEN 'LTE' ELSE NULL END IS NOT NULL&lt;BR /&gt;| table stats: 636668107 rows total&lt;BR /&gt;| column stats: all&lt;BR /&gt;| hosts=10 per-host-mem=608.00MB&lt;BR 
/&gt;| tuple-ids=22 row-size=187B cardinality=23574499&lt;BR /&gt;|&lt;BR /&gt;48:AGGREGATE [FINALIZE]&lt;BR /&gt;| output: sum:merge(carrier_count * samples), sum:merge(samples), sum:merge(sum_256qam), sum:merge(sum_total_frames), sum:merge(sum_total_bandwidth), sum:merge(sum_4tx_samples)&lt;BR /&gt;| group by: campaign, market, carrier, sessiontype&lt;BR /&gt;| hosts=10 per-host-mem=10.00MB&lt;BR /&gt;| tuple-ids=20 row-size=109B cardinality=26977&lt;BR /&gt;|&lt;BR /&gt;47:EXCHANGE [HASH(campaign,market,carrier,sessiontype)]&lt;BR /&gt;| hosts=10 per-host-mem=0B&lt;BR /&gt;| tuple-ids=20 row-size=109B cardinality=26977&lt;BR /&gt;|&lt;BR /&gt;15:AGGREGATE [STREAMING]&lt;BR /&gt;| output: sum(carrier_count * samples), sum(samples), sum(sum_256qam), sum(sum_total_frames), sum(sum_total_bandwidth), sum(sum_4tx_samples)&lt;BR /&gt;| group by: campaign, market, carrier, sessiontype&lt;BR /&gt;| hosts=10 per-host-mem=10.00MB&lt;BR /&gt;| tuple-ids=20 row-size=109B cardinality=26977&lt;BR /&gt;|&lt;BR /&gt;00:UNION&lt;BR /&gt;| hosts=10 per-host-mem=0B&lt;BR /&gt;| tuple-ids=18 row-size=102B cardinality=26977&lt;BR /&gt;|&lt;BR /&gt;|--46:AGGREGATE [FINALIZE]&lt;BR /&gt;| | output: sum:merge(l_dlnum256qam), sum:merge(CAST(total_frames AS FLOAT)), sum:merge(ltebwcum), sum:merge(CASE WHEN l_dlmaxnumlayer = 4 THEN 1 ELSE 0 END), count:merge(*)&lt;BR /&gt;| | group by: campaign, market, carrier, sessiontype, carrier_count, tech_mode&lt;BR /&gt;| | hosts=10 per-host-mem=10.00MB&lt;BR /&gt;| | tuple-ids=16 row-size=118B cardinality=13400&lt;BR /&gt;| |&lt;BR /&gt;| 45:EXCHANGE [HASH(campaign,market,carrier,sessiontype,carrier_count,tech_mode)]&lt;BR /&gt;| | hosts=10 per-host-mem=0B&lt;BR /&gt;| | tuple-ids=16 row-size=118B cardinality=13400&lt;BR /&gt;| |&lt;BR /&gt;| 14:AGGREGATE [STREAMING]&lt;BR /&gt;| | output: sum(l_dlnum256qam), sum(CAST((l_dlnum256qam + l_dlnum64qam + l_dlnum16qam + l_dlnumqpsk) AS FLOAT)), sum(ltebwcum), sum(CASE WHEN a.l_dlmaxnumlayer = 4 THEN 1 
ELSE 0 END), count(*)&lt;BR /&gt;| | group by: a.campaign, a.market, a.carrier, CASE WHEN SESSIONTYPE = 'HTTPDL_CAPACITY_L' THEN 'HTTPDL_CAPACITY' ELSE SESSIONTYPE END, CASE WHEN l_pdschbytes_scc3 &amp;gt; 0 THEN 4 WHEN l_pdschbytes_scc2 &amp;gt; 0 THEN 3 WHEN l_pdschbytes_scc1 &amp;gt; 0 THEN 2 WHEN L_pdschbytes &amp;gt; 0 THEN 1 ELSE 0 END, CASE WHEN l_pdschbytes_scc3 &amp;gt; 0 THEN 'LTECA' WHEN l_pdschbytes_scc2 &amp;gt; 0 THEN 'LTECA' WHEN l_pdschbytes_scc1 &amp;gt; 0 THEN 'LTECA' WHEN L_pdschbytes &amp;gt; 0 THEN 'LTE' ELSE NULL END&lt;BR /&gt;| | hosts=10 per-host-mem=10.00MB&lt;BR /&gt;| | tuple-ids=16 row-size=118B cardinality=13400&lt;BR /&gt;| |&lt;BR /&gt;| 13:HASH JOIN [INNER JOIN, PARTITIONED]&lt;BR /&gt;| | hash predicates: a.campaign = campaign, a.carrier = carrier, a.market = a.market, a.filename = filename&lt;BR /&gt;| | other predicates: unix_timestamp(udfs.totimestamp(a.time_stamp)) &amp;lt;= unix_timestamp(udfs.totimestamp(concat(task_date, ' ', timeend))), unix_timestamp(udfs.totimestamp(a.time_stamp)) &amp;gt;= unix_timestamp(udfs.totimestamp(concat(task_date, ' ', timeinit)))&lt;BR /&gt;| | runtime filters: RF005 &amp;lt;- campaign, RF006 &amp;lt;- carrier, RF007 &amp;lt;- a.market, RF008 &amp;lt;- filename&lt;BR /&gt;| | hosts=10 per-host-mem=1018.71KB&lt;BR /&gt;| | tuple-ids=9,10,12 row-size=488B cardinality=13400&lt;BR /&gt;| |&lt;BR /&gt;| |--44:EXCHANGE [HASH(campaign,carrier,a.market,filename)]&lt;BR /&gt;| | | hosts=10 per-host-mem=0B&lt;BR /&gt;| | | tuple-ids=10,12 row-size=281B cardinality=33804&lt;BR /&gt;| | |&lt;BR /&gt;| | 12:HASH JOIN [INNER JOIN, BROADCAST]&lt;BR /&gt;| | | hash predicates: a.market = market&lt;BR /&gt;| | | hosts=10 per-host-mem=17.62KB&lt;BR /&gt;| | | tuple-ids=10,12 row-size=281B cardinality=33804&lt;BR /&gt;| | |&lt;BR /&gt;| | |--42:EXCHANGE [BROADCAST]&lt;BR /&gt;| | | | hosts=5 per-host-mem=0B&lt;BR /&gt;| | | | tuple-ids=12 row-size=29B cardinality=569&lt;BR /&gt;| | | |&lt;BR /&gt;| | | 
41:AGGREGATE [FINALIZE]&lt;BR /&gt;| | | | group by: market&lt;BR /&gt;| | | | hosts=5 per-host-mem=10.00MB&lt;BR /&gt;| | | | tuple-ids=12 row-size=29B cardinality=569&lt;BR /&gt;| | | |&lt;BR /&gt;| | | 40:EXCHANGE [HASH(market)]&lt;BR /&gt;| | | | hosts=5 per-host-mem=0B&lt;BR /&gt;| | | | tuple-ids=12 row-size=29B cardinality=569&lt;BR /&gt;| | | |&lt;BR /&gt;| | | 11:AGGREGATE [STREAMING]&lt;BR /&gt;| | | | group by: market&lt;BR /&gt;| | | | hosts=5 per-host-mem=10.00MB&lt;BR /&gt;| | | | tuple-ids=12 row-size=29B cardinality=569&lt;BR /&gt;| | | |&lt;BR /&gt;| | | 10:SCAN HDFS [mobistat.allstats_packet, RANDOM]&lt;BR /&gt;| | | partitions=1/1 files=6 size=9.32MB&lt;BR /&gt;| | | predicates: bbdo_approved = 1, campaign = '17D1'&lt;BR /&gt;| | | table stats: 51137 rows total&lt;BR /&gt;| | | column stats: all&lt;BR /&gt;| | | hosts=5 per-host-mem=32.00MB&lt;BR /&gt;| | | tuple-ids=11 row-size=53B cardinality=1065&lt;BR /&gt;| | |&lt;BR /&gt;| | 09:SCAN HDFS [mobistat.cdr_packet a, RANDOM]&lt;BR /&gt;| | partitions=4680/10328 files=4681 size=4.70GB&lt;BR /&gt;| | predicates: regexp_like(calldirection, 'HTTPDL_CAPACITY') = TRUE, regexp_like(endresult, 'HTTP SUCCESS') = TRUE, (modpctlte + isnull(modpctlteca, 0)) &amp;gt; 0.999&lt;BR /&gt;| | table stats: 58699689 rows total&lt;BR /&gt;| | column stats: all&lt;BR /&gt;| | hosts=10 per-host-mem=304.00MB&lt;BR /&gt;| | tuple-ids=10 row-size=252B cardinality=2888511&lt;BR /&gt;| |&lt;BR /&gt;| 43:EXCHANGE [HASH(a.campaign,a.carrier,a.market,a.filename)]&lt;BR /&gt;| | hosts=10 per-host-mem=0B&lt;BR /&gt;| | tuple-ids=9 row-size=207B cardinality=23267425&lt;BR /&gt;| |&lt;BR /&gt;| 08:SCAN HDFS [mobistat.psr_packet_cdma a, RANDOM]&lt;BR /&gt;| partitions=2332/3707 files=2332 size=45.71GB&lt;BR /&gt;| predicates: regexp_like(SESSIONTYPE, 'HTTPDL_CAPACITY') = TRUE, CASE WHEN l_pdschbytes_scc3 &amp;gt; 0 THEN 'LTECA' WHEN l_pdschbytes_scc2 &amp;gt; 0 THEN 'LTECA' WHEN l_pdschbytes_scc1 &amp;gt; 0 THEN 'LTECA' WHEN 
L_pdschbytes &amp;gt; 0 THEN 'LTE' ELSE NULL END IS NOT NULL&lt;BR /&gt;| runtime filters: RF005 -&amp;gt; a.campaign, RF006 -&amp;gt; a.carrier, RF007 -&amp;gt; a.market, RF008 -&amp;gt; a.filename&lt;BR /&gt;| table stats: 358488531 rows total&lt;BR /&gt;| column stats: all&lt;BR /&gt;| hosts=10 per-host-mem=608.00MB&lt;BR /&gt;| tuple-ids=9 row-size=207B cardinality=23267425&lt;BR /&gt;|&lt;BR /&gt;39:AGGREGATE [FINALIZE]&lt;BR /&gt;| output: sum:merge(l_dlnum256qam), sum:merge(CAST(total_frames AS FLOAT)), sum:merge(ltebwcum), sum:merge(CASE WHEN l_dlmaxnumlayer = 4 THEN 1 ELSE 0 END), count:merge(*)&lt;BR /&gt;| group by: campaign, market, carrier, sessiontype, carrier_count, tech_mode&lt;BR /&gt;| hosts=10 per-host-mem=10.00MB&lt;BR /&gt;| tuple-ids=7 row-size=118B cardinality=13577&lt;BR /&gt;|&lt;BR /&gt;38:EXCHANGE [HASH(campaign,market,carrier,sessiontype,carrier_count,tech_mode)]&lt;BR /&gt;| hosts=10 per-host-mem=0B&lt;BR /&gt;| tuple-ids=7 row-size=118B cardinality=13577&lt;BR /&gt;|&lt;BR /&gt;07:AGGREGATE [STREAMING]&lt;BR /&gt;| output: sum(l_dlnum256qam), sum(CAST((l_dlnum256qam + l_dlnum64qam + l_dlnum16qam + l_dlnumqpsk) AS FLOAT)), sum(ltebwcum), sum(CASE WHEN a.l_dlmaxnumlayer = 4 THEN 1 ELSE 0 END), count(*)&lt;BR /&gt;| group by: a.campaign, a.market, a.carrier, CASE WHEN SESSIONTYPE = 'HTTPDL_CAPACITY_L' THEN 'HTTPDL_CAPACITY' ELSE SESSIONTYPE END, CASE WHEN l_pdschbytes_scc3 &amp;gt; 0 THEN 4 WHEN l_pdschbytes_scc2 &amp;gt; 0 THEN 3 WHEN l_pdschbytes_scc1 &amp;gt; 0 THEN 2 WHEN L_pdschbytes &amp;gt; 0 THEN 1 ELSE 0 END, CASE WHEN l_pdschbytes_scc3 &amp;gt; 0 THEN 'LTECA' WHEN l_pdschbytes_scc2 &amp;gt; 0 THEN 'LTECA' WHEN l_pdschbytes_scc1 &amp;gt; 0 THEN 'LTECA' WHEN L_pdschbytes &amp;gt; 0 THEN 'LTE' ELSE NULL END&lt;BR /&gt;| hosts=10 per-host-mem=10.00MB&lt;BR /&gt;| tuple-ids=7 row-size=118B cardinality=13577&lt;BR /&gt;|&lt;BR /&gt;06:HASH JOIN [INNER JOIN, PARTITIONED]&lt;BR /&gt;| hash predicates: a.campaign = campaign, a.carrier = 
carrier, a.market = a.market, a.filename = filename&lt;BR /&gt;| other predicates: unix_timestamp(udfs.totimestamp(a.time_stamp)) &amp;lt;= unix_timestamp(udfs.totimestamp(concat(task_date, ' ', timeend))), unix_timestamp(udfs.totimestamp(a.time_stamp)) &amp;gt;= unix_timestamp(udfs.totimestamp(concat(task_date, ' ', timeinit)))&lt;BR /&gt;| runtime filters: RF000 &amp;lt;- campaign, RF001 &amp;lt;- carrier&lt;BR /&gt;| hosts=10 per-host-mem=1018.71KB&lt;BR /&gt;| tuple-ids=0,1,3 row-size=488B cardinality=13577&lt;BR /&gt;|&lt;BR /&gt;|--37:EXCHANGE [HASH(campaign,carrier,a.market,filename)]&lt;BR /&gt;| | hosts=10 per-host-mem=0B&lt;BR /&gt;| | tuple-ids=1,3 row-size=281B cardinality=33804&lt;BR /&gt;| |&lt;BR /&gt;| 05:HASH JOIN [INNER JOIN, BROADCAST]&lt;BR /&gt;| | hash predicates: a.market = market&lt;BR /&gt;| | hosts=10 per-host-mem=17.62KB&lt;BR /&gt;| | tuple-ids=1,3 row-size=281B cardinality=33804&lt;BR /&gt;| |&lt;BR /&gt;| |--35:EXCHANGE [BROADCAST]&lt;BR /&gt;| | | hosts=5 per-host-mem=0B&lt;BR /&gt;| | | tuple-ids=3 row-size=29B cardinality=569&lt;BR /&gt;| | |&lt;BR /&gt;| | 34:AGGREGATE [FINALIZE]&lt;BR /&gt;| | | group by: market&lt;BR /&gt;| | | hosts=5 per-host-mem=10.00MB&lt;BR /&gt;| | | tuple-ids=3 row-size=29B cardinality=569&lt;BR /&gt;| | |&lt;BR /&gt;| | 33:EXCHANGE [HASH(market)]&lt;BR /&gt;| | | hosts=5 per-host-mem=0B&lt;BR /&gt;| | | tuple-ids=3 row-size=29B cardinality=569&lt;BR /&gt;| | |&lt;BR /&gt;| | 04:AGGREGATE [STREAMING]&lt;BR /&gt;| | | group by: market&lt;BR /&gt;| | | hosts=5 per-host-mem=10.00MB&lt;BR /&gt;| | | tuple-ids=3 row-size=29B cardinality=569&lt;BR /&gt;| | |&lt;BR /&gt;| | 03:SCAN HDFS [mobistat.allstats_packet, RANDOM]&lt;BR /&gt;| | partitions=1/1 files=6 size=9.32MB&lt;BR /&gt;| | predicates: bbdo_approved = 1, campaign = '17D1'&lt;BR /&gt;| | table stats: 51137 rows total&lt;BR /&gt;| | column stats: all&lt;BR /&gt;| | hosts=5 per-host-mem=32.00MB&lt;BR /&gt;| | tuple-ids=2 row-size=53B 
cardinality=1065&lt;BR /&gt;| |&lt;BR /&gt;| 02:SCAN HDFS [mobistat.cdr_packet a, RANDOM]&lt;BR /&gt;| partitions=4680/10328 files=4681 size=4.70GB&lt;BR /&gt;| predicates: regexp_like(calldirection, 'HTTPDL_CAPACITY') = TRUE, regexp_like(endresult, 'HTTP SUCCESS') = TRUE, (modpctlte + isnull(modpctlteca, 0)) &amp;gt; 0.999&lt;BR /&gt;| table stats: 58699689 rows total&lt;BR /&gt;| column stats: all&lt;BR /&gt;| hosts=10 per-host-mem=304.00MB&lt;BR /&gt;| tuple-ids=1 row-size=252B cardinality=2888511&lt;BR /&gt;|&lt;BR /&gt;36:EXCHANGE [HASH(a.campaign,a.carrier,a.market,a.filename)]&lt;BR /&gt;| hosts=10 per-host-mem=0B&lt;BR /&gt;| tuple-ids=0 row-size=207B cardinality=23574499&lt;BR /&gt;|&lt;BR /&gt;01:SCAN HDFS [mobistat.psr_packet_gsm a, RANDOM]&lt;BR /&gt;partitions=2336/6581 files=2336 size=48.59GB&lt;BR /&gt;predicates: regexp_like(SESSIONTYPE, 'HTTPDL_CAPACITY') = TRUE, CASE WHEN l_pdschbytes_scc3 &amp;gt; 0 THEN 'LTECA' WHEN l_pdschbytes_scc2 &amp;gt; 0 THEN 'LTECA' WHEN l_pdschbytes_scc1 &amp;gt; 0 THEN 'LTECA' WHEN L_pdschbytes &amp;gt; 0 THEN 'LTE' ELSE NULL END IS NOT NULL&lt;BR /&gt;runtime filters: RF000 -&amp;gt; a.campaign, RF001 -&amp;gt; a.carrier&lt;BR /&gt;table stats: 636668107 rows total&lt;BR /&gt;column stats: all&lt;BR /&gt;hosts=10 per-host-mem=608.00MB&lt;BR /&gt;tuple-ids=0 row-size=207B cardinality=23574499</description>
      <pubDate>Thu, 25 May 2017 19:29:43 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/55069#M15625</guid>
      <dc:creator>krishnat</dc:creator>
      <dc:date>2017-05-25T19:29:43Z</dc:date>
    </item>
    <item>
      <title>Re: impala  memory limit exceed</title>
      <link>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/55071#M15626</link>
      <description>Tim - See the next post for the TEXT PLAN. Please let me know if you figure out what's causing the error message.</description>
      <pubDate>Thu, 25 May 2017 19:34:01 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/55071#M15626</guid>
      <dc:creator>krishnat</dc:creator>
      <dc:date>2017-05-25T19:34:01Z</dc:date>
    </item>
    <item>
      <title>Re: impala  memory limit exceed</title>
      <link>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/55082#M15627</link>
      <description>&lt;P&gt;That query probably has multiple big joins and aggregations, so it needs more memory to complete. A very rough rule of thumb for minimum memory in releases CDH5.9-CDH5.12 is the following:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;For each hash join, the minimum of 150MB or the amount of data on the right side of the join (e.g. if you have a few thousand rows on the right side, maybe a MB or two).&lt;/LI&gt;&lt;LI&gt;For each merge aggregation, the minimum of 300MB or the size of the grouped data in memory (e.g. if you only have a few thousand groups, maybe a MB or two).&lt;/LI&gt;&lt;LI&gt;For each sort, about 50-60MB.&lt;/LI&gt;&lt;LI&gt;For each analytic, about 20MB.&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;If you add all those up and then add another 25%, you'll get a ballpark number for how much memory the query will require to execute.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'm working on reducing those numbers and making the system give a clearer yes/no answer on whether it can run the query before it starts executing.&lt;/P&gt;</description>
      <pubDate>Thu, 25 May 2017 21:20:25 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/impala-memory-limit-exceed/m-p/55082#M15627</guid>
      <dc:creator>Tim Armstrong</dc:creator>
      <dc:date>2017-05-25T21:20:25Z</dc:date>
    </item>
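The rule of thumb in the reply above (sum the per-operator minimums, then add 25%) can be sketched as a quick back-of-the-envelope calculation. This is not Impala's actual admission-control logic; the function name and the operator counts in the example are hypothetical, read off an EXPLAIN plan by hand:

```python
def ballpark_memory_mb(hash_joins=0, merge_aggs=0, sorts=0, analytics=0,
                       join_build_mb=150.0, agg_group_mb=300.0):
    """Ballpark per-host minimum memory (MB) per the CDH5.9-5.12 rule of thumb.

    join_build_mb / agg_group_mb default to the 150MB / 300MB caps;
    pass smaller values when the join build side or grouped data is small.
    """
    total = (hash_joins * min(150.0, join_build_mb)     # per hash join
             + merge_aggs * min(300.0, agg_group_mb)    # per merge aggregation
             + sorts * 55.0                             # "about 50-60MB" per sort
             + analytics * 20.0)                        # "about 20MB" per analytic
    return total * 1.25                                 # add another 25% margin

# e.g. a plan with 8 hash joins and 10 merge aggregations, all at their caps:
# 8*150 + 10*300 = 4200 MB, plus 25% = 5250 MB (~5.1 GiB per host)
print(ballpark_memory_mb(hash_joins=8, merge_aggs=10))
```

Comparing such an estimate against the pool's per-host MEM_LIMIT gives a rough yes/no on whether the query can run without spilling failures like the one reported in this thread.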
  </channel>
</rss>

