Contributor
Posts: 30
Registered: ‎04-07-2016

Compute incremental stats "out of memory" HdfsScanNode exceed

Hi,

I am trying to compute incremental stats for a large table (~200 GB),
but I get an out-of-memory error:

Memory limit exceeded The memory limit is set too low to initialize spilling operator (id=3). The minimum required memory to spill this operator is 272.00 MB.
 
This is a bit strange, because the memory limit is set to 80 GB both on the daemon and in the shell.
Anyway, I investigated a bit more and found this in the logs:
W0124 12:13:21.800235  6746 HdfsScanNode.java:654] Per-host mem cost 8.25GB exceeded per-host upper bound 7.50GB.
I get the same error whether I run it for the whole table or for just one partition at a time.
I couldn't find a parameter to increase the HdfsScanNode upper bound.
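For reference, these are the statements I am running (the partition column and value below are placeholders for my real ones):

```sql
-- Compute incremental stats for the whole table (fails with the error above):
COMPUTE INCREMENTAL STATS my_table;

-- Compute incremental stats for a single partition at a time (same error);
-- 'part_col' and 'some_value' stand in for the actual partition key/value.
COMPUTE INCREMENTAL STATS my_table PARTITION (part_col = 'some_value');
```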

Any idea on how I could solve that?

thanks

PS: I am using CDH 5.9.1.


 
Cloudera Employee
Posts: 279
Registered: ‎10-16-2013

Re: Compute incremental stats "out of memory" HdfsScanNode exceed

Hi Maurin,

Could you share the query profile?

The warning in HdfsScanNode only pertains to the memory requirements estimated by the planner. It does not affect runtime memory allocation in any way, so it is unrelated to the failure of COMPUTE INCREMENTAL STATS.

 

I suspect that your memory limit is somehow not being applied correctly.

Which client are you using? Are you behind a load balancer?
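One quick way to check, assuming you can run impala-shell against the same daemon (MEM_LIMIT is the standard Impala query option; the 80g value is just your configured limit, not a recommendation):

```sql
-- In impala-shell, SET with no arguments lists the current query options.
-- MEM_LIMIT shows the effective per-query memory limit for this session
-- (0 means unset, i.e. the daemon-level limit applies).
SET;

-- Explicitly set the limit for the session, then retry:
SET MEM_LIMIT=80g;
COMPUTE INCREMENTAL STATS my_table;
```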

 

Alex

Contributor
Posts: 30
Registered: ‎04-07-2016

Re: Compute incremental stats "out of memory" HdfsScanNode exceed

Hi,
I am running the query by connecting directly to Impala with impala-shell on one of the daemon machines. The only thing we have in front of Impala is HAProxy; there is no other load balancer, and this connection does not go through it.
This is what I get with the profile:

compute incremental stats my_table;
Query: compute incremental stats my_table
WARNINGS:
Memory limit exceeded
The memory limit is set too low to initialize spilling operator (id=3). The minimum required memory to spill this operator is 272.00 MB.


Column some_column_name does not have statistics, recomputing stats for the whole table
[my_machine:21000] > profile;
Query Runtime Profile:
Query (id=c2428c691af2dcaa:55837fed00000000):
  Summary:
    Session ID: bb4992202447c47e:fe9f65a64f6da581
    Session Type: BEESWAX
    Start Time: 2017-01-25 10:58:31.528250000
    End Time: 2017-01-25 10:59:00.884573000
    Query Type: DDL
    Query State: EXCEPTION
    Query Status:
Memory limit exceeded
The memory limit is set too low to initialize spilling operator (id=3). The minimum required memory to spill this operator is 272.00 MB.


    Impala Version: impalad version 2.7.0-cdh5.9.1 RELEASE (build 24ad6df788d66e4af9496edb26ac4d1f1d2a1f2c)
    User: my_user
    Connected User: my_user
    Delegated User:
    Network Address: ::ffff:172.16.0.221:46893
    Default Db: my_db
    Sql Statement: compute incremental stats my_table
    Coordinator: my_machine:22000
    Query Options (non default):
    DDL Type: COMPUTE_STATS
    : 0.000ns
    Query Timeline: 29s356ms
       - Start execution: 66.160us (66.160us)
       - Planning finished: 28.190ms (28.124ms)
       - Request finished: 29s090ms (29s062ms)
       - Unregister query: 29s356ms (265.969ms)
  ImpalaServer:
     - ClientFetchWaitTimer: 0.000ns
     - RowMaterializationTimer: 0.000ns
Contributor
Posts: 30
Registered: ‎04-07-2016

Re: Compute incremental stats "out of memory" HdfsScanNode exceed

Were you able to look at the profile, by any chance?
Thanks