I see that the following error is triggered regularly on my cluster:
> Memory limit exceeded
> FunctionContextImpl::AllocateLocal's allocations exceeded memory limits
This error appears when:
- the requests address a very large number of tuples
- the requests seem to use only built-in UDFs/UDAFs like SUM, COUNT, or DISTINCT
I have noticed that if I rerun the request, it then succeeds almost every time. Note that I use HAProxy with a lot of impalads behind it.
1) Is FunctionContextImpl::AllocateLocal called by these built-in UDAFs? Is this a typical error meaning that the impalad is running out of memory for the current request? If so, could increasing the memory of each impalad solve the problem easily?
2) Could the problem be related to memory leaks?
I mean that I also have custom C++ UDFs that are used by other Impala requests. Those requests (which address a smaller number of tuples) succeed. However, in those UDFs I use FunctionContextImpl::Allocate/Free (not AllocateLocal), and the StringVal constructor that takes a context parameter.
So basically, if a memory leak is actually happening in the custom UDFs, could it be related to the previous error, i.e. the "AllocateLocal's allocations exceeded memory" error that occurs on requests which do not use the custom UDFs?
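For reference, here is a minimal sketch of the allocation pattern my custom UDFs follow (the names and sizes are made up for illustration, this is not my real code; it just uses the public Impala UDF SDK calls from impala_udf/udf.h that I mentioned above):

```cpp
#include <cstring>
#include <impala_udf/udf.h>

using namespace impala_udf;

// Prepare: allocate a scratch buffer through the context so it is tracked
// by Impala's memory accounting (hypothetical buffer and size).
void MyUdfPrepare(FunctionContext* context,
                  FunctionContext::FunctionStateScope scope) {
  if (scope == FunctionContext::FRAGMENT_LOCAL) {
    uint8_t* scratch = context->Allocate(1024);
    if (scratch == NULL) return;  // allocation failure is reported on the context
    context->SetFunctionState(scope, scratch);
  }
}

// Per-row function: the result buffer comes from the StringVal(context, len)
// constructor, i.e. it is also context-managed.
StringVal MyUdf(FunctionContext* context, const StringVal& input) {
  if (input.is_null) return StringVal::null();
  StringVal result(context, input.len);
  if (result.is_null) return StringVal::null();  // allocation can fail under memory pressure
  memcpy(result.ptr, input.ptr, input.len);
  return result;
}

// Close: free what Allocate() returned; the StringVal(context, len) buffers
// are freed by Impala itself.
void MyUdfClose(FunctionContext* context,
                FunctionContext::FunctionStateScope scope) {
  if (scope == FunctionContext::FRAGMENT_LOCAL) {
    uint8_t* scratch =
        reinterpret_cast<uint8_t*>(context->GetFunctionState(scope));
    context->Free(scratch);
    context->SetFunctionState(scope, NULL);
  }
}
```

My worry is whether a Free() that is missed somewhere in code following this pattern could make memory tracked on the impalad accumulate and eventually cause the "exceeded memory limits" error on unrelated queries.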