
Impala using SWAP despite Kill When Out of Memory set to active

Rising Star

Hi,

I have various hosts which start to use swap, and this happens when Impala goes beyond its mem_limit.

At present mem_limit is set to 128GB per data node; however, as the chart below shows, it spiked above this, which then triggered the swap usage (which does not get released).

My question is: why is Impala using more memory than allowed and going to swap when I have set Kill When Out of Memory to be active? Swappiness is set to 1.

[Attachment: Impala memory.PNG]

6 REPLIES

Expert Contributor

Hi,


The decision of whether to use swap is made by the kernel; Impala has no control over it. If the impalad running on a node gets close to the configured mem_limit, queries will error out with a "memory limit exceeded" error rather than the daemon allocating beyond the limit.
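For example (a sketch with placeholder host and table names), a per-query limit can also be set from impala-shell, and a query that exceeds it fails instead of growing the daemon's memory:

impala-shell -i impalad-host.example.com
SET MEM_LIMIT=2gb;                -- cap this session's queries at 2 GB
SELECT COUNT(*) FROM big_table;   -- fails with a "memory limit exceeded" error rather than swapping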


The link below explains when swapping occurs:


http://careers.directi.com/display/tu/Understanding+and+optimizing+Memory+utilization

Quote from the link:
==============
Swapping occurs in one of two scenarios:
* When the kernel needs to allocate a page of memory to a process and finds that there is no memory available. In this case the kernel must swap out the least-used pages of an existing process into the swap space (on disk) and allocate those page frames to the requesting process.
* There is a kernel parameter that determines the swappiness of the kernel. The value is between 0 and 100 and is set to around 60 by default. A value of 100 means that the kernel will be considerably aggressive when it comes to preferring allocation of memory to disk cache over processes. A value of 60 can result in occasional swapping out of process-owned pages onto disk to make room for additional pages for the disk cache.
==============
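To see which processes actually have pages swapped out on a node (a quick sketch; VmSwap is reported per process by the Linux kernel, and pgrep -o assumes a single impalad on the host):

grep VmSwap /proc/$(pgrep -o impalad)/status                              # swap used by the impalad process
awk '/VmSwap/ {print FILENAME, $2, $3}' /proc/[0-9]*/status 2>/dev/null   # swap used by every process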


These are indications of the node's memory being fully utilised, and the next step would be to investigate:

1. If the node is overcommitted with multiple roles, offload some roles to other nodes.

2. If you notice the node swapping to disk even when there is enough free RAM, check whether NUMA is disabled; this post describes such scenarios: https://blog.jcole.us/2010/09/28/mysql-swap-insanity-and-the-numa-architecture/ (a few commands for this check follow the list).

3. If the cluster is suffering from overload, it is best to think about scaling out.
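For the NUMA check in point 2, a few commands (a sketch; they require the numactl package, and pgrep -o assumes a single impalad on the node):

numactl --hardware                 # NUMA topology and free memory per node
numastat                           # per-node allocation hit/miss counters
numastat -p $(pgrep -o impalad)    # per-node memory usage of the impalad process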

Champion

To add to @venkatsambath's points:


Cloudera recommends vm.swappiness=10 on RHEL 6.4 or any higher kernel version, for better performance.

The current value can be read from /proc/sys/vm/swappiness.

https://www.cloudera.com/documentation/enterprise/5-3-x/topics/cdh_admin_performance.html
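A minimal sketch of checking and applying that recommendation (the value 10 follows the guidance above; adjust for your environment):

cat /proc/sys/vm/swappiness                               # show the current value
sudo sysctl -w vm.swappiness=10                           # apply immediately (lost on reboot)
echo 'vm.swappiness = 10' | sudo tee -a /etc/sysctl.conf  # persist across reboots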

Rising Star

Thanks for the response, it's a great help.

One thing I still don't understand is why the Impala daemon resident memory chart I attached shows the memory going above the Impala memory limit.


@chriswalton007 Not sure if this is what you're seeing, but the Impala daemon memory limit does not include the embedded JVM: https://issues.apache.org/jira/browse/IMPALA-691
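If it helps to confirm, each impalad serves a debug web UI (port 25000 by default) whose /memz page shows the daemon's memory breakdown; the host name below is a placeholder:

curl http://impalad-host.example.com:25000/memz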

Rising Star

Thanks Tim, it's possible that is the case.

Am I correct in thinking there is no option to set the JVM memory size limit for Impala? I can't seem to find it in Cloudera Manager.

In the bug link you provided it refers to FE (JVM). Is FE an acronym for something?
