
YARN-4714

New Contributor

Hi!

We’re facing YARN-4714 and swap memory issues, as shown in the attached screenshots.

Increasing ‘spark.yarn.executor.memoryOverhead’ from its default to ‘0.6’ solved the issue for our Spark application on the NameNode.

Do you think it would be reasonable to set the value to ‘0.6-0.75’ in ‘yarn-site.xml’, cluster-wide, for both NameNodes and DataNodes?

Thank you in advance for your response.

Attachments:
ClouderaManager_HDFS_SwapMemoryIssues.PNG
DataNode_SwampMemory.PNG
HDFS_saveTable_YARN4714.PNG

3 REPLIES

Community Manager

@StanislavJ, welcome to our community! To help you get the best possible answer, I have tagged our YARN expert @Babasaheb, who may be able to assist you further.

Please feel free to provide any additional information or details about your query, and we hope that you will find a satisfactory solution to your question.



Regards,

Vidya Sargur,
Community Manager


Was your question answered? Make sure to mark the answer as the accepted solution.
If you find a reply useful, say thanks by clicking on the thumbs up button.
Learn more about the Cloudera Community:

Expert Contributor

Hello @StanislavJ ,

The Linux kernel parameter, vm.swappiness, is a value from 0-100 that controls the swapping of application data (as anonymous pages) from physical memory to virtual memory on disk. The higher the value, the more aggressively inactive processes are swapped out from physical memory. The lower the value, the less they are swapped, forcing filesystem buffers to be emptied.

On most systems, vm.swappiness is set to 60 by default. This is not suitable for Hadoop clusters because processes are sometimes swapped even when enough memory is available. This can cause lengthy garbage collection pauses for important system daemons, affecting stability and performance.

Cloudera recommends that you set vm.swappiness to a value between 1 and 10, preferably 1, for minimum swapping on systems where the RHEL kernel is 2.6.32-642.el6 or higher.

To view your current setting for vm.swappiness, run:

cat /proc/sys/vm/swappiness

To set vm.swappiness to 1, run:

sudo sysctl -w vm.swappiness=1
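
Note that sysctl -w takes effect immediately but is lost at reboot. Here is a small sketch (assuming a standard Linux /proc layout) that checks the current value against the 1-10 recommendation; the persistence step needs root, so it is shown only as a comment:

```shell
# Read the live value (standard path on Linux hosts).
current=$(cat /proc/sys/vm/swappiness)

if [ "$current" -ge 1 ] && [ "$current" -le 10 ]; then
    echo "vm.swappiness=$current: within the recommended 1-10 range"
else
    echo "vm.swappiness=$current: consider lowering to 1 on Hadoop hosts"
fi

# To persist the setting across reboots (requires root), add the line
#   vm.swappiness = 1
# to /etc/sysctl.conf and reload it with: sudo sysctl -p
```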

Also, to give an overview: swapping alerts are generated in Cloudera Manager when host swapping or role-process swap usage exceeds a defined threshold.

A warning threshold of "500 MiB" means that any swap usage beyond that amount on a given host generates a warning alert; if the critical threshold is set to "Any", a critical alert is generated even when only a small amount of swapping occurs. The swap memory usage threshold can be set at the "host" level or at the "process/service" level.

To set the threshold at the process level:

From the CM UI, go to Clusters >> yarn >> Configuration and search for "Process Swap Memory Thresholds". For the ResourceManager, set the Warning and Critical thresholds by selecting "Specify" and entering a value (in Bytes/KB/MB/GB), then save the changes.

You can increase the threshold, but I would further suggest you monitor the cluster's swap usage and adjust the values accordingly.
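
To cross-check what Cloudera Manager reports, you can also look at swap usage directly on a host. A sketch using only standard /proc files (no CM involved; the top-5 cutoff is arbitrary):

```shell
# Host-wide swap totals (values reported in kB).
grep -E '^Swap(Total|Free):' /proc/meminfo

# Per-process swap usage (VmSwap, in kB) for the largest consumers.
for status in /proc/[0-9]*/status; do
    pid=${status#/proc/}; pid=${pid%/status}
    swap=$(awk '/^VmSwap:/ {print $2}' "$status" 2>/dev/null)
    if [ -n "$swap" ] && [ "$swap" -gt 0 ]; then
        echo "$swap kB  pid=$pid"
    fi
done | sort -rn | head -5
```

Processes that never swapped have no VmSwap line (or report 0), so they are filtered out before sorting.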

If this response helped with your query, please take a moment to log in and click KUDOS 🙂 and "Accept as Solution" below this post.

Thank you,

Babasaheb Jagtap

Community Manager

@StanislavJ , Did the response assist in resolving your query? If it did, kindly mark the relevant reply as the solution, as it will aid others in locating the answer more easily in the future. 



Regards,

Vidya Sargur,
Community Manager
