
Recommended value for vm.overcommit_memory for a Spark cluster?

Expert Contributor

What is the recommended configuration for the vm.overcommit_memory [0|1|2] and vm.overcommit_ratio settings in sysctl?

Looking specifically for a Spark cluster.

I found the following link, which suggests vm.overcommit_memory should be set to 1 for MapReduce streaming use cases:

https://www.safaribooksonline.com/library/view/hadoop-operations/9781449327279/ch04.html

Do we have any best practices around this?
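For reference, the current values on a node can be checked directly; the values shown in the comments below are just common distribution defaults, not a recommendation:

cat /proc/sys/vm/overcommit_memory   # usually 0 (heuristic overcommit) by default
cat /proc/sys/vm/overcommit_ratio    # usually 50 by default; only consulted when overcommit_memory=2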

1 ACCEPTED SOLUTION

Master Mentor

@Laurence Da Luz

This is very useful: https://www.kernel.org/doc/Documentation/vm/overcommit-accounting

And you are right: the best practice is to set vm.overcommit_memory to 1.
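A minimal sketch of how that is usually applied on each worker node (run as root; the sysctl.d file name below is just an example):

sysctl -w vm.overcommit_memory=1                                    # apply to the running kernel immediately
echo "vm.overcommit_memory = 1" >> /etc/sysctl.d/99-hadoop.conf     # persist across reboots (illustrative file name)
sysctl -p /etc/sysctl.d/99-hadoop.conf                              # reload the persisted setting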


5 REPLIES

Master Mentor

@Laurence Da Luz can you accept the answer?


Explorer

This answer incorrectly summarized the content of the referenced link. The resource suggests setting vm.overcommit_memory=1, not vm.overcommit_ratio.
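Worth adding for anyone who lands here: vm.overcommit_ratio is only consulted when vm.overcommit_memory=2 (strict accounting), so with the recommended value of 1 it has no effect. A quick way to see the limit derived from the ratio:

sysctl vm.overcommit_memory vm.overcommit_ratio   # ratio is ignored unless overcommit_memory=2
grep CommitLimit /proc/meminfo                    # limit computed from overcommit_ratio, enforced only in mode 2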

Guru

@Augustine,

Thanks for the feedback and for reporting this. I have corrected the answer.

Cheers,

Li

Li Wang, Technical Solution Manager

