Created 11-13-2015 05:51 AM
What is the recommended configuration for the vm.overcommit_memory (0, 1, or 2) and vm.overcommit_ratio settings in sysctl?
I am looking specifically at a Spark cluster.
I found the following link, which suggests vm.overcommit_memory should be set to 1 for MapReduce Streaming use cases:
https://www.safaribooksonline.com/library/view/hadoop-operations/9781449327279/ch04.html
Do we have any best practices around this?
Created on 11-21-2015 12:34 AM - last edited on 12-19-2019 01:53 PM by lwang
This kernel documentation is very useful: https://www.kernel.org/doc/Documentation/vm/overcommit-accounting
And you are right: the best practice is to set vm.overcommit_memory to 1.
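As a sketch of how this is typically applied on a Linux node (the paths and commands follow standard sysctl conventions and are not taken from this thread):

```shell
# Inspect the current values (read-only, safe to run as any user on Linux):
cat /proc/sys/vm/overcommit_memory   # 0 = heuristic, 1 = always allow, 2 = strict
cat /proc/sys/vm/overcommit_ratio    # only consulted when overcommit_memory=2

# To apply the recommended setting immediately (requires root):
#   sysctl -w vm.overcommit_memory=1
#
# To persist it across reboots, add this line to /etc/sysctl.conf
# (or a file under /etc/sysctl.d/) and run `sysctl -p`:
#   vm.overcommit_memory = 1
```

Note that vm.overcommit_ratio only takes effect in strict mode (vm.overcommit_memory=2), so with the recommended value of 1 it can be left at its default.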
Created 02-02-2016 01:46 AM
@Laurence Da Luz can you accept the answer?
Created 12-19-2019 12:59 PM
This answer incorrectly summarized the content of the link referenced. The resource listed suggests setting vm.overcommit_memory=1, not vm.overcommit_ratio.
Created 12-19-2019 01:53 PM
Thanks for the feedback, and thank you for reporting this. I have corrected the answer.
Cheers,
Li
Li Wang, Technical Solution Manager