Support Questions

Best practices for ulimits? (number of open file descriptors and processes)

Solved

What are the recommended starting values for ulimits for each component?

The Ambari 2.1 doc says 10000, but should some services be started higher, say 32k?

Is there a good way to estimate these values from cluster size, memory/CPU, number of blocks, etc.? We would like to raise ulimits proactively rather than wait for a service to fail because a limit was hit.

1 ACCEPTED SOLUTION

Re: Best practices for ulimits? (number of open file descriptors and processes)

Ulimits depend less on the size of the cluster than on the individual node: its workloads and user concurrency. As a best practice I set the values below; they are large enough that they will probably never be reached.

* - nofile 32768

* - nproc 65536
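On a typical Linux node these lines go into /etc/security/limits.conf or a drop-in file under /etc/security/limits.d/. A minimal sketch using the values above (the file name is illustrative):

```
# /etc/security/limits.d/hadoop.conf  (illustrative path)
# domain  type  item    value
*         -     nofile  32768
*         -     nproc   65536
```

The `-` type sets both the soft and hard limit, and the settings only apply to new login sessions, so services must be restarted to pick them up. You can verify from a fresh session with `ulimit -n` (open files) and `ulimit -u` (processes).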

Note that Ambari itself adjusts the core file limit (ulimit -c) as part of the commands it executes, as in this example:

[Screenshot: 319-screen-shot-2015-10-23-at-21442-pm.png]
