
SYMPTOMS: Ambari configures "hdfs - nofile 128000" in /etc/security/limits.d/hdfs.conf, but when the DataNode (or any other) process is started by Ambari, it still runs with a limit of only 8192 open files:

sudo grep open /proc/19608/limits 
Max open files 8192 8192 files 

(19608 is the pid of the datanode process in this case).
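The same check can be run on any node. A minimal sketch, where DN_PID defaults to "self" so the snippet runs anywhere; on a cluster node you would set it to the actual DataNode pid (e.g. via pgrep):

```shell
# Read the effective open-file limit of a running process from /proc.
# DN_PID is an assumed variable; on a real node set it to the DataNode
# pid, e.g.: DN_PID=$(pgrep -f DataNode | head -n1)
DN_PID=${DN_PID:-self}
grep 'Max open files' /proc/"$DN_PID"/limits
```

If the reported soft/hard values are 8192 rather than 128000, the limits file is not being applied to that process.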

As a consequence, the DataNode logs fill with millions of entries like the following, creating a significant disk I/O bottleneck:

WARN mortbay.log (Slf4jLog.java:warn(89)) - 
EXCEPTION java.io.IOException: Too many open files 

ROOT CAUSE: The line "session required pam_limits.so" must be uncommented in /etc/pam.d/su and /etc/pam.d/sudo; otherwise the ulimit values from /etc/security/limits.d/hdfs.conf (and yarn.conf, hive.conf, ams.conf) are not applied to processes Ambari starts via su/sudo. Ambari does not check for this. This is a known issue tracked as internal BUG-38892.
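You can verify the root cause directly. A small hedged check (the grep pattern is an assumption about whitespace; adjust if your PAM files use a different layout):

```shell
# Report whether pam_limits is active (uncommented) in the PAM stacks
# Ambari goes through when starting services via su/sudo.
for f in /etc/pam.d/su /etc/pam.d/sudo; do
  if grep -Eq '^[[:space:]]*session[[:space:]]+required[[:space:]]+pam_limits\.so' "$f" 2>/dev/null; then
    echo "$f: pam_limits enabled"
  else
    echo "$f: pam_limits missing or commented out"
  fi
done
```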

SOLUTION: Not yet fixed; no release containing a fix is available.

WORKAROUND: Uncomment the line "session required pam_limits.so" in /etc/pam.d/su on each node and restart the affected services. Log a case with the Hortonworks (HWX) support team to get a patch for the bug.
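The edit itself can be scripted. A hedged sketch: PAM_FILE is a parameter, and for a safe dry run it defaults here to a hypothetical local file seeded with sample content; on a real node you would run the cp and sed lines as root against /etc/pam.d/su (and /etc/pam.d/sudo):

```shell
# Uncomment "session required pam_limits.so" in a PAM config file.
# PAM_FILE is an assumed variable; the default below is a local demo
# file so the snippet can be exercised without root.
PAM_FILE=${PAM_FILE:-./su.pam-test}
printf '#session    required     pam_limits.so\n' > "$PAM_FILE"   # demo content only
cp "$PAM_FILE" "$PAM_FILE.bak"                                    # back up before editing
sed -i 's/^[#[:space:]]*\(session[[:space:]]*required[[:space:]]*pam_limits\.so\)/\1/' "$PAM_FILE"
grep '^session' "$PAM_FILE"
```

After the edit, restart the services from Ambari so the new limit is picked up, then re-check /proc/<pid>/limits.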

Comments

Thank you! I've pulled most of my hair out researching this problem. I'm thankful you posted a workaround!

Last update: ‎12-24-2016 03:55 PM