Created on 12-24-2016 03:55 PM
SYMPTOMS: Ambari configures "hdfs - nofile 128000" in /etc/security/limits.d/hdfs.conf, but when the DataNode (or any other) process is started by Ambari, it still runs with a limit of only 8192 open files:
sudo grep open /proc/19608/limits
Max open files            8192                 8192                 files
(19608 is the PID of the DataNode process in this case).
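To reproduce this check on your own cluster, you can look up the DataNode PID and compare the effective limit of the running process with the value Ambari wrote to limits.d. This is a minimal sketch, assuming the standard DataNode main class name; the PID placeholder is hypothetical and will differ on your nodes:

# Find the DataNode PID (assumes the standard org.apache.hadoop.hdfs main class)
pgrep -f org.apache.hadoop.hdfs.server.datanode.DataNode

# Effective limit of the running process (replace <pid> with the value from above)
sudo grep "Max open files" /proc/<pid>/limits

# Limit Ambari configured for the hdfs user
cat /etc/security/limits.d/hdfs.conf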
As a consequence, the DataNode logs fill with millions of entries like the following, creating a severe disk I/O bottleneck:
WARN mortbay.log (Slf4jLog.java:warn(89)) - EXCEPTION java.io.IOException: Too many open files
ROOT CAUSE: The line "session required pam_limits.so" must be uncommented in /etc/pam.d/su and /etc/pam.d/sudo; otherwise the ulimit values from /etc/security/limits.d/hdfs.conf (as well as yarn.conf, hive.conf, and ams.conf) are not applied when Ambari starts the processes. Ambari does not check for this. This is a known issue reported in internal BUG-38892.
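For reference, on many systems the relevant line in /etc/pam.d/su ships commented out, which is exactly the state that causes the problem; the surrounding lines vary by distribution, and only the pam_limits.so line matters here:

# /etc/pam.d/su (excerpt) - as shipped, the line may be commented out:
# session    required   pam_limits.so

# After the workaround it must read:
session    required   pam_limits.so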
SOLUTION: Not yet fixed.
WORKAROUND: Uncomment the line "session required pam_limits.so" in /etc/pam.d/su on each node and restart the services. A sketch of how this could look is shown below. Log a case with the HWX support team to get a patch for the bug.
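A minimal sketch of the workaround on one node, assuming the line is present but commented out as "# session required pam_limits.so" (adjust the sed pattern if your file differs), followed by a verification step after the services have been restarted from Ambari:

# Uncomment the pam_limits.so line in /etc/pam.d/su (backup kept as /etc/pam.d/su.bak)
sudo sed -i.bak -E 's/^#\s*(session\s+required\s+pam_limits\.so)/\1/' /etc/pam.d/su

# After restarting the DataNode from Ambari, confirm the new limit took effect
sudo grep "Max open files" /proc/$(pgrep -f org.apache.hadoop.hdfs.server.datanode.DataNode)/limits
# Expected: Max open files            128000               128000               files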
Created on 04-07-2017 08:55 PM
Thank you! I've pulled most of my hair out researching this problem. I'm thankful you posted a workaround!
Created on 07-27-2018 09:41 AM