Too many open files
Labels: Cloudera Enterprise Data Hub
Often when provisioning clusters, nodes are cancelled because Cloudera Director cannot open more file handles:
[2019-07-23 17:31:18.585 +0000] ERROR [p-ebcce1c842e9-BackupClouderaManagerConfig] - - - com.cloudera.launchpad.pipeline.ssh.SshJobFailFastWithOutputLogging - com.cloudera.launchpad.pipeline.util.PipelineRunner: Attempt to execute job failed
java.net.SocketException: Too many open files
    at java.net.Socket.createImpl(Socket.java:460)
    at java.net.Socket.connect(Socket.java:587)
    at net.schmizz.sshj.SocketClient.connect(SocketClient.java:126)
    at com.cloudera.launchpad.sshj.SshJClient.attemptConnection(SshJClient.java:343)
    at com.cloudera.launchpad.sshj.SshJClient.attemptConnection(SshJClient.java:318)
    at com.cloudera.launchpad.sshj.SshJClient.access$000(SshJClient.java:68)
How can the file handle limit be increased, given that ulimit and limits.conf do not seem to work?
Created 07-23-2019 10:42 AM
This happens because the limits file of the process (/proc/<director_pid>/limits) shows a "Max open files" value of 1024, which is too low for most operations.
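You can confirm this by reading the limits file of the running process. The pgrep pattern below for locating the Director PID is an assumption, so adjust it to match how the process appears on your system:

# check the current "Max open files" of the running Director process
# (the pgrep pattern is an assumption; adjust it to your environment)
grep "Max open files" /proc/$(pgrep -f cloudera-director | head -n1)/limits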
Since RHEL/CentOS 7 uses systemd, one solution is to do the following:
# make a folder for custom systemd changes for this service
mkdir -p /etc/systemd/system/cloudera-director-server.service.d/
# make an override conf file so that a Director upgrade will not break the changes
vim /etc/systemd/system/cloudera-director-server.service.d/override.conf

# then add the following in that file and save/quit it
[Service]
LimitNOFILE=65536
# next reload the daemon
systemctl daemon-reload
# finally restart Director
systemctl restart cloudera-director-server
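As an aside, if your systemd version supports it, `systemctl edit` should create the same override.conf drop-in for you and reload the daemon when you save, though the explicit steps above make it clearer what is being changed:

# optional alternative to the mkdir/vim/daemon-reload steps above
systemctl edit cloudera-director-server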
If you then check the limits file of the new process, you will see 65536 as the "Max open files" value.
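For example, re-running the same check as before (same pgrep assumption) should now report the new limit:

# verify the limit picked up by the restarted process
grep "Max open files" /proc/$(pgrep -f cloudera-director | head -n1)/limits
# Max open files            65536                65536                files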
Hopefully this can help someone in the future. Cheers!
