
NiFi RouteText Processor - Too Many Open Files

Master Guru

I am running a CSV file with approximately 300,000 records through the RouteText processor and am getting the following "too many open files" error:

NiFi App Log:

2016-01-05 23:53:57,540 WARN [Timer-Driven Process Thread-10] o.a.n.c.t.ContinuallyRunProcessorTask
org.apache.nifi.processor.exception.FlowFileAccessException: Exception in callback: java.io.FileNotFoundException: /opt/nifi-1.1.0.0-10/content_repository/100/1452038037470-66660 (Too many open files)
    at org.apache.nifi.controller.repository.StandardProcessSession.append(StandardProcessSession.java:2048) ~[nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
    at org.apache.nifi.processors.standard.RouteText.appendLine(RouteText.java:499) ~[na:na]
    at org.apache.nifi.processors.standard.RouteText.access$100(RouteText.java:79) ~[na:na]
    at org.apache.nifi.processors.standard.RouteText$1.process(RouteText.java:433) ~[na:na]
    at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:1806) ~[nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
    at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:1777) ~[nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
    at org.apache.nifi.processors.standard.RouteText.onTrigger(RouteText.java:360) ~[na:na]
    at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) ~[nifi-api-1.1.0.0-10.jar:1.1.0.0-10]
    at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1146) ~[nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
    at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:139) [nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
    at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:49) [nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
    at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:119) [nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_91]
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) [na:1.7.0_91]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) [na:1.7.0_91]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.7.0_91]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_91]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_91]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_91]
Caused by: java.io.FileNotFoundException: /opt/nifi-1.1.0.0-10/content_repository/100/1452038037470-66660 (Too many open files)
    at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_91]
    at java.io.FileOutputStream.<init>(FileOutputStream.java:221) ~[na:1.7.0_91]
    at org.apache.nifi.controller.repository.FileSystemRepository.write(FileSystemRepository.java:862) ~[nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
    at org.apache.nifi.controller.repository.FileSystemRepository.write(FileSystemRepository.java:831) ~[nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
    at org.apache.nifi.controller.repository.StandardProcessSession.append(StandardProcessSession.java:2008) ~[nifi-framework-core-1.1.0.0-10.jar:1.1.0.0-10]
    ... 18 common frames omitted

I have run the following, and all is fine:

hadoop dfsadmin -report

I have also checked:

ulimit -Sn

ulimit -Hn

both of which report a limit of 10000.
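
A sketch of how to check the limit that the running NiFi process actually has, rather than the one the shell reports (the pgrep pattern is an assumption about how NiFi appears in the process list):

# Find the NiFi pid; the match pattern may need adjusting for your install
NIFI_PID=$(pgrep -f org.apache.nifi.NiFi)

# Show the open-files limit the running JVM actually inherited,
# which can differ from the limit in your interactive shell
grep "Max open files" /proc/$NIFI_PID/limits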


5 REPLIES


10000 is not an adequate setting. Please update your system config as outlined here: http://nifi.apache.org/quickstart.html

Master Guru

Thanks @Andrew Grande, but after making the adjustments per the quickstart on the sandbox/VM, it is still producing the same error.

ACCEPTED SOLUTION

Hello @Sunile Manjee. As Andrew mentions, http://nifi.apache.org/quickstart.html does outline how to alter the settings so that the NiFi process can have sufficient open files. One common gotcha is that the setting gets tied to a different user than the one NiFi is executed as, or that the session NiFi starts from does not actually reflect the new open-files setting. Are you running 'bin/nifi.sh start' in the same terminal in which you run 'ulimit -a', to confirm the setting has taken effect?
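
As a sketch of that check, assuming a default install layout like the path in the stack trace above:

# Run in the same shell session that will launch NiFi,
# so the child process inherits this session's limits
ulimit -n                  # confirm the raised open-files limit is in effect

cd /opt/nifi-1.1.0.0-10    # install path taken from the stack trace; adjust as needed
bin/nifi.sh start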

Another good technique is to run 'lsof -p 12345', assuming the pid of NiFi is 12345; it will show you all the open file handles that the NiFi process has.
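
For example, with 12345 again standing in for the real pid:

# Count all open file handles held by the NiFi process
lsof -p 12345 | wc -l

# A rough way to see which files are held open most often
lsof -p 12345 | awk '{print $NF}' | sort | uniq -c | sort -rn | head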

Thanks

Joe

Master Guru

Thank you for the help. Here are the steps I performed on my sandbox to fix the issue:

Added to /etc/security/limits.conf:

* hard nofile 50000

* soft nofile 50000

* hard nproc 10000

* soft nproc 10000

Added to /etc/security/limits.d/90-nproc.conf:

* soft nproc 10000

Added to /etc/sysctl.conf:

fs.file-max = 50000

Then re-read sysctl.conf:

/sbin/sysctl -p

Shut down ALL services through Ambari.

Rebooted CentOS.

Then, as the root user, verified the new limits with:

ulimit -a

And done. All works.
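
For anyone following these steps, a quick post-reboot check that the values above actually landed (run as the user that launches NiFi, not only as root):

# Kernel-wide file handle ceiling from /etc/sysctl.conf
sysctl fs.file-max    # expect: fs.file-max = 50000

# Per-session limits from /etc/security/limits.conf
ulimit -Hn            # expect 50000 (hard nofile)
ulimit -Sn            # expect 50000 (soft nofile)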


Great. Thanks for providing that follow-up!