How can I keep track of keep-alive response on NiFi?
Labels: Apache NiFi
Created 01-25-2017 06:08 PM
Hi,
I am currently experiencing issues when sending responses from NiFi's HandleHttpResponse processor. For some of the messages I consistently get an IOException, and I suspect it may be because the server we are using is configured to close and clear the connection after three consecutive failed keep-alives. Given that, is there a way to track and verify NiFi's keep-alive responses? I am using NiFi 1.1.0. I would also like to know whether there are any currently reported Jetty-related bugs in NiFi that could be causing this error. I would appreciate any insight on this issue. The full error log is pasted below:
ERROR [Timer-Driven Process Thread-10] o.a.n.p.standard.HandleHttpResponse org.apache.nifi.processor.exception.ProcessException: IOException thrown from HandleHttpResponse[id=38c1eb9d-139b-3ce4-aca4-5f0e7c7b3f8f]: org.eclipse.jetty.io.EofException
    at org.apache.nifi.controller.repository.StandardProcessSession.exportTo(StandardProcessSession.java:2762) ~[na:na]
    at org.apache.nifi.processors.standard.HandleHttpResponse.onTrigger(HandleHttpResponse.java:166) ~[nifi-standard-processors-1.1.0.jar:1.1.0]
    at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.1.0.jar:1.1.0]
    at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) [nifi-framework-core-1.1.0.jar:1.1.0]
    at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.0.jar:1.1.0]
    at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.0.jar:1.1.0]
    at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.0.jar:1.1.0]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_45]
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_45]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_45]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_45]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_45]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_45]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
Caused by: org.eclipse.jetty.io.EofException: null
    at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:197) ~[jetty-io-9.3.9.v20160517.jar:9.3.9.v20160517]
    at org.eclipse.jetty.io.WriteFlusher.flush(WriteFlusher.java:419) ~[jetty-io-9.3.9.v20160517.jar:9.3.9.v20160517]
    at org.eclipse.jetty.io.WriteFlusher.completeWrite(WriteFlusher.java:375) ~[jetty-io-9.3.9.v20160517.jar:9.3.9.v20160517]
    at org.eclipse.jetty.io.SelectChannelEndPoint$3.run(SelectChannelEndPoint.java:107) ~[jetty-io-9.3.9.v20160517.jar:9.3.9.v20160517]
    at org.eclipse.jetty.io.SelectChannelEndPoint.onSelected(SelectChannelEndPoint.java:193) ~[jetty-io-9.3.9.v20160517.jar:9.3.9.v20160517]
    at org.eclipse.jetty.io.ManagedSelector$SelectorProducer.processSelected(ManagedSelector.java:283) ~[jetty-io-9.3.9.v20160517.jar:9.3.9.v20160517]
    at org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:181) ~[jetty-io-9.3.9.v20160517.jar:9.3.9.v20160517]
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249) ~[jetty-util-9.3.9.v20160517.jar:9.3.9.v20160517]
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) ~[jetty-util-9.3.9.v20160517.jar:9.3.9.v20160517]
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) ~[jetty-util-9.3.9.v20160517.jar:9.3.9.v20160517]
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) ~[jetty-util-9.3.9.v20160517.jar:9.3.9.v20160517]
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) ~[jetty-util-9.3.9.v20160517.jar:9.3.9.v20160517]
    ... 1 common frames omitted
Caused by: java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:1.8.0_45]
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) ~[na:1.8.0_45]
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[na:1.8.0_45]
    at sun.nio.ch.IOUtil.write(IOUtil.java:51) ~[na:1.8.0_45]
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) ~[na:1.8.0_45]
    at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:175) ~[jetty-io-9.3.9.v20160517.jar:9.3.9.v20160517]
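On the question of verifying keep-alive behavior: the thread does not mention a built-in NiFi counter for this, but one way to observe it from outside NiFi is to issue a request with a plain HTTP client and inspect the response headers. The sketch below is a hedged example, assuming a HandleHttpRequest listener reachable at localhost:8011 with path /contentListener; both are hypothetical placeholders for whatever your flow is actually configured to use.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class KeepAliveCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint: substitute the host, port, and path
        // configured on your HandleHttpRequest processor.
        URL url = new URL("http://localhost:8011/contentListener");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Connection", "keep-alive");

        System.out.println("Status: " + conn.getResponseCode());
        // Under HTTP/1.1, persistent connections are the default, so a
        // server that intends to keep the connection open either echoes
        // "Connection: keep-alive" or simply omits "Connection: close".
        System.out.println("Connection header: " + conn.getHeaderField("Connection"));

        // Drain the body so the connection can be returned to the pool.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            in.lines().forEach(System.out::println);
        }
    }
}

Another option, since the web handling here is Jetty running inside NiFi (which logs via logback), is to raise the log level for org.eclipse.jetty in conf/logback.xml; DEBUG output from the Jetty I/O classes records when connections are opened and closed, which is the closest thing to a keep-alive trace on the server side.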
Created 01-26-2017 04:08 PM
The stack trace shows "Connection reset by peer". There are good explanations of what this means online, but the short version is that the connection NiFi was writing to had already been closed by the other side, and NiFi was notified of that. It happened while NiFi was trying to write the response, which is an exceptional condition, so you get this stack trace. To diagnose much further, we would need to understand the systems involved in this web request/response cycle.
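As an illustration of that failure mode, here is a small self-contained Java sketch (not code from NiFi or this thread) that forces a TCP reset from a client and then keeps writing on the server side. It produces the same kind of IOException ("Connection reset by peer" or "Broken pipe", depending on platform and timing) that Jetty is surfacing in the trace above.

import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ResetDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            // Client connects, then aborts the connection: SO_LINGER with
            // a timeout of 0 makes close() send an RST instead of a FIN.
            Socket client = new Socket("localhost", server.getLocalPort());
            Socket accepted = server.accept();
            client.setSoLinger(true, 0);
            client.close();

            Thread.sleep(100); // give the RST time to arrive

            // Writing to the reset connection fails the same way the
            // Jetty trace does: an IOException from the socket write.
            try (OutputStream out = accepted.getOutputStream()) {
                for (int i = 0; i < 10; i++) {
                    out.write(new byte[8192]);
                    out.flush();
                }
            } catch (IOException e) {
                System.out.println("Write failed: " + e.getMessage());
            }
        }
    }
}

The point of the demo is that the writer has no way to know the peer is gone until the write itself fails, which is why HandleHttpResponse only discovers the closed connection at response time.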
