Member since: 09-29-2016
Posts: 54
Kudos received: 9
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2815 | 07-03-2017 09:12 PM |
| | 2171 | 05-22-2017 08:19 PM |
04-20-2017 09:12 PM
Hello, I am currently trying to test a flow using the PutHiveStreaming processor in NiFi-1.1.0. In my flow I am getting a csv file, inferring the Avro schema, converting the file from csv to Avro, and from there sending it into the PutHiveStreaming processor. However, I keep getting a "HiveWriter$ConnectFailure: Failed connecting to EndPoint" error in NiFi. I am using HDP version 2.5.3 for Hive. I have seen that there were some issues with NiFi and HDP 2.5 previously, as mentioned here: https://issues.apache.org/jira/browse/NIFI-2828, but to my understanding they were fixed in NiFi-1.1.0, which is the version I am using. Does anyone have any ideas of what could be causing this error? Any insight on this would be greatly appreciated. The full NiFi error log is the following:
ERROR [Timer-Driven Process Thread-18] o.a.n.processors.hive.PutHiveStreaming
org.apache.nifi.util.hive.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://<hive-metastore-node>:9083', database='analytics', table='agent', partitionVals=[] }
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:80) ~[nifi-hive-processors-1.1.0.jar:1.1.0]
at org.apache.nifi.util.hive.HiveUtils.makeHiveWriter(HiveUtils.java:45) ~[nifi-hive-processors-1.1.0.jar:1.1.0]
at org.apache.nifi.processors.hive.PutHiveStreaming.makeHiveWriter(PutHiveStreaming.java:829) ~[nifi-hive-processors-1.1.0.jar:1.1.0]
at org.apache.nifi.processors.hive.PutHiveStreaming.getOrCreateWriter(PutHiveStreaming.java:740) ~[nifi-hive-processors-1.1.0.jar:1.1.0]
at org.apache.nifi.processors.hive.PutHiveStreaming.lambda$onTrigger$7(PutHiveStreaming.java:464) ~[nifi-hive-processors-1.1.0.jar:1.1.0]
at org.apache.nifi.processors.hive.PutHiveStreaming$$Lambda$396/393641412.process(Unknown Source) ~[na:na]
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2082) ~[na:na]
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2053) ~[na:na]
at org.apache.nifi.processors.hive.PutHiveStreaming.onTrigger(PutHiveStreaming.java:391) ~[nifi-hive-processors-1.1.0.jar:1.1.0]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) [nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.0.jar:1.1.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_45]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_45]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_45]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_45]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_45]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
Caused by: org.apache.nifi.util.hive.HiveWriter$TxnBatchFailure: Failed acquiring Transaction Batch from EndPoint: {metaStoreUri='thrift://<hive-metastore-node>:9083', database='analytics', table='agent', partitionVals=[] }
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:255) ~[nifi-hive-processors-1.1.0.jar:1.1.0]
at org.apache.nifi.util.hive.HiveWriter.<init>(HiveWriter.java:74) ~[nifi-hive-processors-1.1.0.jar:1.1.0]
... 20 common frames omitted
Caused by: org.apache.hive.hcatalog.streaming.TransactionError: Unable to acquire lock on {metaStoreUri='thrift://<hive-metastore-node>:9083', database='analytics', table='agent', partitionVals=[] }
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:578) ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransaction(HiveEndPoint.java:547) ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
at org.apache.nifi.util.hive.HiveWriter.nextTxnBatch(HiveWriter.java:252) ~[nifi-hive-processors-1.1.0.jar:1.1.0]
... 21 common frames omitted
Caused by: org.apache.thrift.transport.TTransportException: null
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132) ~[libthrift-0.9.2.jar:0.9.2]
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) ~[libthrift-0.9.2.jar:0.9.2]
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429) ~[libthrift-0.9.2.jar:0.9.2]
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318) ~[libthrift-0.9.2.jar:0.9.2]
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219) ~[libthrift-0.9.2.jar:0.9.2]
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69) ~[libthrift-0.9.2.jar:0.9.2]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_lock(ThriftHiveMetastore.java:3906) ~[hive-metastore-1.2.1.jar:1.2.1]
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.lock(ThriftHiveMetastore.java:3893) ~[hive-metastore-1.2.1.jar:1.2.1]
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.lock(HiveMetaStoreClient.java:1863) ~[hive-metastore-1.2.1.jar:1.2.1]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_45]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_45]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_45]
at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_45]
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152) ~[hive-metastore-1.2.1.jar:1.2.1]
at com.sun.proxy.$Proxy246.lock(Unknown Source) ~[na:na]
at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.beginNextTransactionImpl(HiveEndPoint.java:573) ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
... 23 common frames omitted
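For diagnosis: the failure happens while acquiring a metastore lock, so it is worth first ruling out plain connectivity to the Thrift port from the NiFi host. Below is a minimal sketch (Python; the hostname is a placeholder matching the <hive-metastore-node> in the log above):

```python
import socket

# Placeholder host, matching <hive-metastore-node> in the log above.
HOST, PORT = 'hive-metastore-node', 9083

try:
    # A plain TCP connect only rules out network/firewall problems;
    # it does not exercise the Thrift handshake or Hive's lock manager.
    with socket.create_connection((HOST, PORT), timeout=5):
        print('Metastore port %d is reachable' % PORT)
except OSError as exc:
    print('Cannot reach metastore: %s' % exc)
```

If the port is reachable, the usual suspects for a failed lock are the Hive streaming prerequisites on the target table (transactional, bucketed, stored as ORC) and the metastore's transaction settings (e.g. hive.txn.manager, hive.support.concurrency), none of which this snippet can verify.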
Labels:
- Apache NiFi
03-30-2017 11:00 PM
Hello, I currently have a flow in NiFi that receives flowfiles and routes them based on topic. However, every flowfile received in the flow is a batch that contains multiple messages, and the number of lines in each message can vary, so I cannot split by number of lines. Is there a way in NiFi to split based on a specific text sequence? The main goal is to know how many messages come inside each batch, so a way to count how many times a specific word occurs inside the content of the flowfile, or to split the flowfile based on text content, would be really helpful, because based on the number of splits I would know how many messages are in each batch. Is there a way to do something like this in NiFi? I am using NiFi version NiFi-1.1.0. Any suggestions would truly be appreciated!
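For reference, one possible approach (a minimal sketch, not a confirmed answer): SplitContent can split on an arbitrary byte sequence, and an ExecuteScript processor can count how often a delimiter occurs. The Jython sketch below assumes a hypothetical delimiter string MSG_START and writes the count to a message.count attribute:

```python
# ExecuteScript (Jython engine) sketch -- 'session' and 'REL_SUCCESS' are
# bound by the processor; 'MSG_START' is a hypothetical delimiter.
from org.apache.commons.io import IOUtils
from java.nio.charset import StandardCharsets
from org.apache.nifi.processor.io import InputStreamCallback

DELIMITER = 'MSG_START'

class CountDelimiter(InputStreamCallback):
    def __init__(self):
        self.count = 0

    def process(self, inputStream):
        # Reads the whole flowfile content; fine for modestly sized batches.
        text = IOUtils.toString(inputStream, StandardCharsets.UTF_8)
        self.count = text.count(DELIMITER)

flowFile = session.get()
if flowFile is not None:
    callback = CountDelimiter()
    session.read(flowFile, callback)
    flowFile = session.putAttribute(flowFile, 'message.count', str(callback.count))
    session.transfer(flowFile, REL_SUCCESS)
```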
Labels:
- Apache NiFi
02-16-2017 05:00 PM
Hello, I seem to be running into an issue using NiFi version NiFi-1.1.0 where our HandleHttpRequest processor keeps throwing the following error:
ERROR [Timer-Driven Process Thread-20] o.a.n.p.standard.HandleHttpRequest HandleHttpRequest[id=11e56f5b-b901-3672-9424-05fc8f846a3e] HandleHttpRequest[id=11e56f5b-b901-3672-9424-05fc8f846a3e] failed to process due to org.apache.nifi.processor.exception.FlowFileAccessException: Failed to import data from HttpInputOverHTTP@802fcbf[c=0,s=ERROR:java.util.concurrent.TimeoutException: Idle timeout expired: 30025/30000 ms] for StandardFlowFileRecord[uuid=3e0a3a13-0d05-44fb-af71-1fa0b2519db8,claim=,offset=0,name=1866449175849578,size=0] due to org.apache.nifi.processor.exception.FlowFileAccessException: Unable to create ContentClaim due to java.io.IOException: java.util.concurrent.TimeoutException: Idle timeout expired: 30025/30000 ms; rolling back session: org.apache.nifi.processor.exception.FlowFileAccessException: Failed to import data from HttpInputOverHTTP@802fcbf[c=0,s=ERROR:java.util.concurrent.TimeoutException: Idle timeout expired: 30025/30000 ms] for StandardFlowFileRecord[uuid=3e0a3a13-0d05-44fb-af71-1fa0b2519db8,claim=,offset=0,name=1866449175849578,size=0] due to org.apache.nifi.processor.exception.FlowFileAccessException: Unable to create ContentClaim due to java.io.IOException: java.util.concurrent.TimeoutException: Idle timeout expired: 30025/30000 ms
I have been looking through the forums and the NiFi community and saw that a fix had been applied in version NiFi-1.0.0, as mentioned in the following link: https://issues.apache.org/jira/browse/NIFI-1732. I see that the fix consisted of changing the hardcoded 30-second default to "async.setTimeout(Long.MAX_VALUE); // timeout is handled by HttpContextMap". However, even though the fix has been applied, I am still coming across the error. Does anyone know of any other way to fix this issue, or have an idea of what could cause this error to happen? Any help would be greatly appreciated!
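For what it's worth, one way to try to reproduce the idle timeout from the client side (a sketch; host, port, and path are placeholders for the HandleHttpRequest listener) is to send complete request headers but stall the body past 30 seconds. If this triggers the same error, slow or stalled clients are the likely cause rather than the processor itself:

```python
import socket
import time

# Placeholder listener address; substitute the HandleHttpRequest host/port.
HOST, PORT = 'nifi-host', 8081

sock = socket.create_connection((HOST, PORT))
# Send complete headers that promise a body, then withhold the body.
sock.sendall(b'POST / HTTP/1.1\r\n'
             b'Host: nifi-host\r\n'
             b'Content-Length: 5\r\n\r\n')
time.sleep(35)  # stall past the 30 s idle timeout seen in the log
sock.sendall(b'hello')
print(sock.recv(4096))
sock.close()
```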
Labels:
- Apache NiFi
01-25-2017 09:52 PM
@mpayne thanks, this clarifies a lot!
01-25-2017 06:08 PM
Hi, I am currently experiencing some issues when sending a response from NiFi's HandleHttpResponse processor. I keep getting an IOException constantly for some of the messages, and I think it could be because the server we are using is configured to close and clear the connection after 3 consecutive failed keep-alives. Because of this, I was wondering if there is a way for me to keep track of and verify NiFi's keep-alive responses? I am using NiFi version NiFi-1.1.0. In addition, I wanted to know if there are any currently reported bugs regarding Jetty in NiFi that could be causing this error constantly for some messages? I would appreciate any insight on this issue. Below I have pasted the full log of the error:
ERROR [Timer-Driven Process Thread-10] o.a.n.p.standard.HandleHttpResponse org.apache.nifi.processor.exception.ProcessException: IOException thrown from HandleHttpResponse[id=38c1eb9d-139b-3ce4-aca4-5f0e7c7b3f8f]: org.eclipse.jetty.io.EofException
at org.apache.nifi.controller.repository.StandardProcessSession.exportTo(StandardProcessSession.java:2762) ~[na:na]
at org.apache.nifi.processors.standard.HandleHttpResponse.onTrigger(HandleHttpResponse.java:166) ~[nifi-standard-processors-1.1.0.jar:1.1.0]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) [nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.0.jar:1.1.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_45]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_45]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_45]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_45]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_45]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
Caused by: org.eclipse.jetty.io.EofException: null
at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:197) ~[jetty-io-9.3.9.v20160517.jar:9.3.9.v20160517]
at org.eclipse.jetty.io.WriteFlusher.flush(WriteFlusher.java:419) ~[jetty-io-9.3.9.v20160517.jar:9.3.9.v20160517]
at org.eclipse.jetty.io.WriteFlusher.completeWrite(WriteFlusher.java:375) ~[jetty-io-9.3.9.v20160517.jar:9.3.9.v20160517]
at org.eclipse.jetty.io.SelectChannelEndPoint$3.run(SelectChannelEndPoint.java:107) ~[jetty-io-9.3.9.v20160517.jar:9.3.9.v20160517]
at org.eclipse.jetty.io.SelectChannelEndPoint.onSelected(SelectChannelEndPoint.java:193) ~[jetty-io-9.3.9.v20160517.jar:9.3.9.v20160517]
at org.eclipse.jetty.io.ManagedSelector$SelectorProducer.processSelected(ManagedSelector.java:283) ~[jetty-io-9.3.9.v20160517.jar:9.3.9.v20160517]
at org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:181) ~[jetty-io-9.3.9.v20160517.jar:9.3.9.v20160517]
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249) ~[jetty-util-9.3.9.v20160517.jar:9.3.9.v20160517]
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) ~[jetty-util-9.3.9.v20160517.jar:9.3.9.v20160517]
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) ~[jetty-util-9.3.9.v20160517.jar:9.3.9.v20160517]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) ~[jetty-util-9.3.9.v20160517.jar:9.3.9.v20160517]
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) ~[jetty-util-9.3.9.v20160517.jar:9.3.9.v20160517]
... 1 common frames omitted
Caused by: java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[na:1.8.0_45]
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) ~[na:1.8.0_45]
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[na:1.8.0_45]
at sun.nio.ch.IOUtil.write(IOUtil.java:51) ~[na:1.8.0_45]
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471) ~[na:1.8.0_45]
at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:175) ~[jetty-io-9.3.9.v20160517.jar:9.3.9.v20160517]
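To check whether keep-alive is actually being honored between a client and NiFi's embedded Jetty (a minimal sketch; host, port, and path are placeholders), a requests.Session reuses one pooled TCP connection, so watching the Connection header and whether back-to-back requests succeed gives a quick signal:

```python
import requests

# Placeholder endpoint for the HandleHttpRequest listener.
URL = 'http://nifi-host:8081/'

# A Session pools connections, so consecutive requests reuse the same
# TCP connection whenever the server honors keep-alive.
with requests.Session() as session:
    for i in range(5):
        resp = session.post(URL, data='ping %d' % i)
        print(resp.status_code, resp.headers.get('Connection'))
```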
Labels:
- Apache NiFi
01-24-2017 10:32 PM
Hello, I recently began receiving this warning in my NiFi cluster:
WARN [Replicate Request Thread-10] o.a.n.c.c.h.r.ThreadPoolRequestReplicator Response time from node was slow for each of the last 3 requests made. To see more information about timing, enable DEBUG logging for org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.
I am using a 4-node cluster and I am running NiFi version NiFi-1.1.0. I had never seen this warning before and I am unsure of what is causing it. Does anyone have any ideas of what could be causing this sudden warning?
Labels:
- Apache NiFi
01-24-2017 12:36 AM
@Sunile Manjee, authorizers.xml is the same on all of the nodes. Since I have been using the cluster since previous versions of NiFi, I reference the authorized-users.xml file in the legacy authorized users file path on all the nodes, and all of them have the same authorized-users.xml as well.
01-23-2017 11:56 PM
Hi, I am currently using NiFi version NiFi-1.1.0 and I have a cluster with three nodes. I tried adding a fourth node to this cluster, but I started getting the error "Failed to connect node to cluster because local flow is different than cluster flow". I configured the node and made sure that it did not have a flow.xml.gz, because I was trying to get it to inherit the flow.xml.gz from the cluster. When it threw the error, I tried giving it the same flow.xml.gz file as the other nodes in the cluster, but that also failed. I am also using an authorized-users.xml file, which I reference in the authorizers.xml file on the other nodes. I thought that maybe this was causing the issue, so I deleted the authorized-users.xml file from the new node and tried to connect it with no flow.xml.gz file and no authorized-users.xml file, but this also failed. Would I have to stop all nodes and then start them all at the same time in order to add the new node to the cluster? Is there a way to add the new node to the cluster without running into these issues? Any suggestions would be greatly appreciated. Below is the error message that I keep receiving from the new node:
org.apache.nifi.controller.UninheritableFlowException: Failed to connect node to cluster because local flow is different than cluster flow.
at org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:894) ~[nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:493) ~[nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:770) [nifi-jetty-1.1.0.jar:1.1.0]
at org.apache.nifi.NiFi.<init>(NiFi.java:156) [nifi-runtime-1.1.0.jar:1.1.0]
at org.apache.nifi.NiFi.main(NiFi.java:262) [nifi-runtime-1.1.0.jar:1.1.0]
Caused by: org.apache.nifi.controller.UninheritableFlowException: Proposed Authorizer is not inheritable by the flow controller because of Authorizer differences: Proposed Authorizations do not match current Authorizations
at org.apache.nifi.controller.StandardFlowSynchronizer.sync(StandardFlowSynchronizer.java:253) ~[nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1461) ~[nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.persistence.StandardXMLFlowConfigurationDAO.load(StandardXMLFlowConfigurationDAO.java:83) ~[nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:678) ~[nifi-framework-core-1.1.0.jar:1.1.0]
at org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:872) ~[nifi-framework-core-1.1.0.jar:1.1.0]
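Since the root cause in the trace is "Proposed Authorizations do not match current Authorizations", a quick sanity check before restarting (a sketch; paths assume a default conf/ layout and may differ per install) is to hash the authorizer-related files on each node and compare:

```python
import hashlib
import os

# Default conf/ layout assumed; adjust paths per node.
FILES = ['conf/users.xml', 'conf/authorizations.xml', 'conf/flow.xml.gz']

for path in FILES:
    if os.path.exists(path):
        with open(path, 'rb') as f:
            print(path, hashlib.sha256(f.read()).hexdigest())
    else:
        print(path, '(missing)')
```

Identical hashes guarantee the files match; differing hashes only warrant a closer look, since NiFi compares parsed fingerprints rather than raw bytes (and flow.xml.gz in particular can differ byte-wise, e.g. via gzip timestamps, while holding an equivalent flow).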
Labels:
- Apache NiFi
01-12-2017 11:09 PM
@Matt thanks for the clarification, it helps a lot!
01-12-2017 10:01 PM
Hello, I currently run my NiFi cluster with LDAP authentication. I was wondering whether I could configure NiFi to allow SSO in addition to LDAP authentication, or whether I can only use one method of authentication at a time? The reason I ask is that I am trying to grant access to additional users, but these users cannot authenticate through LDAP, so I wanted to know if it is possible to allow these additional users through SSO in some way in NiFi without having to configure Kerberos authentication instead of LDAP. I know that NiFi 1.0.0 added the "Identity Mapping Properties". Would these mapping properties help? Or is there any other way this could be possible?
Labels:
- Apache NiFi