Member since: 01-05-2017
Posts: 153
Kudos Received: 10
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4435 | 02-20-2018 07:40 PM
 | 3263 | 05-04-2017 06:46 PM
06-13-2017
02:52 PM
We figured it out, Bryan. We didn't have a message demarcator set, and once we set it the error went away! Thank you!
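For anyone who lands here with the same error, a rough illustration of what the demarcator buys you; the newline demarcator and the sample messages below are assumptions, not taken from our actual flow. With a Message Demarcator configured, ConsumeKafka packs several Kafka messages into one FlowFile separated by that character, and a downstream step can split them back apart:
# Stand-in for the content of one demarcated FlowFile (assumed newline demarcator).
printf 'msg-1\nmsg-2\nmsg-3\n' > /tmp/flowfile.txt
# Splitting on the demarcator recovers the individual Kafka messages.
split -l 1 /tmp/flowfile.txt /tmp/msg-
ls /tmp/msg-*    # msg-aa, msg-ab, msg-ac -- one message each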
06-12-2017
02:55 PM
I'm not sure whether your second comment was meant for me or not. I'm dealing with NiFi throughout the whole stack, from the raw log files on a remote server through the Remote Process Group into the Input Port and into HDFS. I guess that means I'm not a "user".
06-07-2017
04:03 PM
Thanks for the help. Your estimate for the Run Schedule was a bit high, though. When I changed it to even 30 seconds, it bottlenecked badly right before MergeContent. You were right, though: when I lowered it to 1 second, there is very little bottleneck and the error is gone.
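As a rough back-of-the-envelope on why the Run Schedule mattered so much (the message rate below is an invented number, not something measured on our cluster): with a long Run Schedule, each trigger has to drain everything that accumulated since the last one, which is exactly what piled up in front of MergeContent.
RATE=500   # assumed incoming messages per second (illustrative only)
for SCHEDULE in 30 1; do
  echo "Run Schedule of ${SCHEDULE}s -> roughly $((RATE * SCHEDULE)) messages to handle per trigger"
done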
12-10-2018
04:40 PM
Thank you @Bryan Bende for this response. My ConsumeKafka_0_10 works fine and I obviously don't have errors like @Eric Lloyd, but I can't see my data in the processor. Is that what we call back-pressure? Could this be resolved by increasing the Maximum Timer Driven Thread Count and Maximum Event Driven Thread Count? What do those mean? Could you please give some suggestions? Thank you in advance.
05-04-2017
06:18 PM
Thanks. Using an external script seems worse in terms of processing time than regex would be, and while a custom Java processor seems appealing, I don't believe that's the direction we wish to go.
05-04-2017
07:07 PM
Changing the Concurrent Tasks in ExtractText to 3 and reducing the Run Duration to 500ms fixed the problem.
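For anyone scripting this instead of using the UI, the same two settings can in principle be pushed through the NiFi REST API; the host, processor id, and revision version below are placeholders, and the exact field names may vary by NiFi version, so treat this as a sketch rather than a recipe:
# Update Concurrent Tasks and Run Duration on the ExtractText processor via the REST API.
curl -s -X PUT "http://localhost:8080/nifi-api/processors/<extracttext-processor-id>" \
  -H 'Content-Type: application/json' \
  -d '{
        "revision": { "version": 1 },
        "component": {
          "id": "<extracttext-processor-id>",
          "config": {
            "concurrentlySchedulableTaskCount": 3,
            "runDurationMillis": 500
          }
        }
      }'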
04-06-2018
07:39 PM
Hi all, I am also having a similar issue, and restarting Kafka didn't help. I tried creating a topic from the command line; the topic was created successfully and I am able to list it:
# bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".
# bin/kafka-topics.sh --list --zookeeper localhost:2181
test
But when I navigate to /kafka-logs, I cannot find any partition files there. Below is the message I see in state_change.log:
[2018-04-06 00:53:58,214] TRACE Controller 1001 epoch 1 changed partition [test,0] state from NonExistentPartition to NewPartition with assigned replicas 1001 (state.change.logger)
[2018-04-06 00:53:58,223] TRACE Controller 1001 epoch 1 changed state of replica 1001 for partition [test,0] from NonExistentReplica to NewReplica (state.change.logger)
[2018-04-06 00:53:58,281] TRACE Controller 1001 epoch 1 changed partition [test,0] from NewPartition to OnlinePartition with leader 1001 (state.change.logger)
[2018-04-06 00:53:58,282] TRACE Controller 1001 epoch 1 sending become-leader LeaderAndIsr request (Leader:1001,ISR:1001,LeaderEpoch:0,ControllerEpoch:1) to broker 1001 for partition [test,0] (state.change.logger)
[2018-04-06 00:53:58,293] TRACE Controller 1001 epoch 1 sending UpdateMetadata request (Leader:1001,ISR:1001,LeaderEpoch:0,ControllerEpoch:1) to broker 1001 for partition test-0 (state.change.logger)
[2018-04-06 00:53:58,297] TRACE Controller 1001 epoch 1 changed state of replica 1001 for partition [test,0] from NewReplica to OnlineReplica (state.change.logger)
[2018-04-06 00:53:58,372] TRACE Controller 1001 epoch 1 received response {error_code=31,partitions=[{topic=test,partition=0,error_code=31}]} for a request sent to broker XX.XX.XX.XX:6667 (id: 1001 rack: null) (state.change.logger)
[2018-04-06 00:53:58,384] TRACE Controller 1001 epoch 1 received response {error_code=31} for a request sent to broker 35.197.50.244:6667 (id: 1001 rack: null) (state.change.logger)
My HDP version is 2.3.0 and this setup is on Google Cloud (GCP). To make Kafka reachable over the public IP, I added the following additional parameters: advertised.host.name, advertised.port, host.name. Please help.
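In case it helps anyone debugging the same thing, a few checks worth running; the file paths are assumptions based on a typical HDP layout, so adjust them to your install:
# 1. Confirm where the broker actually writes partition data: it is whatever
#    log.dirs points to, which is not necessarily /kafka-logs.
grep -E '^log\.dirs' /usr/hdp/current/kafka-broker/config/server.properties
# 2. Describe the topic to confirm the partition, leader, and ISR that ZooKeeper sees.
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
# 3. Check the broker log (not just state_change.log) for the error behind error_code=31.
tail -n 100 /var/log/kafka/server.log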
04-24-2017
05:52 PM
Thanks Matt. I'm having a data loss issue I cannot figure out, and this clarified that the files not being in the "queue of the processor" isn't the culprit...
04-20-2017
05:42 PM
BTW I ended up using the Footer property in MergeContent and it worked wonderfully with no regex involved.
04-17-2017
01:07 PM
2 Kudos
@Eric Lloyd You don't have to uninstall NiFi to start clean. Just clean out all the directories under the content, flowfile, and provenance repositories and you should be good to go. When you ran out of disk space it corrupted the flowfile repository, which is how NiFi tracks the status of the flow files in the graph. If possible, I would recommend, at a minimum, moving the flowfile repository to its own disk partition.
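In case it helps, a minimal sketch of that clean-out, assuming the default repository locations from nifi.properties (content_repository, flowfile_repository, and provenance_repository under the NiFi install directory); check nifi.properties for the actual paths before deleting anything:
NIFI_HOME=/opt/nifi                      # assumed install directory
"$NIFI_HOME"/bin/nifi.sh stop
rm -rf "$NIFI_HOME"/content_repository/* \
       "$NIFI_HOME"/flowfile_repository/* \
       "$NIFI_HOME"/provenance_repository/*
"$NIFI_HOME"/bin/nifi.sh start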