Member since: 12-03-2017
Posts: 148
Kudos Received: 25
Solutions: 11
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1192 | 11-03-2023 12:17 AM
 | 2921 | 12-12-2022 09:16 PM
 | 1110 | 07-14-2022 03:25 AM
 | 1774 | 07-28-2021 04:42 AM
 | 2046 | 06-23-2020 10:08 PM
11-27-2024
02:25 AM
1 Kudo
Hello Experts, I was using the "ConsumeAzureEventHub" processor with NiFi 1.16.3. When I configured the 'Storage Container Name' property to store consumer group state, the processor automatically created the container in the storage account (if it was not present) when the processor was started. But in NiFi 1.25 I see different behavior: it does not auto-create the container on processor start, and instead shows a "container does not exist" error. Is this the expected behaviour in 1.25? If so, what is the solution? Should we create the container separately beforehand and then use it in the processor? Thanks, Mahendra
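If the newer version expects the container to already exist, one workaround is to pre-create it before starting the processor. A minimal sketch using the Azure CLI; the container and storage account names below are placeholders, not values from the original post:

```shell
# Pre-create the Event Hub checkpoint container (names are placeholders)
az storage container create \
  --name nifi-eventhub-checkpoints \
  --account-name mystorageaccount \
  --auth-mode login
```

The command is idempotent: if the container already exists it simply reports `"created": false`, so it is safe to run from a provisioning script.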
Labels:
- Apache NiFi
10-03-2024
04:32 AM
1 Kudo
Hello Experts, We have 2 node nifi cluster running on k8 cluster. We want to distribute the incoming http request on specific port to be load balanced across both nodes equally (round robin), do we have anyway in kubernetes for this? Tried headless service & ClusterIP but did not work as expected. Is there any other way to achieve this without external load balancers like AWS ELB etc. Thanks Mahendra
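For reference, kube-proxy balances a ClusterIP Service per TCP connection, so clients that reuse one HTTP keep-alive connection will stick to a single NiFi node; that is often why distribution looks uneven. A sketch of a plain ClusterIP Service (all names, labels, and ports are placeholders, not from the original post):

```yaml
# Sketch only - names, labels, and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: nifi-listen-http
spec:
  type: ClusterIP
  selector:
    app: nifi            # must match the NiFi pod labels
  ports:
    - name: listen-http
      port: 8081         # port clients call on the Service
      targetPort: 8081   # ListenHTTP port on each NiFi pod
```

With this in place, per-request balancing still depends on clients opening new connections (e.g. sending `Connection: close`) or on fronting the Service with an in-cluster reverse proxy/Ingress that does request-level round robin.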
Labels:
- Apache NiFi
09-04-2024
10:18 PM
1 Kudo
@araujo @bbende @MattWho - do you have any suggestions?
09-04-2024
07:54 AM
Hello @Mais - Were you able to deserialise and consume both the key and the value? In my case I am able to get the deserialised value but don't see the key anywhere!
09-04-2024
05:34 AM
Hello Experts, I have a NiFi Kafka consumer (ConsumeKafka_2_6) where the Kafka message body (value/flow file content) and the message key (kafka.key in the flow file attributes) are both Avro-serialized in the Confluent Kafka way. When we use "ConvertRecord + AvroReader CS + ConfluentSchemaRegistry CS" to convert the message body (value/flow file content), it works fine: it deserialises the magic byte and schema id to the correct value. But when we try to deserialise kafka.key the same way as the value - bringing the kafka.key attribute into the content (using a ReplaceText processor with ${kafka.key}) and then using "ConvertRecord + AvroReader + ConfluentSchemaRegistry" to deserialize - NiFi resolves a wrong schema id, 3567, instead of the correct schema id, 3545. Is this happening because NiFi reads kafka.key (originally a byte array) and pushes it downstream as a flow file attribute, which is a string? Is there any other way I can fix this, or another right approach? Thanks in advance! @bbende @MattWho @mattw Mahendra
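For context, the Confluent wire format prepends a magic byte (0) and a big-endian 4-byte schema id to the Avro payload, so the framing only survives if the raw bytes do. A small sketch (hypothetical helper, not NiFi code) showing how the schema id is parsed and why round-tripping the key through a text representation - as happens when it becomes a String flow file attribute - can change the bytes and therefore the parsed id (the exact mangling inside NiFi may differ from this simulation):

```python
import struct

def confluent_schema_id(payload: bytes) -> int:
    """Parse the schema id from a Confluent-framed message:
    1 magic byte (0) + 4-byte big-endian schema id + Avro body."""
    if not payload or payload[0] != 0:
        raise ValueError("missing Confluent magic byte")
    return struct.unpack(">I", payload[1:5])[0]

# Schema id 3545 is 0x00000DD9 on the wire.
key = b"\x00\x00\x00\x0d\xd9" + b"avro-body"
print(confluent_schema_id(key))  # -> 3545

# Simulated bytes->string->bytes round trip: non-ASCII bytes get re-encoded,
# shifting the 4-byte id field and corrupting the parsed schema id.
mangled = key.decode("latin-1").encode("utf-8")
print(confluent_schema_id(mangled))  # no longer 3545
```

This suggests keeping the key as raw bytes end to end; newer NiFi Kafka processors offer key handling strategies (e.g. wrapping the key into the record) that avoid the attribute round trip entirely.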
Labels:
- Apache NiFi
08-26-2024
05:27 AM
1 Kudo
Hello Experts, We have a couple of Event Hub consumers running on NiFi 1.16.3. The output connection is configured with the default backpressure thresholds - 10,000 flow files and 1 GB. But I see that 'ConsumeAzureEventHub' keeps consuming data even after crossing the backpressure threshold. What is the reason for this behavior and how can I fix it? @mattw Thanks, Mahendra
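One relevant detail: NiFi backpressure is a soft limit - it is evaluated when the processor is scheduled, not per record, so a processor that receives or writes events in batches can overshoot the configured threshold before it stops being scheduled. A rough illustrative sketch of that scheduling-time check (this is a simplification, not NiFi's actual implementation; all names are made up):

```python
def run_consumer(queue: list, fetch_batch, threshold: int = 10_000) -> None:
    """Backpressure is checked before each scheduled run, not per record,
    so the queue can exceed `threshold` by up to one whole batch."""
    while True:
        if len(queue) >= threshold:  # scheduling-time backpressure check
            return                   # stop scheduling until the queue drains
        batch = fetch_batch()
        if not batch:
            return
        queue.extend(batch)          # the whole batch lands, even if it overshoots

q: list = []
batches = iter([[0] * 6000, [0] * 6000, [0] * 6000])
run_consumer(q, lambda: next(batches, []))
print(len(q))  # -> 12000: the 10,000 threshold was crossed mid-run
```

With a push-style client like the Event Hub SDK the overshoot can be larger still, since events already handed to the processor are written out; sizing the threshold with that slack in mind (or throttling upstream) is the usual mitigation.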
Labels:
- Apache NiFi
05-30-2024
11:24 PM
1 Kudo
What an explanation! Cleared my doubts. Thank you so much @MattWho.
05-30-2024
08:08 AM
Hello Experts, I see this red highlighted number "2 (1)" on an Apache NiFi processor. Is this related to a background process (processor thread) failing, or something similar? I occasionally face an issue where this custom processor gets stuck, and I am trying to understand it. The processor just invokes an HTTP POST endpoint to upload a file. Any help/suggestions appreciated. Thanks, Mahendra
Labels:
- Apache NiFi
05-14-2024
01:13 AM
1 Kudo
@MattWho - I would appreciate it if you have any comment on this issue. Thanks in advance.
05-11-2024
02:13 AM
1 Kudo
Hello experts, I am facing an issue on one of the NiFi servers where we have multiple Consume Event Hub flows. The flow file repository disk is getting full, but the content and provenance repositories are not. I have attached a screenshot of all repository usage and the contents of the flowfile repo; the journals folder occupies a very large amount of data. nifi.properties (related to the flowfile repo):

nifi.flowfile.repository.always.sync=false
nifi.flowfile.repository.checkpoint.interval=2 mins
nifi.flowfile.repository.directory=/flowfile
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.partitions=256
nifi.flowfile.repository.retain.orphaned.flowfiles=true
nifi.flowfile.repository.wal.implementation=org.apache.nifi.wali.SequentialAccessWriteAheadLog

Can anyone help me understand what the issue is and how to resolve it? Thanks, Mahendra
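Growing journals usually mean the write-ahead log is not being checkpointed and compacted as fast as updates arrive. To quantify where the space is actually going before changing anything, a small helper (hypothetical, not part of NiFi) that sums on-disk bytes per top-level entry of the repository directory:

```python
from pathlib import Path

def usage_by_entry(repo_dir: str) -> dict[str, int]:
    """Bytes used by each top-level file/directory under the flowfile repo."""
    sizes: dict[str, int] = {}
    for entry in Path(repo_dir).iterdir():
        if entry.is_dir():
            # Recursively sum all regular files under this subdirectory
            sizes[entry.name] = sum(
                f.stat().st_size for f in entry.rglob("*") if f.is_file()
            )
        else:
            sizes[entry.name] = entry.stat().st_size
    return sizes

# e.g. usage_by_entry("/flowfile") -> {"journals": ..., "checkpoint": ..., ...}
```

Running this periodically shows whether journals shrink after each checkpoint interval (healthy) or grow monotonically (checkpointing is stalling, often worth a thread dump and a look at nifi-app.log around checkpoint times).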
Labels:
- Apache NiFi