Member since
01-16-2023
11
Posts
0
Kudos Received
0
Solutions
03-28-2023
04:50 AM
1 Kudo
PublishKafka writes messages only to the Kafka nodes that are leaders for a given topic:partition. It is then Kafka's internal job to keep the In-Sync Replicas (ISR) in sync with the leader. With respect to your question: when the publisher client starts, it sends a metadata request to one of the brokers listed in the bootstrap.servers configuration to obtain topic:partition details. That is how the client learns which brokers are the leaders for the topic's partitions, and the publisher client then writes to those leaders. With "Guarantee single node", if the Kafka broker that happened to be the leader for a topic:partition goes down, Kafka assigns a new leader for that topic:partition from the ISR list. Through the Kafka client setting metadata.max.age.ms, the producer refreshes its metadata and learns which broker is the new leader to produce to. If you found this response assisted with your issue, please take a moment and click on "Accept as Solution" below this post. Thank you
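To illustrate, here is a minimal sketch of the producer properties mentioned above, written as a plain Python dict. The broker hostnames are placeholders, and the values shown are Kafka's documented defaults rather than recommendations:

```python
# Hypothetical producer settings illustrating the properties discussed above.
# Hostnames are placeholders; values are Kafka defaults, not recommendations.
producer_config = {
    # Initial contact points, used only to fetch cluster metadata
    # (topic/partition leaders); writes then go directly to the leaders.
    "bootstrap.servers": "broker1:9092,broker2:9092",
    # How often the client forces a metadata refresh; after a leader
    # election this bounds how long the producer can take to learn
    # about the new leader (default: 300000 ms = 5 minutes).
    "metadata.max.age.ms": 300000,
    # "Guarantee single node" in PublishKafka corresponds to acks=1:
    # only the partition leader must acknowledge the write.
    "acks": "1",
}
```

Lowering metadata.max.age.ms shortens the window during which a producer may still try the old leader after a failover, at the cost of more frequent metadata requests.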
... View more
03-23-2023
11:43 AM
@srilakshmi Yes, Apache NiFi 1.9.0 was released over 4 years ago on February 19, 2019. Many bug fixes, improvements and security fixes have made their way into the product since then. The latest release as of this post is 1.20. While I can't verify 100% from what exists in this thread that you are experiencing NIFI-9688, the odds are pretty strong. You can find the release notes for Apache NiFi here: https://cwiki.apache.org/confluence/display/NIFI/Release+Notes#ReleaseNotes-Version1.20.0 If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped. Thank you, Matt
... View more
03-15-2023
09:20 AM
Hi, I am using NiFi to consume and publish data using Kafka. I have some queries on offset commits to ensure there is no data loss. Currently, per the NiFi logs, the ConsumerConfig shows "enable.auto.commit = false", which means the offsets are committed manually. Internally, are these commits happening synchronously or asynchronously in NiFi? How can I check this? Also, Kafka has two types of offsets: 1. Current offset (sent records) - used to avoid resending the same records to the same consumer. 2. Committed offset (processed records) - used to avoid resending the same records to a new consumer in the event of a partition rebalance. Regarding the committed offset: is it a kind of indication sent back to the Kafka broker by the consumer stating that the messages were consumed successfully? Is my understanding right? Any help would be appreciated.
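The distinction between the two offsets described above can be sketched with a toy model. This is not Kafka client code, just an illustration of the semantics: the position ("current offset") advances as records are fetched, while the committed offset only advances on an explicit commit:

```python
# Toy model (not Kafka client code) of current vs. committed offsets.
class ToyConsumer:
    def __init__(self):
        self.position = 0   # next record to fetch ("current offset")
        self.committed = 0  # last offset acknowledged as committed by the broker

    def poll(self, n):
        """Fetch the next n records; the broker will not resend these
        to *this* consumer because the position has advanced."""
        records = list(range(self.position, self.position + n))
        self.position += n
        return records

    def commit_sync(self):
        """Synchronous commit: blocks until the broker acknowledges.
        After a partition rebalance, a new consumer resumes from here."""
        self.committed = self.position

c = ToyConsumer()
c.poll(5)
assert c.position == 5 and c.committed == 0  # fetched but not yet committed
c.commit_sync()
assert c.committed == 5                      # safe restart point after rebalance
```

So yes, in this model the committed offset is exactly the consumer telling the broker "everything up to here has been processed"; a crash before commit_sync would make a new consumer reprocess the five records.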
... View more
Labels:
- Apache NiFi
03-14-2023
05:56 AM
@srilakshmi The PublishKafka and PublishKafkaRecord processors do not write any new attributes to the FlowFile when there is a failure. They simply log the failure to the nifi-app.log and route the FlowFile to the failure relationship. So there is no unique error written to the FlowFile that could be used for dynamic routing on failure. It could be expensive to write stack traces that come out of client code to NiFi FlowFiles, considering that FlowFile attributes/metadata reside in the NiFi heap memory. This may be a topic you want to raise in the Apache NiFi Jira as a feature/improvement request on these processors to get feedback from the Apache NiFi community committers. If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped. Thank you, Matt
... View more
03-01-2023
11:19 PM
Hi, I am observing that Kafka publishing fails in NiFi when I send one success message and one failure message alternately. The scenario we are testing: I send one success message and one failure message. By "failure message" I mean that I have removed the publish permission for the topic so that the publish fails. For success data the topic is svc_123; for failure data the topic is svc_456. I removed the publish permission for topic "svc_456" so that the publish fails and goes to the retry flow. When I send data alternately to "svc_123" and "svc_456", I see that the publish fails even for topic "svc_123". I am not sure why the publish fails for "svc_123" even though all permissions are granted, and the issue only appears in this alternating scenario. The NiFi logs do not give much information on why the publish fails for "svc_123". To add on: when I send 3 messages for svc_123 followed by 3 messages for svc_456, it works as expected; the issue is only seen when data is sent alternately. Any help would be appreciated. Thanks
... View more
Labels:
- Apache NiFi
01-17-2023
01:23 PM
@srilakshmi Logging does not happen at the process group level. Processor logging is based on the processor class, so there is nothing in the log output produced by a processor within a process group that will tell you in which process group that particular processor resides. That being said, you may be able to prefix every processor's name within the same process group with a string that identifies the process group. The processor name is generally included in the log output produced by the processor. Then you may be able to use logback filters (I have not tried this myself) to filter log output based on these unique strings: https://logback.qos.ch/manual/filters.html NiFi bulletins (bulletins are log output shown in the NiFi UI, with a rolling 5 minute life there) do, however, include details about the parent process group in which the component generating the bulletin resides. You could build a dataflow in your NiFi to handle bulletin notification through the use of the SiteToSiteBulletinReportingTask, which sends bulletins to a remote input port on a target NiFi. A dataflow on the target NiFi could then parse the received bulletin records by the bulletinGroupName JSON path property so that all records from the same PG are kept together. These 'like' records could then be written out to the local filesystem based on the PG name, sent to a remote system, used to send email notifications, etc... Example of what a bulletin sent using the SiteToSiteBulletinReportingTask looks like: {
"objectId" : "541dbd22-aa4b-4a1a-ad58-5d9a0b730e42",
"platform" : "nifi",
"bulletinId" : 2200,
"bulletinCategory" : "Log Message",
"bulletinGroupId" : "7e7ad459-0185-1000-ffff-ffff9e0b1503",
"bulletinGroupName" : "PG2-Bulletin",
"bulletinGroupPath" : "NiFi Flow / Matt's PG / PG2-Bulletin",
"bulletinLevel" : "DEBUG",
"bulletinMessage" : "UpdateAttribute[id=8c5b3806-9c3a-155b-ba15-260075ce9a6f] Updated attributes for StandardFlowFileRecord[uuid=1b0cb23a-75d8-4493-ba82-c6ea5c7d1ce3,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1672661850924-5, container=default, section=5], offset=969194, length=1024],offset=0,name=bulletin-${nextInt()).txt,size=1024]; transferring to 'success'",
"bulletinNodeId" : "e75bf99f-095c-4672-be53-bb5510b3eb5c",
"bulletinSourceId" : "8c5b3806-9c3a-155b-ba15-260075ce9a6f",
"bulletinSourceName" : "PG1-UpdateAttribute",
"bulletinSourceType" : "PROCESSOR",
"bulletinTimestamp" : "2023-01-04T20:38:27.776Z"
} In the above bulletin JSON you can see the bulletinGroupName and the bulletinMessage (the actual log output). If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped. Thank you, Matt
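As a rough sketch of the "group by bulletinGroupName" step described above, here is how the received bulletin records could be bucketed per process group. This is not NiFi code; the records are abbreviated stand-ins for the JSON shown in the example, and the field names match that example:

```python
import json
from collections import defaultdict

# Abbreviated stand-ins for bulletin records received from
# the SiteToSiteBulletinReportingTask.
records = [
    json.dumps({"bulletinGroupName": "PG2-Bulletin", "bulletinMessage": "msg A"}),
    json.dumps({"bulletinGroupName": "PG1-Other",    "bulletinMessage": "msg B"}),
    json.dumps({"bulletinGroupName": "PG2-Bulletin", "bulletinMessage": "msg C"}),
]

# Bucket log messages by their parent process group name.
by_group = defaultdict(list)
for rec in records:
    bulletin = json.loads(rec)
    by_group[bulletin["bulletinGroupName"]].append(bulletin["bulletinMessage"])
```

Each bucket could then be written to a per-PG file or fed into a notification flow, which is the "kept together" behavior described above.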
... View more
01-16-2023
11:17 PM
Currently I am observing that there is a size limit on the maximum number of characters in the "Log Prefix" property of the LogAttribute NiFi processor. Is there a way I can increase the log prefix size?
... View more
Labels:
- Apache NiFi