Member since
06-26-2015
515
Posts
138
Kudos Received
114
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2257 | 09-20-2022 03:33 PM |
| | 6010 | 09-19-2022 04:47 PM |
| | 3236 | 09-11-2022 05:01 PM |
| | 3702 | 09-06-2022 02:23 PM |
| | 5772 | 09-06-2022 04:30 AM |
08-10-2022
01:25 AM
Then the service will be unavailable until you recover at least one of them.
08-08-2022
03:00 AM
Hi @araujo , Thanks a lot for your answer. Yes, it worked for me after fixing the user. I had to enable the native password plugin for the user, so I ran this command "ALTER USER 'replication'@'localhost' IDENTIFIED WITH mysql_native_password BY 'somepassword' " to fix the user. (One other observation while connecting through a SQL client: if I just leave the database name empty, it connects with the replication user, but if any db name is provided, it throws an error.)
08-06-2022
09:02 AM
@shrikantbm & team, Yes, in this case we need to check the cleanup.policy of the topic __consumer_offsets. If the existing cleanup.policy=compact, the log segments of this topic will not be deleted. You should follow the steps below to diagnose and resolve this issue:

1) Check the current cleanup.policy of the topic __consumer_offsets. You can check it using the command:
kafka-topics.sh --bootstrap-server <broker-hostname:9092> --describe --topic __consumer_offsets
OR
kafka-topics.sh --zookeeper <zookeeper-hostname:2181> --describe --topics-with-overrides

2) If you want the old log segments of this topic to be cleared, set the policy to cleanup.policy=compact,delete together with retention.ms=<30days>:
- compact = when a Kafka log segment is rolled over, it will be compacted.
- delete = once the retention limit is reached, the older log segments will be removed. For "delete" to take effect, the broker property "log.cleaner.enable" must be set to "true".
- retention.ms=<30days> = old log segments will be deleted after 30 days. Note: 30 days is just an example, and this setting is in milliseconds. You should set it per your requirements after checking with the application team and their needs.

After configuring this cleanup policy, data will be deleted as per retention.ms as suggested above. If you do not set retention.ms, old log segments will be deleted as per the retention period set in CM / Ambari >> Kafka >> Configuration. The setting is log.retention.hours = <7 days default> in CM >> Kafka; check what it is in your case, so that log segments older than 7 days will be deleted. Kafka keeps checking for old log segments at the interval set by log.retention.check.interval.ms.

Important note: with "delete" on consumer offsets you may lose offsets, which can lead to duplication/data loss. So check it with your application team before setting a deletion policy.

3) If you still face the same issue, then the broker logs need to be reviewed for the root cause, and changes made accordingly.

If you found this information helped with your query, please take a moment to log in and click on KUDOS 🙂 and "Accept as Solution" below this post. Thank you.
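As a sketch of step 2, the topic-level override can be applied with kafka-configs.sh. The broker address and the 30-day value below are placeholders, not values from the thread; adjust them for your cluster:

```shell
# Compute 30 days in milliseconds for retention.ms
RETENTION_MS=$((30*24*60*60*1000))
echo "retention.ms=${RETENTION_MS}"

# Apply compact+delete plus the retention override to __consumer_offsets
# (broker1:9092 is a placeholder for one of your brokers)
kafka-configs.sh --bootstrap-server broker1:9092 \
  --entity-type topics --entity-name __consumer_offsets \
  --alter --add-config "cleanup.policy=[compact,delete],retention.ms=${RETENTION_MS}"
```

You can verify the override afterwards with `kafka-configs.sh --bootstrap-server broker1:9092 --entity-type topics --entity-name __consumer_offsets --describe`.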
08-04-2022
05:28 AM
I don't think there is. At least, I can't think of one.
08-03-2022
07:59 PM
1 Kudo
I created an XMLRecordSetWriter in the Controller Services; then, using a ConvertRecord processor, I'm able to read the XML record and immediately write it out with a new root tag, which I can then pass to my next processor. I discovered this while reading the documentation for the XMLRecordSetWriter. Very first line in the documentation. 😃 https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.0/org.apache.nifi.xml.XMLRecordSetWriter/index.html
08-03-2022
04:24 PM
@AmiJhones , I've answered a similar question before: https://community.cloudera.com/t5/Support-Questions/Json-Jolt-to-Remove-All-Nulls-from-Json/td-p/336008 Cheers, André
08-03-2022
04:51 AM
@AbhishekSingh , The CaptureChangeMySQL processor acts as a MySQL replication slave, so it requires the REPLICATION SLAVE privilege in MySQL. Follow the steps in the documentation and create a separate user to use in your processor. Cheers, André
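A minimal sketch of creating such a dedicated user in MySQL (the user name, host, and password here are placeholders, not from the processor docs; REPLICATION CLIENT is commonly granted alongside REPLICATION SLAVE for CDC tools):

```sql
-- Hypothetical dedicated CDC user; name, host, and password are placeholders
CREATE USER 'cdc_user'@'%' IDENTIFIED WITH mysql_native_password BY 'change_me';

-- CaptureChangeMySQL reads the binlog as a replica, so grant replication privileges
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'cdc_user'@'%';
FLUSH PRIVILEGES;
```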
08-01-2022
11:19 AM
1 Kudo
Thank you so much! I was looking around for a way to add arbitrary properties to nifi-registry.properties and never hit on the right wording to find that one. Yep, I figured I could always add the file manually if I could just do what you showed.
08-01-2022
06:22 AM
1 Kudo
@Brenigan ,
1. It depends on the context and the nesting level of &n. In the example above, &1 returns the element in the transports array (e.g. "PUSH"), while &2 returns the numeric index of that element in the array (e.g. 0).
2. &4 and &2 are numeric array indexes. outer[&4] means that the output will be placed at the &4 position of an array called outer. That element of the array will have an attribute called inner, and the &2 position of the inner array will have two attributes, t and etc, with the specified values.
Cheers, André
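As a hypothetical minimal illustration of how &n resolves (this spec is not the one from the question; the number after & counts levels up from where it is used):

```json
[
  {
    "operation": "shift",
    "spec": {
      "transports": {
        "*": "out[&0]"
      }
    }
  }
]
```

Applied to {"transports": ["PUSH", "EMAIL"]}, &0 resolves to the matched array index, so each element lands at the same position in an array called out, giving {"out": ["PUSH", "EMAIL"]}. At that same spot, &1 would instead resolve to the key one level up, i.e. "transports".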
08-01-2022
05:49 AM
@AbhishekSingh
1. @araujo 's response is 100% correct.
2. Just to add to @araujo 's response here... NiFi Registry has nothing to do with controlling what users can and can't do on the NiFi canvas. If it is installed, it simply allows users to version control process groups. Even once a NiFi process group has been version controlled, authorized users in NiFi can still make changes to dataflows (even those that are version controlled). Once they make a change to a version-controlled process group, that process group will indicate that a local change has been made, and the authorized user will have the option to commit that local change as a new version of the dataflow.
Controlling what users can do with dataflows is handled via authorization policies, which NiFi handles very granularly. Authenticated users can be restricted to only specific process groups. Your NiFi admin user can set up NiFi authorization for other users per process group by selecting the process group and clicking on the "key" icon in the "operate" panel on the left side of the NiFi canvas.
If you found any of the responses provided assisted with your query, please take a moment to login and click on "Accept as Solution" below each of those posts.
Thank you, Matt