Member since: 06-26-2015
Posts: 509
Kudos Received: 136
Solutions: 114
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1332 | 09-20-2022 03:33 PM |
| | 3868 | 09-19-2022 04:47 PM |
| | 2274 | 09-11-2022 05:01 PM |
| | 2371 | 09-06-2022 02:23 PM |
| | 3727 | 09-06-2022 04:30 AM |
08-14-2022
03:51 PM
1 Kudo
Hi @LorencH , I'm of the opinion that if security is a concern (as it should be for any deployment), you should never rely on the permissions that come inside the tarball. Your deployment procedure, automated or not, should always extract the files and then explicitly "chown" and "chmod" the appropriate files to set the desired ownership and permissions. I don't know the reasons for eliminating the tarball, though. Cheers, André
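For illustration, a minimal shell sketch of that approach. The paths, user, group, and modes below are hypothetical; adjust them to your own deployment layout and security policy.

```bash
# Extract, then set ownership and permissions explicitly rather than
# trusting whatever modes were packed into the tarball.
tar -xzf app-release.tar.gz -C /opt/app        # hypothetical archive and target
chown -R appuser:appgroup /opt/app             # hypothetical owner and group
find /opt/app -type d -exec chmod 750 {} +     # directories: traversable by group
find /opt/app -type f -exec chmod 640 {} +     # files: no execute bit by default
chmod 750 /opt/app/bin/*                       # restore execute on known binaries
```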
08-11-2022
04:05 AM
Thanks, André. I will try it and get back to you. I appreciate the quick response.
08-10-2022
01:25 AM
Then the service will be unavailable until you recover at least one of them.
08-08-2022
03:00 AM
Hi @araujo , Many thanks for your answer. Yes, it worked for me after fixing the user. I had to enable the native password plugin for the user, so I ran this command to fix it: "ALTER USER 'replication'@'localhost' IDENTIFIED WITH mysql_native_password BY 'somepassword'". (One other observation while connecting through a SQL client: if I leave the database name empty, it connects with the replication user, but if any database name is provided, it throws an error.)
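For reference, a minimal sketch of the same fix run from the shell. The user, host, and password are the placeholders from the post above; the second command verifies which authentication plugin the user now has.

```bash
# Switch the replication user to the native password plugin, then verify.
mysql -u root -p -e "ALTER USER 'replication'@'localhost' IDENTIFIED WITH mysql_native_password BY 'somepassword';"
mysql -u root -p -e "SELECT user, host, plugin FROM mysql.user WHERE user = 'replication';"
```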
08-06-2022
09:02 AM
@shrikantbm and team, Yes, in this case we need to check the cleanup.policy of the topic __consumer_offsets. If the existing cleanup.policy=compact, then the log segments of this topic will not be deleted. Follow the steps below to diagnose and resolve the issue:
1) Check the current cleanup.policy of the topic __consumer_offsets, using either:
kafka-topics.sh --bootstrap-server <broker-hostname>:9092 --describe --topic __consumer_offsets
or:
kafka-topics.sh --zookeeper <zookeeper-hostname>:2181 --describe --topics-with-overrides
Note: __consumer_offsets is the topic for which you are facing the issue.
2) If you want to clear the old log segments of this topic, set cleanup.policy=compact,delete and add a retention.ms override (see the command sketch after this post):
compact = when a Kafka log segment is rolled over, it will be compacted.
delete = once retention.ms is reached, the older log segments will be removed.
retention.ms = for example, a 30-day value means old log segments are deleted after 30 days. Note: 30 days is just an example, and the setting is in milliseconds; choose a value that suits your requirements after checking with the application team.
For "delete" to take effect, the broker property "log.cleaner.enable" must be set to "true". After configuring this cleanup policy, data will be deleted per retention.ms as suggested above. If you do not set retention.ms, old log segments are deleted per the retention period set in CM / Ambari >> Kafka >> Configuration. The setting is log.retention.hours (7 days by default) in CM >> Kafka; check what it is in your case, so that, for example, segments older than 7 days are deleted. Kafka checks for old log segments at the interval given by log.retention.check.interval.ms.
Important note: enabling "delete" on consumer offsets means you may lose offsets, which can lead to duplication or data loss. Check with your application team before setting a deletion policy.
3) If you still face the same issue, review the broker logs for the root cause and make changes accordingly.
If this information helped with your query, please take a moment to log in and click on KUDOS 🙂 and "Accept as Solution" below this post. Thank you.
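A minimal sketch of the commands for steps 1 and 2 above, assuming the stock Kafka CLI tools. The broker hostname is a placeholder, and 2592000000 ms corresponds to the 30-day example; the square brackets are kafka-configs.sh syntax for a value that itself contains commas.

```bash
# Step 1: inspect the current cleanup.policy on __consumer_offsets.
kafka-topics.sh --bootstrap-server <broker-hostname>:9092 \
  --describe --topic __consumer_offsets

# Step 2: switch to compact+delete and add a 30-day retention override.
kafka-configs.sh --bootstrap-server <broker-hostname>:9092 \
  --entity-type topics --entity-name __consumer_offsets \
  --alter --add-config 'cleanup.policy=[compact,delete],retention.ms=2592000000'
```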
08-04-2022
05:28 AM
I don't think there is. At least, I can't think of one.
08-03-2022
07:59 PM
1 Kudo
I created an XMLRecordSetWriter in the Controller Services; then, using a ConvertRecord processor, I'm able to read the XML record and immediately write it out with a new root tag, which I can then pass to the next processor. I discovered this while reading the documentation for the XMLRecordSetWriter. Very first line in the documentation. 😃 https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-record-serialization-services-nar/1.7.0/org.apache.nifi.xml.XMLRecordSetWriter/index.html
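For illustration, assuming the writer's "Name of Root Tag" and "Name of Record Tag" properties are set to newRoot and record, a hypothetical single-record input like:

```xml
<oldRoot><record><field>value</field></record></oldRoot>
```

would come out of ConvertRecord roughly as:

```xml
<newRoot><record><field>value</field></record></newRoot>
```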
08-03-2022
04:24 PM
@AmiJhones , I've answered a similar question before: https://community.cloudera.com/t5/Support-Questions/Json-Jolt-to-Remove-All-Nulls-from-Json/td-p/336008 Cheers, André
08-03-2022
04:51 AM
@AbhishekSingh , The CaptureChangeMySQL processor acts as a MySQL replication slave, so it requires the REPLICATION SLAVE privilege in MySQL. Follow the steps in the documentation and create a separate user to use in your processor. Cheers, André
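A minimal sketch of creating such a dedicated user, run from the shell. The user name, host, and password are hypothetical, and the exact set of grants should be taken from the processor documentation; REPLICATION CLIENT is commonly granted alongside REPLICATION SLAVE for CDC setups.

```bash
# Create a dedicated CDC user for CaptureChangeMySQL (hypothetical credentials).
mysql -u root -p <<'SQL'
CREATE USER 'nifi_cdc'@'%' IDENTIFIED BY 'somepassword';
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'nifi_cdc'@'%';
FLUSH PRIVILEGES;
SQL
```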