Member since: 03-10-2017
Posts: 170
Kudos Received: 80
Solutions: 32
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1382 | 08-12-2024 08:42 AM |
|  | 2210 | 05-30-2024 04:11 AM |
|  | 2970 | 05-29-2024 06:58 AM |
|  | 2044 | 05-16-2024 05:05 AM |
|  | 1455 | 04-23-2024 01:46 AM |
06-30-2023
06:25 AM
@wallacei That is correct. NiFi is not part of CDP Base. You need a trial for CFM.
06-08-2023
08:08 AM
@JeffB Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
04-27-2023
11:12 PM
Raised an HDFS thread for the above issue: https://community.cloudera.com/t5/Support-Questions/second-replica-is-not-found-while-writing-a-simple-file-to/td-p/369684
03-29-2023
06:05 AM
I'm afraid that GenerateTableFetch does not help me either, because it uses the highest value of a column to determine whether new records have been added in SQL Server. In my case, I'm trying to extract already existing records that have recently been modified. If there is no processor that can deal with this directly, I may add a column to the table view that shows when the record was last modified (e.g. LastModificationDateTime) and use that column as the "Maximum-value Columns" value.

Back to the second part of my doubt, regarding updating this record in HDFS, how can I deal with it? I have the following approach in mind:

1. Fetch and read the Parquet file where the "ChangedID" was stored.
2. Modify the record in that file.
3. Save or replace that file in HDFS.

Is this approach correct? Is there any better solution?
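For reference, a minimal Java sketch of that three-step approach using the parquet-avro API; the file path, the ChangedID value, and the column name below are placeholder assumptions, and because Parquet files are immutable the whole file has to be rewritten rather than edited in place:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetReader;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.util.HadoopInputFile;

public class ParquetRecordRewrite {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path original = new Path("/data/records.parquet");       // placeholder path
        Path rewritten = new Path("/data/records.parquet.tmp");

        // Step 1: read every record from the existing file.
        List<GenericRecord> records = new ArrayList<>();
        Schema schema = null;
        try (ParquetReader<GenericRecord> reader = AvroParquetReader
                .<GenericRecord>builder(HadoopInputFile.fromPath(original, conf))
                .build()) {
            GenericRecord rec;
            while ((rec = reader.read()) != null) {
                if (schema == null) schema = rec.getSchema();
                // Step 2: modify the matching record in memory.
                if (Integer.valueOf(42).equals(rec.get("ChangedID"))) { // placeholder ID
                    rec.put("SomeColumn", "updated value");             // placeholder column
                }
                records.add(rec);
            }
        }

        // Step 3: write a new file, then swap it in place of the old one.
        try (ParquetWriter<GenericRecord> writer = AvroParquetWriter
                .<GenericRecord>builder(rewritten)
                .withSchema(schema)
                .build()) {
            for (GenericRecord rec : records) writer.write(rec);
        }
        FileSystem fs = FileSystem.get(conf);
        fs.delete(original, false);
        fs.rename(rewritten, original);
    }
}
```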
03-28-2023
04:50 AM
1 Kudo
PublishKafka writes messages only to the Kafka brokers that are leaders for a given topic partition; it is Kafka's internal job to keep the In-Sync Replicas (ISR) in sync with their leader. So with respect to your question: when the publisher client starts, it sends a metadata request to the bootstrap servers listed in the bootstrap.servers configuration to get the topic-partition details. That is how the client knows which brokers are the leaders for the given topic partitions, and the publisher client writes only to those leaders.

With "Guarantee single node", if a Kafka broker node goes down and it happened to be the leader for a topic partition, Kafka assigns a new leader for that partition from the ISR list. Through the Kafka client setting metadata.max.age.ms, the producer periodically refreshes its metadata and learns which broker is the new leader to produce to.

If you found this response assisted with your issue, please take a moment and click on "Accept as Solution" below this post. Thank you
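As a rough illustration of where these settings live, here is a plain Java producer sketch; the broker addresses and topic name are placeholders, and the mapping of NiFi's "Guarantee single node" to acks=1 is stated as an assumption:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class LeaderAwareProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Bootstrap servers are only used to fetch cluster metadata;
        // actual writes always go to the current partition leader.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholders
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Assumed equivalent of NiFi's "Guarantee single node": ack from the leader only.
        props.put("acks", "1");
        // Refresh metadata at least every 30s so the producer discovers
        // a newly elected leader after a broker failure.
        props.put("metadata.max.age.ms", "30000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value")); // placeholder topic
        }
    }
}
```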
03-28-2023
04:34 AM
@swanifi, Welcome to our community! To help you get the best possible answer, I have tagged our NiFi experts @ckumar, @MattWho, @SAMSAL, and @cotopaul, who may be able to assist you further. Please feel free to provide any additional information or details about your query, and we hope that you will find a satisfactory solution to your question.
03-27-2023
06:59 AM
Thanks, I am getting the following error:

ERROR org.apache.nifi.processors.parquet.PutParquet: PutParquet[id=c6dee132-cb63-3b8b-9148-ec10de8044c4] HDFS Configuration error - java.lang.IllegalArgumentException: Can't get Kerberos realm: java.lang.IllegalArgumentException: KrbException: Cannot locate default realm
↳ causes: java.lang.IllegalArgumentException: Can't get Kerberos realm
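In case it helps narrow this down: that KrbException is typically raised when the JVM cannot find a krb5.conf defining a default realm. A minimal sketch, assuming the NiFi JVM needs to be pointed at an explicit Kerberos configuration (the path, realm, and KDC below are placeholders):

```java
public class KerberosRealmCheck {
    public static void main(String[] args) {
        // Point the JVM at an explicit Kerberos config before any login
        // attempt; without one, "Cannot locate default realm" is thrown.
        System.setProperty("java.security.krb5.conf", "/etc/krb5.conf"); // placeholder path
        // Alternatively, define the realm and KDC directly (placeholders):
        // System.setProperty("java.security.krb5.realm", "EXAMPLE.COM");
        // System.setProperty("java.security.krb5.kdc", "kdc.example.com");
        System.out.println(System.getProperty("java.security.krb5.conf"));
    }
}
```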
03-17-2023
07:30 AM
@anoop89 This is an issue unrelated to the original thread. Please start a new question. Feel free to tag @ckumar and @MattWho in your question so we get notified. This issue is related to the authorization of your user. Thanks, Matt
03-05-2023
11:24 PM
@FROZEN2, if the reply has resolved your issue, can you please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future?
02-28-2023
04:07 AM
1 Kudo
This seems to be the same question discussed at https://community.cloudera.com/t5/Support-Questions/Passing-list-of-directories-to-ListHdfs-Processor/td-p/364798

Addressing a limited number of tables means, in HDFS terms, limiting the number of files. The listing strategy can only be controlled through the File Filter and file-filter-mode properties, which determine what can be listed. The listing process has two steps:

- What to list: controlled by the filter.
- Where to list: in your case, the subdirectories under the root are widespread and the number of nested subdirectories is huge, so the processor spends its time on the recursive search.

If you found this response assisted with your issue, please take a moment to log in and click on "Accept as Solution" below this post. Thank you, Chandan
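To make the two steps concrete, here is a minimal Java sketch against the plain Hadoop FileSystem API rather than NiFi's internals; the /data path and the .*\.parquet filter are placeholder assumptions. The recursive walk over every nested subdirectory happens regardless; the filter only narrows what is returned:

```java
import java.util.regex.Pattern;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class FilteredHdfsListing {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // "What to list": the filter only narrows the results.
        Pattern fileFilter = Pattern.compile(".*\\.parquet"); // placeholder filter
        // "Where to list": the recursive walk still visits every
        // nested subdirectory, which is where the time goes.
        RemoteIterator<LocatedFileStatus> it =
                fs.listFiles(new Path("/data"), /* recursive = */ true); // placeholder path
        while (it.hasNext()) {
            LocatedFileStatus status = it.next();
            if (fileFilter.matcher(status.getPath().getName()).matches()) {
                System.out.println(status.getPath());
            }
        }
    }
}
```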