Member since: 03-10-2017
Posts: 123
Kudos Received: 47
Solutions: 24

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 301 | 06-05-2023 04:32 AM
 | 546 | 04-24-2023 05:51 AM
 | 426 | 03-28-2023 04:50 AM
 | 608 | 03-28-2023 04:19 AM
 | 733 | 03-27-2023 05:15 AM
08-31-2023
05:01 AM
You can evaluate the following for a better approach:
1. Batch use case with ListSFTP/FetchSFTP
2. ExecuteStateless processor
Thank you
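For illustration, here is a minimal Java sketch of the list-then-fetch pattern that ListSFTP/FetchSFTP implement, written outside NiFi with the JSch library (host, credentials, and directory paths are placeholder assumptions):

```java
import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;
import java.util.Vector;

public class SftpListFetch {
    public static void main(String[] args) throws Exception {
        // Rough analogue of the ListSFTP -> FetchSFTP split:
        // enumerate the remote directory first, then fetch each entry.
        JSch jsch = new JSch();
        Session session = jsch.getSession("user", "sftp.example.com", 22); // placeholders
        session.setPassword("secret");
        session.setConfig("StrictHostKeyChecking", "no"); // demo only
        session.connect();

        ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
        sftp.connect();
        Vector<ChannelSftp.LsEntry> entries = sftp.ls("/incoming"); // the "list" step
        for (ChannelSftp.LsEntry e : entries) {
            if (!e.getAttrs().isDir()) {
                sftp.get("/incoming/" + e.getFilename(),            // the "fetch" step
                         "/tmp/" + e.getFilename());
            }
        }
        sftp.disconnect();
        session.disconnect();
    }
}
```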
08-30-2023
06:02 AM
2 Kudos
InvokeHTTP does not provide a way to read user authentication credentials, in the form of secrets, from files or environment variables.
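For contrast, a plain Java HTTP client can pull such a secret from the environment at runtime; a minimal sketch, assuming a hypothetical API_TOKEN variable and a placeholder URL:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EnvAuthRequest {
    public static void main(String[] args) throws Exception {
        // Read the secret from an environment variable instead of a
        // processor property. API_TOKEN is an assumed variable name.
        String token = System.getenv("API_TOKEN");
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api")) // placeholder URL
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```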
06-30-2023
02:37 AM
1 Kudo
You need paywall credentials to get the CFM parcels. You can get the paywall credentials from your contact on the Cloudera Accounts team. I hope this helps. Thank you
06-05-2023
04:32 AM
Partition-level HDFS directory disk usage is not available, since this works on a given directory path only and not at the disk level. Thank you
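To show what is available at the directory-path level, a small sketch using the Hadoop FileSystem API (the path is a placeholder); this mirrors what `hdfs dfs -du -s` reports:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DirUsage {
    public static void main(String[] args) throws Exception {
        // Usage is reported for a given directory path, not per disk/partition.
        try (FileSystem fs = FileSystem.get(new Configuration())) {
            ContentSummary summary =
                    fs.getContentSummary(new Path("/user/hive/warehouse")); // placeholder
            System.out.println("bytes in files:    " + summary.getLength());
            System.out.println("with replication:  " + summary.getSpaceConsumed());
        }
    }
}
```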
04-24-2023
05:51 AM
1 Kudo
This is not a permission issue at this point, but more of an issue between the NameNode and DataNode. I would request that you start a new thread for HDFS. Thank you
04-24-2023
02:22 AM
1 Kudo
Looking at the error snippet, this seems to be an HDFS-level issue, but just to make sure: I assume you are using the NiFi PutHDFS processor to write into the HDFS cluster. I would check the following:
1. Check whether the processor is configured with the latest copies of the hdfs-site.xml and core-site.xml files under Hadoop Configuration Resources.
2. Try to write into the same HDFS location from an HDFS client outside of NiFi, and see whether this works, to isolate whether this is an HDFS issue or a configuration issue on the NiFi processor end (see the sketch below).
Thank you
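A minimal sketch of step 2, writing through the plain HDFS client with the same site files the processor points at (file paths are placeholders for your cluster):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Load the same site files the PutHDFS processor is configured with.
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml")); // placeholder
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml")); // placeholder

        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.create(new Path("/tmp/nifi-write-test.txt"))) {
            out.writeUTF("write test outside NiFi");
        }
        // If this succeeds, the problem is likely on the NiFi processor side.
        System.out.println("Write succeeded.");
    }
}
```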
04-12-2023
06:33 AM
There is no specific processor built only for Oracle, but if you are talking about Oracle DB, then one can use ExecuteSQL/PutSQL with a DBCPConnectionPool controller service. DBCPConnectionPool is a generic implementation for connecting to any database; it requires a local copy of the database-specific client driver and the driver class name. Please refer to https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.21.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html If you found this response assisted with your issue, please take a moment to log in and click on "Accept as Solution" below this post. Thank you, Chandan
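Under the hood, DBCPConnectionPool needs the same three pieces a plain JDBC connection does: driver class, connection URL, and credentials. A minimal sketch with placeholder host, service, and credentials, assuming the ojdbc driver jar is on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OracleJdbcCheck {
    public static void main(String[] args) throws Exception {
        // Driver class name, as you would enter it in DBCPConnectionPool.
        Class.forName("oracle.jdbc.OracleDriver");
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "scott", "tiger");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1 FROM dual")) {
            if (rs.next()) {
                System.out.println("Connected: " + rs.getInt(1));
            }
        }
    }
}
```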
03-29-2023
05:47 AM
No. You can evaluate GenerateTableFetch --> ExecuteSQL, where the GenerateTableFetch "Maximum-value Columns" setting can help. Refer to https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.20.0/org.apache.nifi.processors.standard.GenerateTableFetch/index.html If you found this response assisted with your issue, please take a moment and click on "Accept as Solution" below this post. Thank you
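A rough sketch of the incremental pattern that "Maximum-value Columns" gives you: remember the highest value seen and query only for newer rows on the next run. Table and column names here are illustrative assumptions:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class IncrementalFetch {
    // Fetch only rows whose id exceeds the last maximum seen, as
    // GenerateTableFetch does with a maximum-value column.
    public static long fetchNewRows(Connection conn, long lastMaxId) throws SQLException {
        String sql = "SELECT id, payload FROM source_table WHERE id > ? ORDER BY id";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, lastMaxId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    lastMaxId = rs.getLong("id"); // track the new maximum
                    process(rs.getString("payload"));
                }
            }
        }
        return lastMaxId; // persist for the next run (NiFi keeps this as processor state)
    }

    private static void process(String payload) {
        System.out.println(payload);
    }
}
```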
03-29-2023
03:00 AM
The CaptureChangeMySQL processor would be a fit for your requirement. If you found this response assisted with your issue, please take a moment and click on "Accept as Solution" below this post. Thank you
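For background, CaptureChangeMySQL streams change events from the MySQL binary log. Roughly the same mechanism can be sketched with the mysql-binlog-connector-java library (host and credentials are placeholders, and the server must have the binlog enabled with binlog_format=ROW):

```java
import com.github.shyiko.mysql.binlog.BinaryLogClient;

public class BinlogTail {
    public static void main(String[] args) throws Exception {
        // Connect as a replication client and stream binlog change events.
        BinaryLogClient client =
                new BinaryLogClient("dbhost", 3306, "repl_user", "repl_pass"); // placeholders
        client.registerEventListener(event ->
                System.out.println("binlog event: " + event));
        client.connect(); // blocks, streaming inserts/updates/deletes as they happen
    }
}
```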
03-28-2023
04:50 AM
1 Kudo
PublishKafka writes messages only to the Kafka nodes that are leaders for a given topic partition; it is then Kafka's internal job to keep the in-sync replicas (ISR) in sync with their leader. So, with respect to your question: when the publisher client starts, it sends a metadata request to the bootstrap servers listed in the bootstrap.servers configuration to get metadata about topic-partition details. That is how the client knows which nodes are the leaders for the given topic partitions, and the publisher client writes to those leaders. With "Guarantee Single Node" delivery, if a Kafka broker node that happened to be the leader for a topic partition goes down, Kafka will assign a new leader from the ISR list for that topic partition, and through the Kafka client setting metadata.max.age.ms the producer refreshes its metadata and learns which node is the next leader to produce to. If you found this response assisted with your issue, please take a moment and click on "Accept as Solution" below this post. Thank you
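The same flow in plain Kafka producer terms; a minimal sketch where the broker list and topic are placeholders (acks=1 corresponds to "Guarantee Single Node" delivery):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class LeaderAwareProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Used only to fetch initial metadata: which node leads which partition.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholders
        // acks=1: the partition leader alone acknowledges the write.
        props.put("acks", "1");
        // Upper bound on metadata staleness; a refresh is how the producer
        // learns about a newly elected leader after a broker failure.
        props.put("metadata.max.age.ms", "30000");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The client routes this record to the current leader of its partition.
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}
```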