Member since: 11-17-2021
Posts: 1128
Kudos Received: 257
Solutions: 29

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3057 | 11-05-2025 10:13 AM |
| | 494 | 10-16-2025 02:45 PM |
| | 1056 | 10-06-2025 01:01 PM |
| | 829 | 09-24-2025 01:51 PM |
| | 632 | 08-04-2025 04:17 PM |
12-26-2024 03:12 PM
1 Kudo
Hi @Maansh! Were you able to use CodeCommit as NiFi's Git repository? Could you share how you did it? Regards!
12-23-2024 05:18 PM
1 Kudo
@Riyadbank Welcome to the Cloudera Community! To help you get the best possible solution, I have tagged our Hue experts @dturnau @nramanaiah, who may be able to assist you further. Please keep us updated on your post, and we hope you find a satisfactory solution to your query.
12-23-2024 06:50 AM
1 Kudo
@BK84 I suggest starting a new community question for your new query. When you start your new question, please provide more detail on your ask; it is not clear what you mean by "trigger". What is the use case you are trying to implement? Thank you, Matt
12-20-2024 11:31 AM
1 Kudo
Thanks for the references @MattWho. For now I've implemented a workaround with counters, querying the nifi-api with HTTP requests to get around this (sketch below). It's definitely not a bulletproof implementation, with corner cases that need to be handled separately, but it's a start to build off of.
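For anyone interested, the polling side of the workaround looks roughly like this; a minimal sketch, assuming bearer-token auth, a NiFi at localhost:8443, jq available, and a counter named "my-flow-counter" (all placeholders, not my actual setup):

```bash
# Fetch all counters via the standard NiFi REST endpoint GET /nifi-api/counters,
# then filter for the counter of interest. TOKEN, host/port, and the counter
# name are placeholders; -k skips cert verification for self-signed certs.
TOKEN="<access-token>"
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "https://localhost:8443/nifi-api/counters" \
  | jq '.counters.aggregateSnapshot.counters[] | select(.name == "my-flow-counter")'
```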
12-19-2024 12:26 PM
@PeterC Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
12-18-2024 05:29 PM
@Bharati Any updates? I have tried the REST API method. Getting my app's expiry time works just fine, but the PUT request gets a 401 Unauthorized error. It's a shared cluster and I don't have admin-level authorization.
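For reference, the two calls look roughly like this, assuming this thread concerns the YARN ResourceManager application-timeouts REST API (the host, port, and application ID below are placeholders):

```bash
# Reading the expiry time typically works without elevated rights.
curl -s "http://rm-host:8088/ws/v1/cluster/apps/application_1734000000000_0001/timeouts"

# Updating it is a PUT guarded by ACLs; on a shared cluster this is where a
# 401/403 shows up if the caller lacks permission on the application or queue.
curl -s -X PUT -H "Content-Type: application/json" \
  -d '{"timeout": {"type": "LIFETIME", "expiryTime": "2025-01-01T00:00:00.000+0000"}}' \
  "http://rm-host:8088/ws/v1/cluster/apps/application_1734000000000_0001/timeout"
```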
12-17-2024 04:48 PM
1 Kudo
@Shelton Thank you for your advice. Since I use the latest version of NiFi, which requires Java 21, I added the following line to bootstrap.conf (see the excerpt below for where it sits) and confirmed the warning messages disappeared: java.arg.EnableNativeAccess=--enable-native-access=ALL-UNNAMED I appreciate your help. Thank you,
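For context, the line sits alongside the other JVM arguments in conf/bootstrap.conf; a rough excerpt, where the surrounding heap settings are illustrative defaults rather than a copy of my file:

```properties
# conf/bootstrap.conf (excerpt; java.arg numbering and heap values vary by install)
java.arg.2=-Xms512m
java.arg.3=-Xmx512m
# Added for Java 21 to silence the native-access warnings:
java.arg.EnableNativeAccess=--enable-native-access=ALL-UNNAMED
```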
12-17-2024 12:41 PM
1 Kudo
@JSSSS The error is: "java.io.IOException: File /user/JS/input/DIC.txt._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation."

According to the log, all 3 DataNodes are excluded: excludeNodes=[192.168.1.81:9866, 192.168.1.125:9866, 192.168.1.8> With a replication factor of 3, the write must succeed on all 3 DataNodes, otherwise it fails. HDFS cannot use these nodes, possibly due to:
- disk space issues,
- write errors or disk failures, or
- network connectivity problems between the NameNode and DataNodes.

1. Verify the DataNodes are live and connected to the NameNode: `hdfs dfsadmin -report`. Look for the "Live nodes" and "Dead nodes" sections; if all 3 DataNodes are excluded, they might show up as dead or decommissioned.

2. Ensure the DataNodes have sufficient disk space for the write operation: `df -h`. Look at the HDFS data directories (/hadoop/hdfs/data). If disk space is full, clear unnecessary files or increase disk capacity: `hdfs dfs -rm -r /path/to/old/unused/files`

3. View the list of excluded nodes: `cat $HADOOP_HOME/etc/hadoop/datanodes.exclude`. If nodes are wrongly excluded, remove their entries from datanodes.exclude and refresh the NameNode to apply the changes: `hdfs dfsadmin -refreshNodes`

4. Block placement policy: if the cluster has DataNodes with specific restrictions (e.g., rack awareness), verify the block placement policy: `grep dfs.block.replicator.classname $HADOOP_HOME/etc/hadoop/hdfs-site.xml` (default: org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault).

A combined quick-check sketch follows below. Happy hadooping!
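For convenience, the checks above can be run as one quick sweep; a minimal sketch, assuming the DataNode data directory is /hadoop/hdfs/data (check dfs.datanode.data.dir in hdfs-site.xml for the actual path on your cluster):

```bash
# Summarize DataNode liveness, block health, and local disk headroom.
hdfs dfsadmin -report | grep -E "Live datanodes|Dead datanodes"
# fsck prints counts of under-replicated / missing / corrupt blocks in its summary.
hdfs fsck / | grep -iE "replicated blocks|missing blocks|corrupt blocks"
# Disk headroom on the DataNode data directory (path is an assumption).
df -h /hadoop/hdfs/data
```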
12-13-2024 08:41 AM
1 Kudo
@Zifo1 When using Site-to-Site via Remote Process Groups (RPG) and Remote Input or Output Ports between NiFi clusters, it is most efficient to push rather than pull data (FlowFiles). The NiFi RPG always acts as the client side of the connection: it either sends FlowFiles to a Remote Input Port or fetches FlowFiles from a Remote Output Port. I would avoid fetching from Remote Output Ports; you get better FlowFile distribution across the destination cluster when you send FlowFiles from the RPG. If FlowFiles traverse both directions, simply set up an RPG on each NiFi cluster to push FlowFiles to the Remote Input Ports on the opposite cluster. Details about Site-to-Site can be found here: https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#site-to-site

As far as the RPG goes, I recommend the "RAW" transport protocol over HTTP. RAW requires that a dedicated RAW port be configured in the server-side NiFi's nifi.properties file (see the nifi.properties excerpt at the end of this reply); it establishes a raw socket connection on that dedicated port. HTTP utilizes the same HTTPS port that all other NiFi interactions use. You'll need to make sure network connectivity exists between your NiFi clusters on both the HTTP(S) and RAW ports; HTTP is always used to fetch Site-to-Site details.

Setting up the client side (Remote Process Group) is documented here: https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#configure-site-to-site-client-nifi-instance Setting up the server side (NiFi with Remote Input or Remote Output Ports) is documented here: https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#configure-site-to-site-server-nifi-instance

Even with Site-to-Site, communication between the two NiFi clusters requires both authentication and authorization. Authentication is established via a mutual TLS handshake initiated by the RPG; for Site-to-Site, the keystore and truststore configured in each NiFi's nifi.properties file are used in the mutual TLS exchange. NOTE: The NiFi out-of-box auto-generated keystores and truststores are not suitable for negotiating a successful mutual TLS handshake.

Several authorization policies must be set up on the server side (the NiFi with the remote ports) so that the client side (the NiFi with the RPG) can successfully exchange FlowFiles over Site-to-Site:
1. Retrieve Site-to-Site Details - authorizes the client NiFi nodes (all nodes in the client-side NiFi cluster must be authorized) to retrieve Site-to-Site details from the server-side NiFi, including the number of nodes, the load on those nodes, the authorized remote ports, the Site-to-Site RAW port, the HTTPS port, etc.
2. Receive data via Site-to-Site - set on a Remote Input Port to authorize the client-side NiFi nodes to send FlowFiles to that specific port.
3. Send data via Site-to-Site - set on a Remote Output Port to allow authorized client nodes to fetch FlowFiles from that port.

Please help our community thrive. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "Accept as Solution" on those that helped. Thank you, Matt
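As a reference for the RAW setup mentioned above, the relevant server-side keys in nifi.properties look roughly like this (the hostname and port are illustrative values, not requirements):

```properties
# Site-to-Site settings on the server-side NiFi (illustrative values)
nifi.remote.input.host=nifi-node1.example.com
nifi.remote.input.secure=true
# Dedicated RAW socket port used by the RAW transport protocol:
nifi.remote.input.socket.port=10443
# Leave HTTP transport enabled; Site-to-Site details are always fetched over HTTP(S):
nifi.remote.input.http.enabled=true
```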
12-12-2024 03:18 AM
1 Kudo
@DianaTorres Done. https://community.cloudera.com/t5/Support-Questions/Maven-repository-hortonworks-is-not-working-for-couple-of/m-p/398748/highlight/true#M250299