Member since: 11-17-2021
Posts: 921
Kudos Received: 234
Solutions: 22

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 136 | 12-13-2024 07:54 AM |
| | 153 | 11-15-2024 12:41 PM |
| | 389 | 10-14-2024 02:54 PM |
| | 361 | 10-10-2024 05:46 AM |
| | 850 | 08-06-2024 03:21 PM |
12-23-2024
05:18 PM
@Riyadbank Welcome to the Cloudera Community! To help you get the best possible solution, I have tagged our Hue experts @dturnau @nramanaiah who may be able to assist you further. Please keep us updated on your post, and we hope you find a satisfactory solution to your query.
12-23-2024
05:07 PM
@adamvalenta Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
12-23-2024
07:33 AM
I've found that you have to modify the start.sh script itself rather than relying on passed environment variables. By default it applies the container's hostname to the HTTPS host unless you prevent it from doing so in start.sh, and that then takes precedence over any HTTP configuration you've done. I'd also try experimenting with `nifi.remote.input.secure`; I'm not certain whether that also needs to be set to false.
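For what it's worth, here is a rough sketch of the kind of patch I mean. The script path (/opt/nifi/scripts/start.sh), the sed pattern, and the image tag are assumptions to verify against the start.sh shipped in the image version you actually run, not a drop-in fix.

```bash
# Hedged sketch: pull the image's start.sh out, comment out whatever line
# forces the container hostname into nifi.web.https.host, then mount the
# patched script over the original. Paths and patterns are assumptions.
IMAGE=apache/nifi   # hypothetical image tag

docker run --rm --entrypoint cat "$IMAGE" /opt/nifi/scripts/start.sh > start.sh

# Disable the line(s) that set nifi.web.https.host from the container hostname
sed -i 's/^\(.*nifi\.web\.https\.host.*\)$/# \1/' start.sh

# Start the container with the patched script and plain-HTTP settings
docker run -d \
  -v "$PWD/start.sh:/opt/nifi/scripts/start.sh" \
  -e NIFI_WEB_HTTP_PORT=8080 \
  "$IMAGE"
```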
12-23-2024
06:50 AM
@BK84 I suggest starting a new community question for your new query. When you start your new question, please provide more detail on your ask. It is not clear what you mean by "trigger". What is the use case you are trying to implement? Thank you, Matt
12-20-2024
11:31 AM
1 Kudo
Thanks for the references @MattWho. For now I've implemented a workaround with counters, querying the nifi-api with HTTP requests to get around this. It's definitely not a bulletproof implementation, with corner cases that need to be handled separately, but it's a start to build on.
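In case it helps anyone later, the polling call looks roughly like this. The host, credentials, and the counter name "records-processed" are placeholders, and the jq path assumes the CountersEntity layout returned by GET /nifi-api/counters in recent NiFi versions.

```bash
# Hedged sketch: read a NiFi counter value over the REST API.
# Host, credentials, and counter name are placeholders for illustration.
TOKEN=$(curl -sk -X POST \
  -d 'username=admin&password=changeme' \
  https://nifi-host:8443/nifi-api/access/token)

curl -sk -H "Authorization: Bearer $TOKEN" \
  https://nifi-host:8443/nifi-api/counters |
  jq -r '.counters.aggregateSnapshot.counters[]
         | select(.name == "records-processed")
         | .valueCount'
```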
12-19-2024
12:26 PM
@PeterC Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
12-19-2024
12:21 PM
@mohdriyaz @Dahlgren Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
12-18-2024
05:29 PM
@Bharati Any updates? I have tried the REST API method. Getting my app's expiry time works just fine, but the PUT request gets a 401 Unauthorized error. It's a shared cluster and I don't have admin-level authorization.
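To make it easier to reproduce: the calls I'm making look roughly like the following. I'm assuming the endpoints in question are the YARN ResourceManager application-timeout ones; the RM host, port, application ID, and the Kerberos flag are placeholders for my environment.

```bash
# Hedged sketch of the two calls: the GET works, the PUT returns 401.
# RM host/port and application id are placeholders; --negotiate assumes a
# Kerberized cluster with a valid ticket (kinit) already obtained.
RM=http://rm-host:8088
APP=application_1234567890123_0001   # hypothetical application id

# Reading the current expiry time works
curl -s --negotiate -u : "$RM/ws/v1/cluster/apps/$APP/timeouts"

# Updating it fails with 401 Unauthorized for my (non-admin) user
curl -s --negotiate -u : -X PUT \
  -H "Content-Type: application/json" \
  -d '{"timeout": {"type": "LIFETIME", "expiryTime": "2025-01-01T00:00:00.000+0000"}}' \
  "$RM/ws/v1/cluster/apps/$APP/timeout"
```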
12-17-2024
04:48 PM
1 Kudo
@Shelton Thank you for your advice. Since I use the latest version of NiFi, which requires Java 21, I added the following line to bootstrap.conf and confirmed the warning messages disappeared: `java.arg.EnableNativeAccess=--enable-native-access=ALL-UNNAMED` I appreciate your help. Thank you,
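For reference, the change amounts to appending that one line to conf/bootstrap.conf. The path below is an assumption about the install layout; any unused java.arg.* key name should work.

```bash
# Hedged sketch: append the native-access JVM argument described above.
# Adjust $NIFI_HOME for your install and back up bootstrap.conf first.
echo 'java.arg.EnableNativeAccess=--enable-native-access=ALL-UNNAMED' \
  >> "$NIFI_HOME/conf/bootstrap.conf"
```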
12-17-2024
12:41 PM
1 Kudo
@JSSSS The error is this: "java.io.IOException: File /user/JS/input/DIC.txt._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation."

According to the log, all 3 DataNodes are excluded (excludeNodes=[192.168.1.81:9866, 192.168.1.125:9866, 192.168.1.8>). With a replication factor of 3, the write must succeed on all 3 DataNodes or it fails. The cluster may have under-replicated or unavailable blocks because of the excluded nodes; HDFS cannot use these nodes, possibly due to:
- Disk space issues
- Write errors or disk failures
- Network connectivity problems between the NameNode and DataNodes

1. Verify the DataNodes are live and connected to the NameNode: `hdfs dfsadmin -report`. Look at the "Live nodes" and "Dead nodes" sections; if all 3 DataNodes are excluded, they might show up as dead or decommissioned.
2. Ensure the DataNodes have sufficient disk space for the write operation: `df -h`. Look at the HDFS data directories (/hadoop/hdfs/data). If disk space is full, clear unnecessary files or increase disk capacity: `hdfs dfs -rm -r /path/to/old/unused/files`
3. View the list of excluded nodes: `cat $HADOOP_HOME/etc/hadoop/datanodes.exclude`. If nodes are wrongly excluded, remove their entries from datanodes.exclude and refresh the NameNode to apply the changes: `hdfs dfsadmin -refreshNodes`
4. Block placement policy: if the cluster has DataNodes with specific restrictions (e.g., rack awareness), verify the block placement policy: `grep dfs.block.replicator.classname $HADOOP_HOME/etc/hadoop/hdfs-site.xml` (default: org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault)

Happy hadooping