Member since: 07-30-2019
Posts: 3387
Kudos Received: 1617
Solutions: 999
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 201 | 11-05-2025 11:01 AM |
|  | 405 | 10-20-2025 06:29 AM |
|  | 545 | 10-10-2025 08:03 AM |
|  | 373 | 10-08-2025 10:52 AM |
|  | 410 | 10-08-2025 10:36 AM |
03-28-2017
12:19 PM
1 Kudo
@Bram Klinkenberg
The first thing that seems out of place to me is that in the following two lines the user DN is "CN=admin":

./tls-toolkit.sh standalone -n 'nifi.domeinbram.nl' -C 'CN=admin' -o keys/

and

<property name="Initial Admin Identity">CN=admin</property>

However, your users.xml file shows a DN of "cn=admin":

<user identifier="1a0ab441-da40-30dd-b28c-c4a4c710d03c" identity="cn=admin"/>

They must match exactly or it is treated as a different identity. If you tail the nifi-user.log while you try to access the UI, you will see lines output for authentication and authorization, including the exact DN being passed to the authorizer. Is it mixed case or all lower case? The DN shown in nifi-user.log must match exactly what is in the users.xml file.

NiFi only generates the users.xml and authorizations.xml files on first startup when NiFi is secured. Subsequent changes to the authorizers.xml file will not trigger any changes/updates to pre-existing users.xml and/or authorizations.xml files. In your case, since you are just getting started and have no other users yet to worry about, you can simply delete those files and restart (see the sketch below); NiFi will re-create them from the current settings in the authorizers.xml file since they no longer exist. You could also just manually edit the users.xml file, since it appears to be a very simple change.
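A minimal sketch of the delete-and-restart approach, assuming a default install where both files live under NiFi's conf/ directory (check the "Users File" and "Authorizations File" paths in your authorizers.xml first):

./bin/nifi.sh stop
rm conf/users.xml conf/authorizations.xml
./bin/nifi.sh start

Thanks, Matt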
03-27-2017
07:27 PM
@Gowthaman Jayachandran You can also retrieve bulletins via a call to NiFi's REST API: curl 'http://<hostname>:<port>/nifi-api/flow/bulletin-board'
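For example, to pull just the bulletin messages out of the JSON response (assuming the jq utility is installed; hostname and port are placeholders, and the field names below reflect the NiFi 1.x bulletin-board response):

curl -s 'http://<hostname>:<port>/nifi-api/flow/bulletin-board' | jq '.bulletinBoard.bulletins[].bulletin.message'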
03-27-2017
07:16 PM
1 Kudo
@Bala Vignesh N V The latest documentation for Apache NiFi can be found here: https://nifi.apache.org/docs.html You will want to look in the "Getting Started" section for installing on a Linux-based platform. Thank you, Matt
03-27-2017
05:42 PM
@Gowthaman Jayachandran The NiFi master bulletin board does not constantly run in the background; it simply shows a continuous stream of the bulletins being produced by processor components on the canvas while it is open. The bulletins are nothing more than snippets of the WARN or ERROR log lines that are being written to the nifi-app.log.

The processors that are actually producing bulletins often have a failure relationship to which such FlowFiles will be routed upon failure. This is where you want to handle your notifications. Perhaps set up a failure loop that will attempt to process the FlowFile x times before kicking it out of the loop. If a file gets kicked out of the loop, you could use a PutEmail processor to notify you of the failure. See the following template of such a failure loop, with a sketch of its logic below: https://cwiki.apache.org/confluence/download/attachments/57904847/Retry_Count_Loop.xml?version=1&modificationDate=1433271239000&api=v2

For processor components where there is no failure relationship, you could use a MonitorActivity processor in your dataflow. This processor can be configured with a threshold, and if a FlowFile has not passed within that threshold, a FlowFile is generated that can also be sent to a PutEmail processor for alerting purposes. It will also generate a FlowFile when activity has been restored.
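A minimal sketch of the retry-loop logic using NiFi Expression Language; the attribute name retry.count and the limit of 3 are examples, not necessarily what the linked template uses:

# UpdateAttribute inside the loop increments the counter on each failure:
retry.count = ${retry.count:replaceNull(0):plus(1)}

# RouteOnAttribute kicks the FlowFile out once the limit is exceeded:
exceeded = ${retry.count:gt(3)}

FlowFiles matching "exceeded" go to PutEmail; the "unmatched" relationship loops back to the failing processor.

Thanks, Matt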
03-24-2017
02:25 PM
Try changing the values to a very small number from their defaults:

autopurge.purgeInterval=1
autopurge.snapRetainCount=3

A restart of ZooKeeper (in your case NiFi) will be needed for the changes to take effect.
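For NiFi's embedded ZooKeeper these properties live in conf/zookeeper.properties, and you can confirm the purge is working by watching the snapshot directory (the dataDir below is NiFi's default; adjust it to whatever your zookeeper.properties points at):

ls -lt ./state/zookeeper/version-2/ | head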
03-24-2017
02:21 PM
@mayki wogno Both ZooKeeper and NiFi can be very resource-intensive applications. Running them together is fine for development, but I recommend setting up your own external ZooKeeper cluster for use in production environments. It is possible that load is affecting the ZooKeeper cleanup. You can use the linked ZooKeeper maintenance guide to clean up your ZK version-2 directory. Snapshots are nothing more than point-in-time backups. Considering that the information NiFi stores in ZK is ever changing, I personally don't see much value in being able to restore from a backup (you would just be going back to stale retained state). Thanks, Matt
03-24-2017
02:10 PM
@mayki wogno
Is the same directory the same size on every one of your ZooKeeper nodes? If not, you may be having an issue on only one of them. You should be able to shut down that ZooKeeper node and purge all those files; the pertinent files will be re-written from the other nodes when it rejoins the ZooKeeper cluster. ZooKeeper is storing information about who your current cluster coordinator and primary node are, along with any cluster-wide state from the various processors in your dataflows.

I am assuming you are running the embedded ZooKeeper here. In that case the zookeeper.properties file should control the auto-purge of the snapshots through the following properties:

autopurge.purgeInterval=24
autopurge.snapRetainCount=30

The transaction logs should be handled via routine maintenance, which you can find here: http://archive.cloudera.com/cdh4/cdh/4/zookeeper/zookeeperAdmin.html#sc_maintenance
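A minimal sketch of that routine maintenance using ZooKeeper's PurgeTxnLog utility (the classpath layout and dataDir path are assumptions for an embedded-ZooKeeper NiFi install; -n 3 retains the three most recent snapshots):

java -cp zookeeper.jar:lib/*:conf org.apache.zookeeper.server.PurgeTxnLog ./state/zookeeper ./state/zookeeper -n 3

Thanks, Matt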
03-24-2017
12:33 PM
@Praveen Singh Standard out from your script is written to the content of the FlowFile generated by the ExecuteProcess NiFi processor. So perhaps just tweaking your script to write to standard out rather than to a file on disk is all you need to do.
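A minimal sketch of that tweak in a shell script; the command and file path here are hypothetical:

#!/bin/bash
# before: the result only landed on disk, invisible to NiFi
# some_command > /tmp/result.txt
# after: print to stdout so ExecuteProcess captures it as the FlowFile content
some_command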
03-23-2017
02:48 PM
@Sanaz Janbakhsh If you found the information provided useful, please accept that answer. Thank you, Matt
03-23-2017
02:44 PM
@Sanaz Janbakhsh It is "zero master clustering". All nodes in an HDF 2.0 (NiFi 1.x) cluster run the dataflow and do work on FlowFiles. An election is conducted, and at its completion one node is elected as the cluster coordinator and one node is elected as the primary node (which runs the processors configured for "primary node only" execution). Which node in the cluster holds these roles can change at any time should the previously elected node stop sending heartbeats within the configured threshold. It is also possible for the same node to be elected to both roles. This also means that any node in an HDF 2.0 cluster can be used for establishing Site-to-Site (S2S) connections. In old NiFi (0.x), S2S to a cluster required that the RPG point at the NCM.
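A minimal sketch of the nifi.properties entries that drive this behavior on each node (the values shown are illustrative, not necessarily your defaults; verify against your install):

nifi.cluster.is.node=true
nifi.cluster.node.protocol.port=11443
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.zookeeper.connect.string=zk1:2181,zk2:2181,zk3:2181

Thanks, Matt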