Member since: 07-30-2019
Posts: 3426
Kudos Received: 1631
Solutions: 1010
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 379 | 01-13-2026 11:14 AM |
| | 726 | 01-09-2026 06:58 AM |
| | 775 | 12-17-2025 05:55 AM |
| | 836 | 12-15-2025 01:29 PM |
| | 690 | 12-15-2025 06:50 AM |
03-27-2017
07:27 PM
@Gowthaman Jayachandran You can also retrieve bulletins via a call to NiFi's REST API: curl 'http://<hostname>:<port>/nifi-api/flow/bulletin-board'
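As a sketch of what you can do with that response, the JSON can be filtered for ERROR-level bulletins with a few lines of Python. The sample payload and the exact field names below are assumptions for illustration; a real payload comes from the curl call above.

```python
import json

# Sample payload shaped like a /nifi-api/flow/bulletin-board response
# (field names here are assumptions for illustration).
sample = json.loads("""
{
  "bulletinBoard": {
    "bulletins": [
      {"bulletin": {"level": "ERROR", "sourceName": "PutFile",
                    "message": "Failed to write FlowFile"}},
      {"bulletin": {"level": "INFO", "sourceName": "GetFile",
                    "message": "Started"}}
    ]
  }
}
""")

def error_bulletins(payload):
    """Return (sourceName, message) pairs for ERROR-level bulletins."""
    return [(b["bulletin"]["sourceName"], b["bulletin"]["message"])
            for b in payload["bulletinBoard"]["bulletins"]
            if b["bulletin"].get("level") == "ERROR"]

print(error_bulletins(sample))
```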
03-27-2017
07:16 PM
1 Kudo
@Bala Vignesh N V The latest documentation for Apache NiFi can be found here: https://nifi.apache.org/docs.html You will want to look in the "Getting Started" section for installing on a Linux-based platform. Thank you, Matt
03-27-2017
05:42 PM
@Gowthaman Jayachandran The NiFi master bulletin board does not constantly run in the background. It simply shows a continuous stream of bulletins being produced by processor components on the canvas while it is open. The bulletins are nothing more than snippets of the WARN or ERROR log lines that are being written to the nifi-app.log. The processors that are actually producing bulletins often have a failure relationship where such FlowFiles will be routed upon failure. This is where you want to handle your notifications. Perhaps set up a failure loop that will attempt to process the FlowFile x number of times before kicking it out of the loop. If a file gets kicked out of the loop, you could use a PutEmail processor to notify you of the failure. See below for a template of such a failure loop: https://cwiki.apache.org/confluence/download/attachments/57904847/Retry_Count_Loop.xml?version=1&modificationDate=1433271239000&api=v2 For processor components where there is no failure relationship, you could use a MonitorActivity processor in your dataflow. This processor can be configured with a threshold, and if a FlowFile has not passed within that threshold, a FlowFile is generated that can also be sent to a PutEmail processor for alerting purposes. It will also generate a FlowFile when activity has been restored. Thanks, Matt
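The retry-count idea behind that template can be sketched in plain Python. This is only an illustration of the logic; in NiFi it is built with processors (e.g. UpdateAttribute and RouteOnAttribute), not code, and the attribute name and retry limit below are assumptions.

```python
MAX_RETRIES = 3  # assumption: give up after 3 failed attempts

def route(flowfile):
    """Mimic the failure loop: increment a retry counter on each failure
    and route to notification once the limit is reached."""
    retries = int(flowfile.get("retry.count", 0))
    if retries >= MAX_RETRIES:
        return "notify"  # e.g. route to a PutEmail processor
    flowfile["retry.count"] = str(retries + 1)
    return "retry"       # loop back and try the processor again

ff = {}
print(route(ff))  # first failure -> "retry", counter becomes 1
```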
03-24-2017
02:25 PM
Try changing these values to very small numbers from their defaults: autopurge.purgeInterval=1
autopurge.snapRetainCount=3 A restart of ZooKeeper (in your case, NiFi) will be needed for the changes to take effect.
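As a sketch, in an embedded-ZooKeeper setup these properties live in NiFi's conf/zookeeper.properties. The dataDir value shown is an assumption; the comments describe standard ZooKeeper auto-purge semantics.

```properties
# conf/zookeeper.properties (embedded ZooKeeper; dataDir is an assumption)
dataDir=./state/zookeeper
# Hours between automatic purge runs; 0 disables auto-purge.
autopurge.purgeInterval=1
# Number of recent snapshots (and matching transaction logs) to retain;
# ZooKeeper requires at least 3.
autopurge.snapRetainCount=3
```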
03-24-2017
02:21 PM
@mayki wogno Both ZooKeeper and NiFi can be very resource-intensive applications. The embedded ZooKeeper is fine for development, but I recommend setting up your own external ZooKeeper cluster for use in production environments. It is possible that load is affecting the ZooKeeper cleanup. You can use the linked ZooKeeper maintenance guide to clean up your zk version-2 directory. Snapshots are nothing more than point-in-time backups. Considering that the information NiFi stores in ZK is ever changing, I personally don't see much value in being able to restore from a backup (going back to a different retained state). Thanks, Matt
03-24-2017
02:10 PM
@mayki wogno
Is that directory the same size on every one of your ZooKeeper nodes? If not, you may be having an issue on only one node. You should be able to shut down that ZooKeeper node and purge all those files. The pertinent files will be re-written from the other nodes when it rejoins the ZooKeeper cluster. ZooKeeper stores information about who your current cluster coordinator is, which node is the primary node, and any cluster-wide state from various processors in your dataflows. I am assuming you are running the embedded ZooKeeper here. In that case the zookeeper.properties file should control the auto-purge of the snapshots through the following properties: autopurge.purgeInterval=24
autopurge.snapRetainCount=30 The transaction logs should be handled via routine maintenance, which you can find here: http://archive.cloudera.com/cdh4/cdh/4/zookeeper/zookeeperAdmin.html#sc_maintenance Thanks, Matt
03-24-2017
12:33 PM
@Praveen Singh Standard out from your script is written to the content of the FlowFile generated by the ExecuteProcess NiFi processor. So perhaps just tweaking your script to write to standard out rather than to a file on disk is all you need to do.
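For instance, if the script currently writes its result to a file, the tweak can be as small as printing to standard out instead. This is only a sketch; the function name and its output are assumptions standing in for whatever your script already produces.

```python
import sys

def produce_report():
    # Stand-in for the work your script already does (assumption).
    return "record1\nrecord2\n"

# Instead of writing to a file on disk, e.g.:
#   open("/tmp/out.txt", "w").write(produce_report())
# write to standard out, so ExecuteProcess captures it as FlowFile content:
sys.stdout.write(produce_report())
```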
03-23-2017
02:48 PM
@Sanaz Janbakhsh If you found the information provided useful, please accept that answer. Thank you, Matt
03-23-2017
02:44 PM
@Sanaz Janbakhsh It is "zero master clustering". All nodes in an HDF 2.0 (NiFi 1.x) cluster run the dataflow and do work on FlowFiles. An election is conducted, and at its completion one node is elected cluster coordinator and one node is elected primary node (it runs the "primary node only" configured processors). Which node in the cluster is assigned these roles can change at any time should the previously elected node stop sending heartbeats within the configured threshold. It is also possible for the same node to be elected to both roles. This also means that any node in an HDF 2.0 cluster can be used for establishing Site-to-Site (S2S) connections. In older NiFi versions, S2S to a cluster required that the RPG point at the NCM. Thanks, Matt
03-23-2017
02:12 PM
2 Kudos
@Diego Labrador Anytime you encounter the message "Unable to perform the desired action due to insufficient permissions. Contact the system administrator.", you are having an authorization issue. Authentication issues present different errors. You should inspect your nifi-user.log while trying to access the UI to see the exact string being passed to the authorizer. By default, with LDAP as your configured login identity provider, the full DN for the user who logged in is passed to the authorizer. By the looks of the above, you configured only the CN= portion as your initial admin identity. The string passed to the authorizer will be shown in nifi-user.log and must match exactly (it is case sensitive, and spaces count as valid characters). Thanks, Matt