I am currently using NiFi 1.1.2. When I try to access Data Provenance on any processor, it throws an error, and most of the time the NiFi instance drops off the cluster. I was initially able to access Data Provenance for some of the processors, but later started facing the same issue on those as well. Please let me know if anyone has encountered this issue and found a solution.
@Wynner, there is no error in the logs. I just get the "Application Core Error" on the screen. Since it happens only on the production system, I cannot take the chance of reproducing it.
Whenever a node drops/disconnects from a cluster, NiFi will log in nifi-app.log why the node was disconnected. Are you seeing the node actually disconnect, or are you only seeing the "Application core" error displayed in the UI? That error typically indicates some sort of authentication request replication issue between your nodes. I would suggest very carefully going through both your nifi-app.log and nifi-user.log when this condition occurs, looking for any node disconnection, authentication, or timeout log messages.
Typically the bottom line here is some optimization and performance tuning of your dataflow and core NiFi. The default values set in NiFi are not meant to be taken as best-performance values; the defaults are meant to provide a stable NiFi running on the lightest of hardware. When running with higher volumes of data, storing large amounts of provenance history, running on higher-end hardware, etc., these values often need to be adjusted to achieve the best performance. I suggest going through the NiFi Admin Guide and looking at the many configurable thread options and what they do. You may also want to go through your dataflow and make sure you are not excessively allocating resources to any given processor (i.e., too many concurrent tasks). Also check the "Maximum Timer Driven Thread Count" and "Maximum Event Driven Thread Count" settings found in the Controller Settings UI under the main hamburger menu in the upper-right corner of the UI. These values dictate the maximum number of threads each node will use at any given time, shared by all your components. Typical values are between 2x and 4x the available cores per node, so if each node in your cluster has 4 cores, this value would be set to somewhere between 8 and 16.
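Since the symptom here is Data Provenance queries causing trouble, the provenance repository settings in nifi.properties are a good place to start. A sketch of the relevant properties from the Admin Guide is below; the values shown are illustrative starting points, not recommendations, and the right numbers depend on your hardware and retention needs:

```properties
# nifi.properties excerpt - provenance repository tuning (illustrative values;
# see the NiFi System Administrator's Guide for the full property descriptions)
nifi.provenance.repository.max.storage.time=24 hours
nifi.provenance.repository.max.storage.size=10 GB
nifi.provenance.repository.index.threads=2
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.index.shard.size=500 MB
```

Note that more indexing/query threads and larger shards consume more heap and CPU, so increase these gradually and watch the node's resource usage after each change.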