Member since
07-30-2019
3406
Posts
1623
Kudos Received
1008
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 334 | 12-17-2025 05:55 AM |
| | 395 | 12-15-2025 01:29 PM |
| | 382 | 12-15-2025 06:50 AM |
| | 360 | 12-05-2025 08:25 AM |
| | 602 | 12-03-2025 10:21 AM |
01-30-2023
05:56 AM
@ajignacio

1. When you created the provenance repository, did you move the old provenance data there or start fresh with an empty directory? If the provenance repository is corrupt, you can start fresh by deleting everything in the provenance_repository directory (a command sketch follows this reply). Provenance data does not impact the active FlowFiles queued in your NiFi dataflows, so no data loss there. All you lose is the lineage history, which eventually ages off anyway.
2. What Java JDK version is your NiFi using?
3. Can you share the complete stack trace from the nifi-app.log?
4. Are there any other errors/warns in the app.log (memory or open file limit exceptions, other warns or exceptions related to provenance, etc.)?

Thank you, Matt
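A minimal command sketch of that cleanup, assuming a Linux install at the hypothetical path /opt/nifi and the default provenance repository location (adjust both to your environment):

```bash
# Sketch only: always stop NiFi before touching a repository directory.
/opt/nifi/bin/nifi.sh stop

# Removes lineage history only; queued FlowFiles are unaffected because they
# live in the flowfile_repository and content_repository, not here.
rm -rf /opt/nifi/provenance_repository/*

/opt/nifi/bin/nifi.sh start
```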
01-30-2023
05:43 AM
@myuintelli2021 Hello Ming, NiFi 1.15.3 supports JDK 8 or JDK 11. We strongly encourage users to be on the latest update version of either of those with NiFi. I am not sure which update release of the JDK you are on or which JDK provider (Oracle, OpenJDK, etc.) you are using. Unfortunately, I do not have access to a Windows 2019 Datacenter edition environment to see if I can reproduce the issue myself and evaluate further. I strongly encourage you to raise a community question to take your query further. A new question will gather more attention than trying to diagnose and resolve your specific issue within the comments of a community article. Thank you, Matt
01-30-2023
05:35 AM
@Haden

1. If you could share the complete InvokeHTTP processor configuration and the cli command you executed that worked, that would be helpful in providing some guidance.
2. Was the cli command executed from one of the NiFi servers?
3. Have you tried setting the InvokeHTTP processor class to DEBUG in the NiFi logback.xml (a sketch follows this reply) to see if it provides any more detail?

Thank you, Matt
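A minimal logback.xml sketch of that DEBUG logger, assuming the stock InvokeHTTP class from the standard processors bundle:

```xml
<!-- Add inside the <configuration> element of conf/logback.xml to get
     DEBUG output from InvokeHTTP in nifi-app.log. -->
<logger name="org.apache.nifi.processors.standard.InvokeHTTP" level="DEBUG"/>
```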
01-23-2023
12:25 PM
1 Kudo
@steven-matison That 3 part series on ExecuteScript was written by a different, very talented Matt in this community, @mburgess.
01-23-2023
12:21 PM
@myuintelli2021 Logback is not something written by or for Apache NiFi. NiFi's default logback configuration has changed very little for nifi-app.log, nifi-user.log, and nifi-bootstrap.log, all of which by default use the "RollingFileAppender". In the latest releases you may see that some additional appenders are now present; however, it is also not uncommon for users to create additional appenders themselves in the logback.xml so that specific loggers can output to user-defined log files rather than the defaults (a sketch of such an appender follows this reply). Logback supports numerous rolling policies: https://logback.qos.ch/manual/appenders.html These have existed for as long as NiFi has been using Logback. I am not sure if the issue is perhaps specific to the version of Windows or the JDK you are using. If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped. Thank you, Matt
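A minimal sketch of such a user-defined appender; the appender name, log file name, and the choice of the org.apache.nifi.processors.standard logger are illustrative assumptions:

```xml
<!-- Custom appender: writes routed loggers to their own rolling log file. -->
<appender name="MY_CUSTOM_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${org.apache.nifi.bootstrap.config.log.dir}/my-custom.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/my-custom_%d.log</fileNamePattern>
        <maxHistory>10</maxHistory>
    </rollingPolicy>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
</appender>

<!-- Route a specific logger to the custom file instead of nifi-app.log;
     additivity="false" keeps it out of the default APP_FILE appender. -->
<logger name="org.apache.nifi.processors.standard" level="INFO" additivity="false">
    <appender-ref ref="MY_CUSTOM_FILE"/>
</logger>
```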
01-17-2023
02:08 PM
@myuintelli2021 I just installed Apache NiFi 1.19.1 on my Mac and only modified the <maxFileSize> from the default 100MB to 2MB. Log rotation is working as expected. My current logback configuration for nifi-app.log:

```xml
<configuration scan="true" scanPeriod="30 seconds">
    <shutdownHook class="ch.qos.logback.core.hook.DelayingShutdownHook" />
    <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
        <resetJUL>true</resetJUL>
    </contextListener>
    <appender name="APP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <!--
                For daily rollover, use 'app_%d.log'.
                For hourly rollover, use 'app_%d{yyyy-MM-dd_HH}.log'.
                To GZIP rolled files, replace '.log' with '.log.gz'.
                To ZIP rolled files, replace '.log' with '.log.zip'.
            -->
            <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
            <maxFileSize>2MB</maxFileSize>
            <!-- keep 30 log files worth of history -->
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <immediateFlush>true</immediateFlush>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
        </encoder>
    </appender>
```

Log directory showing logs rotated based on size (a `.<num>` is added before the `.log` suffix) once each log reaches the max:

```
-rw-r--r--   1 nifiadmin  nifi  452B Jan 17 16:39 nifi-user.log
-rw-r--r--   1 nifiadmin  nifi  2.0M Jan 17 16:53 nifi-app_2023-01-17_16.0.log
-rw-r--r--   1 nifiadmin  nifi  2.0M Jan 17 16:58 nifi-app_2023-01-17_16.1.log
drwxr-xr-x  13 nifiadmin  nifi  416B Jan 17 16:58 .
-rw-r--r--   1 nifiadmin  nifi   96K Jan 17 16:58 nifi-request.log
-rw-r--r--   1 nifiadmin  nifi  178K Jan 17 16:58 nifi-app.log
```

I don't have access to a Windows environment to test there, which appears to be where you are testing, but log rotation also works as expected in a Linux environment I have. I recommend opening a community question if you are looking for assistance with your setup; the comments of a community article are not the correct place to troubleshoot configurations or environmental issues. When you raise that question, I encourage you to include at least your complete OS version, complete Java (JDK) version, and complete NiFi version. Thank you, Matt
01-17-2023
01:23 PM
@srilakshmi Logging does not happen at the process group level. Processor logging is based on the processor class, so nothing in the log output produced by a processor within a process group will tell you which process group that particular processor belongs to. That being said, you may be able to prefix every processor's name within the same process group with some string that identifies the process group. The processor name is generally included in the log output the processor produces. You may then be able to use logback filters (I have not tried this myself) to filter log output based on those unique strings; see the filter sketch at the end of this reply and https://logback.qos.ch/manual/filters.html

NiFi bulletins (log output surfaced in the NiFi UI, with a rolling 5 minute life there), however, do include details about the parent process group in which the component generating the bulletin resides. You could build a dataflow in your NiFi to handle bulletin notification through the SiteToSiteBulletinReportingTask, which sends bulletins to a remote input port on a target NiFi. A dataflow on the target NiFi could parse the received bulletin records by the bulletinGroupName json path property so that all records from the same PG are kept together. These 'like' records could then be written out to the local filesystem based on the PG name, sent to a remote system, used to send email notifications, etc.

Example of what a bulletin sent using the SiteToSiteBulletinReportingTask looks like:

```json
{
  "objectId" : "541dbd22-aa4b-4a1a-ad58-5d9a0b730e42",
  "platform" : "nifi",
  "bulletinId" : 2200,
  "bulletinCategory" : "Log Message",
  "bulletinGroupId" : "7e7ad459-0185-1000-ffff-ffff9e0b1503",
  "bulletinGroupName" : "PG2-Bulletin",
  "bulletinGroupPath" : "NiFi Flow / Matt's PG / PG2-Bulletin",
  "bulletinLevel" : "DEBUG",
  "bulletinMessage" : "UpdateAttribute[id=8c5b3806-9c3a-155b-ba15-260075ce9a6f] Updated attributes for StandardFlowFileRecord[uuid=1b0cb23a-75d8-4493-ba82-c6ea5c7d1ce3,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1672661850924-5, container=default, section=5], offset=969194, length=1024],offset=0,name=bulletin-${nextInt()).txt,size=1024]; transferring to 'success'",
  "bulletinNodeId" : "e75bf99f-095c-4672-be53-bb5510b3eb5c",
  "bulletinSourceId" : "8c5b3806-9c3a-155b-ba15-260075ce9a6f",
  "bulletinSourceName" : "PG1-UpdateAttribute",
  "bulletinSourceType" : "PROCESSOR",
  "bulletinTimestamp" : "2023-01-04T20:38:27.776Z"
}
```

In the above bulletin json you can see the bulletinGroupName and the bulletinMessage (the actual log output). If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped. Thank you, Matt
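A follow-up to the logback filter idea above: a minimal, untested sketch that keeps only log events whose message contains a hypothetical "PG-A" processor-name prefix. The appender name and file name are assumptions, and the Java expression support of logback's EvaluatorFilter requires the Janino library on the classpath:

```xml
<!-- In conf/logback.xml: an appender that ACCEPTs only events whose
     formatted message contains the hypothetical name prefix "PG-A". -->
<appender name="PG_A_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${org.apache.nifi.bootstrap.config.log.dir}/pg-a.log</file>
    <filter class="ch.qos.logback.core.filter.EvaluatorFilter">
        <!-- evaluator defaults to JaninoEventEvaluator (needs Janino) -->
        <evaluator>
            <expression>return formattedMessage.contains("PG-A");</expression>
        </evaluator>
        <OnMatch>ACCEPT</OnMatch>
        <OnMismatch>DENY</OnMismatch>
    </filter>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/pg-a_%d.log</fileNamePattern>
        <maxHistory>10</maxHistory>
    </rollingPolicy>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
</appender>
```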
01-17-2023
12:55 PM
@hkh The only downside to this dynamic approach is that the passwords are in plaintext as attributes on the FlowFile. This means the passwords could be read by users who are authorized to access your NiFi, either by listing the FlowFiles on a queued connection or by running a provenance query on a processor in that flow and inspecting the returned results. I have no alternative solution to offer, but wanted you to be aware of the downside of adding sensitive values to FlowFile attributes. Thanks, Matt
01-17-2023
05:34 AM
@davehkd No, do not set the root node to a path. The root node should be set to the same value on every node, and "/nifi" is fine. This is the root node that all NiFi instances in the cluster will use. Let's say you were using the recommended external ZooKeeper rather than the internal ZooKeeper. You might choose to use that one ZooKeeper cluster to support multiple independent NiFi clusters. To prevent ZooKeeper from thinking all nodes are part of the same cluster, each NiFi cluster would use a different root node value. So for a second NiFi cluster you might use "/nifi2" (see the sketch below). This value has nothing to do with an install path. Thanks, Matt
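A minimal sketch of how that looks in nifi.properties, assuming two independent clusters sharing one external ZooKeeper at the hypothetical address zk1.example.com:2181:

```properties
# nifi.properties on every node of the first NiFi cluster
nifi.zookeeper.connect.string=zk1.example.com:2181
nifi.zookeeper.root.node=/nifi

# nifi.properties on every node of a second, independent NiFi cluster:
# same connect string, different root node so ZooKeeper keeps the two
# clusters' coordination data separate.
nifi.zookeeper.root.node=/nifi2
```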
01-12-2023
01:30 PM
@SachinMehndirat There is NO replication of data from the four NiFi repositories across all NiFi nodes in a NiFi cluster. Each NiFi node in the cluster is only aware of, and only executes against, the FlowFiles on that specific node. As such, NiFi nodes cannot share a common set of repositories. Each node must have its own repositories, and it is important to protect those repositories from data loss (the flowfile_repository and content_repository being most important).

- flowfile_repository - contains metadata/attributes about FlowFiles actively processing through your NiFi dataflow(s). This includes metadata on the location of the content of queued FlowFiles.
- content_repository - contains content claims, each of which can hold the content for 1 to many FlowFiles actively being processed or temporarily archived after termination at the end of dataflow(s).
- provenance_repository - contains historical lineage information about FlowFiles currently or previously processed through your NiFi dataflows.
- database_repository - contains the flow configuration history, which is a record of changes made via the NiFi UI (adding, modifying, deleting, stopping, starting, etc.). Also contains info about users currently authenticated to the NiFi node.

Processors that record cluster-wide state use ZooKeeper to store and retrieve the stored state needed by all nodes. Processors that use local state write that state to NiFi's locally configured state directory. So in addition to protecting the repositories mentioned above from data loss, you'll also want to make sure the local state directory (unique to each node in the NiFi cluster) is protected. The embedded documentation in NiFi for each component has a "State management:" section that will tell you whether that component uses local and/or cluster state. A sketch of the relevant nifi.properties entries follows this reply.

You may find some of the info in the following articles useful:
https://community.cloudera.com/t5/Community-Articles/HDF-CFM-NIFI-Best-practices-for-setting-up-a-high/ta-p/244999
https://community.cloudera.com/t5/Community-Articles/Understanding-how-NiFi-s-Content-Repository-Archiving-works/ta-p/249418
https://blogs.apache.org/nifi/entry/load-balancing-across-the-cluster

If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped. Thank you, Matt
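A minimal sketch of where those per-node directories are configured, using the default relative paths from a stock nifi.properties (the local state directory itself is defined by the local-provider entry in conf/state-management.xml, which defaults to ./state/local):

```properties
# Per-node repositories: every node in the cluster keeps its own copies.
nifi.flowfile.repository.directory=./flowfile_repository
nifi.content.repository.directory.default=./content_repository
nifi.provenance.repository.directory.default=./provenance_repository
nifi.database.directory=./database_repository

# Local component state is configured via the state management file.
nifi.state.management.configuration.file=./conf/state-management.xml
nifi.state.management.provider.local=local-provider
```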