Member since: 07-30-2019
Posts: 3466
Kudos Received: 1641
Solutions: 1015
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 408 | 03-23-2026 05:44 AM |
|  | 314 | 02-18-2026 09:59 AM |
|  | 560 | 01-27-2026 12:46 PM |
|  | 989 | 01-20-2026 05:42 AM |
|  | 1305 | 01-13-2026 11:14 AM |
12-10-2019
09:52 AM
@xpelive There are two state directories. The one you shared is used by the local state provider defined in state-management.xml. The state directory I am talking about should have been created inside the NiFi conf directory. If it does not exist, try creating it manually and make sure your NiFi service user can navigate to it and has proper permissions to read, write, and delete files within the directory. Thanks, Matt
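For reference, the local state provider you are seeing is defined in conf/state-management.xml and typically looks like this in a default install (trimmed to the relevant properties; your Directory value may differ):

```xml
<local-provider>
    <id>local-provider</id>
    <class>org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider</class>
    <!-- Local component state lives here; this is not the conf/state directory used for S2S peers files -->
    <property name="Directory">./state/local</property>
</local-provider>
```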
12-10-2019
09:46 AM
1 Kudo
@Biswa Your "Replacement Value" is set to "$1". This means the replacement value is the value associated with the first Java regex capture group found in your configured "Search Value". The issue is that your "Search Value" contains no capture groups. Perhaps providing a sample input content and desired output content would help here. Hope this helps you, Matt
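To illustrate with a hypothetical example (the property values below are made up, not from your flow):

```
Replacement Strategy: Regex Replace
Search Value:         error_code=([0-9]+)
Replacement Value:    code=$1
```

Here the parentheses in the Search Value define capture group 1, so input content "error_code=404" would become "code=404". Without parentheses in the Search Value, "$1" has nothing to refer back to.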
12-09-2019
02:00 PM
@dk123 You may want to use the ReplaceText processor to update your FlowFile's JSON content. My suggestion would be to replace all occurrences of $ with either \$ or \\$. Hope this helps, Matt
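As a sketch, one plausible ReplaceText configuration ($ is a special character in both the regex and the replacement field, so the exact number of backslashes is worth verifying against sample content):

```
Replacement Strategy: Regex Replace
Search Value:         \$
Replacement Value:    \\$
```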
12-09-2019
01:26 PM
@xpelive For site-to-site, NiFi attempts to create peers files inside a "state" directory within the NiFi conf directory. These peers files allow a NiFi instance to share peer information across multiple identical S2S connections (like having multiple Remote Process Groups (RPGs) all configured with the same target URL(s)). This helps reduce the overhead of every identical instance of S2S retrieving the same details from the same target. It sounds like your S2S connection is working and the peer info is being returned, but NiFi is unable to store that peer information locally. This is why your S2S connection still works even though this peers bulletin is being thrown. I would suggest you make sure a "state" directory exists within NiFi's conf directory and that the NiFi service user (the user that owns the NiFi java process) has proper ownership and permissions to navigate that full directory path and read/write to that directory. Hope this helps you, Matt
12-05-2019
10:35 AM
@rki The NiFi Expression Language (EL) statement you shared expects that the inbound FlowFile already has a FlowFile attribute named "end_time" with some value assigned to it. What does that value look like? How was it created?

${end_time:lt(${now():toNumber():minus(86400000)})}

Let's break down the embedded NiFi EL statement first: ${now():toNumber():minus(86400000)}. The now() function returns the current timestamp. The toNumber() function converts that timestamp into milliseconds since midnight GMT Jan 1, 1970. The minus() function subtracts the number passed to it (86400000, the number of milliseconds in 24 hours) from those milliseconds.

Assuming the "end_time" attribute also holds a number of milliseconds since midnight GMT Jan 1, 1970, and that number is less than the value calculated by the embedded EL, the outer EL will return "true". Essentially, it matches all FlowFiles where the end_time is more than 24 hours older than the current timestamp. The FlowFile would then get routed to the relationship named by your RouteOnAttribute custom property. If false is returned and no other custom properties match, the FlowFile would be routed to unmatched.

If you are really trying to route only FlowFiles where the "end_time" milliseconds fall within the last 24 hours, then you would want to use the ge() function instead of the lt() function. Hope this helps, Matt
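As a worked example with hypothetical millisecond values:

```
${now():toNumber()}                  -> 1575540000000   (hypothetical current time)
${now():toNumber():minus(86400000)}  -> 1575453600000   (24 hours earlier)

end_time = 1575000000000  -> lt() returns true  (older than 24 hours; FlowFile matches)
end_time = 1575500000000  -> lt() returns false (within the last 24 hours; unmatched)
```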
12-05-2019
07:23 AM
1 Kudo
@sunilb You may want to look at using the ListS3 processor to list the files from your S3 bucket. It will produce one 0-byte FlowFile for each S3 file that is listed (the actual file content is not retrieved by this processor). Each of these generated FlowFiles will have attributes/metadata about the file that was listed, including the "filename". You can then route the success relationship from the ListS3 processor to a RouteOnAttribute processor, where you route FlowFiles whose "filename" attribute value ends with ".txt" on to a FetchS3Object processor (this processor uses the "filename" attribute from the inbound FlowFile to fetch the actual content for that S3 file and add it to the FlowFile). Any FlowFile whose filename attribute does not end in ".txt" could just be auto-terminated. A sketch of the RouteOnAttribute configuration is below (the original screenshots of the configuration and dataflow did not carry over). The connection between the RouteOnAttribute and FetchS3Object processors should be configured to use the Round Robin load balancing strategy if your NiFi is set up as a cluster. The ListS3 processor should only be configured to run on the NiFi cluster's primary node (you'll notice the small "P" on the icon of the ListS3 processor in the upper left corner). The load balancing strategy will then redistribute the listed FlowFiles amongst all nodes in your cluster before the content is actually fetched, for more efficient/performant use of resources. Hope this helps, Matt
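A sketch of the RouteOnAttribute configuration (the custom property name "text_files" is just an example):

```
Routing Strategy: Route to Property name
text_files:       ${filename:endsWith('.txt')}
```

FlowFiles matching the expression route to the "text_files" relationship, which you would connect to FetchS3Object; everything else goes to unmatched.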
12-04-2019
02:55 PM
1 Kudo
@apocolis I recommend adding a ValidateRecord processor before your PublishKafkaRecord processor to filter the invalid records out of your dataflow (see the sketch below). Hope this helps, Matt
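Roughly, the flow would look like this (ValidateRecord routes each record to its valid or invalid relationship based on the configured schema):

```
upstream --> ValidateRecord --> (valid)   --> PublishKafkaRecord
                                (invalid) --> LogAttribute or auto-terminate
```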
12-04-2019
07:34 AM
1 Kudo
@GKrishan You want to be very careful when setting safety valves in CM to override existing default property values in NiFi files. @wengelbrecht's screenshot shows creating a safety valve that would override "java.arg.2" in the NiFi bootstrap.conf file with a new value of "Xmx1024m". The problem here is that "java.arg.2" is used to set Xms, while "java.arg.3" is used to set Xmx. So you would end up with two properties defining Xmx and no property defining Xms. Below is a sketch of override safety valves in CM for both Xms and Xmx (the original screenshot did not carry over). Hope this helps, Matt
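A sketch of what the two safety-valve overrides should produce in bootstrap.conf (the heap sizes here are only example values; size them for your environment):

```
java.arg.2=-Xms1024m
java.arg.3=-Xmx2048m
```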
12-03-2019
12:06 PM
1 Kudo
@hhibigdata I am not clear on your setup based on your comments: "I have to configure one more NiFi Cluster (Standalone)?" and "My NiFi Cluster is 4 (3 clustering, 1 standalone)?"

The list-based processors are not cluster friendly because the non-NiFi protocols they are built on are not cluster friendly. All this means is that these processors must be configured in your NiFi cluster with an "Execution" of "Primary Node" so that only one node is ever running them at a time. You should not have two different NiFi installs. In between the ListSFTP and FetchSFTP processors you should be redistributing the listed files via the load-balanced strategy options on the connection.

NiFi clusters require ZooKeeper, and ZooKeeper requires quorum, meaning you should have an odd number of ZK nodes (3 or 5 recommended). This same ZK will also be used to store cluster state for these non-cluster-friendly processors, so that when the primary node changes, the new node will pull the last known state from ZK and the list-based processors continue listing from where the previously elected primary node left off.

So two things I suggest you check: 1. That the ZooKeeper "Connect String" is correct in your state-management.xml "zk-provider". It should be a comma-separated list of 3 to 5 ZK <hostname>:<port> entries. 2. That "nifi.zookeeper.connect.string=" has been properly set up in the nifi.properties file. It should be a comma-separated list of 3 to 5 ZK <hostname>:<port> entries. Generally both use the same ZK connect string and same ZK root node.

Hope this helps, Matt
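A sketch of both settings using hypothetical hostnames (they should normally be the same connect string):

```
# nifi.properties
nifi.zookeeper.connect.string=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181

# state-management.xml, inside the "zk-provider" cluster provider
<property name="Connect String">zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</property>
```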
12-03-2019
10:34 AM
1 Kudo
@emanueol The intent of templates is to allow users to create reusable dataflows or distribute dataflows to another NiFi installation. Since you cannot have more than one component with the same UUID, the UUIDs are randomized both when creating the template and each time the components from a template are instantiated on the canvas. The best way to identify your component in the XML is by uniquely naming your components. By default a component's name will be the same as the component type; however, users can modify the name to whatever they like. Hope this helps, Matt
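For illustration only, a heavily abbreviated fragment of template XML showing a randomized id alongside a user-assigned name (the id and name values are made up, and the element structure is simplified):

```xml
<processors>
  <id>0a1b2c3d-016e-1000-8000-000000000000</id>
  <name>Fetch Daily Orders</name>
</processors>
```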