Member since: 07-30-2019
Posts: 3467
Kudos Received: 1641
Solutions: 1018
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 150 | 05-06-2026 09:16 AM |
| | 245 | 05-04-2026 05:20 AM |
| | 236 | 05-01-2026 10:15 AM |
| | 467 | 03-23-2026 05:44 AM |
| | 352 | 02-18-2026 09:59 AM |
06-14-2021
06:20 AM
@Rupesh_Raghani NiFi was not designed to provide a completely blank canvas to each user, and there are important design reasons for this. NiFi runs within a single JVM. All dataflows created on the canvas run as the NiFi service user, not as the user who is logged in. This means that all users' dataflows share and compete for the same system resources, so one user's poorly designed dataflow(s) can impact the operation of another user's dataflow(s). It is therefore important for a user to be able to identify where backlogs may be forming, even if that is occurring in another user's dataflow(s).

With a secured NiFi, authorization policies control what a successfully authenticated user can see and do on the NiFi canvas. While components added to the canvas will always be visible to all users, what is displayed on a component is limited to stats only for unauthorized users (no component names, component types, component configurations, etc.). So an unauthorized user would be unable to see how that component is being used and for what. The unauthorized user would also not have access to modify the component, or to access FlowFiles that traversed those components (unless that data passed through an authorized component somewhere else in the dataflow(s)).

Besides resource usage, another reason users need to see these placeholders for all components is so that users do not build dataflows atop one another. It is common for multiple teams to be authorized to work within the same NiFi, and also common to have some users who are members of more than one team. For those users, it would be very difficult to use the UI if each team's flows were built on top of one another.

The most common setup involves an admin user creating a single Process Group (PG) per team on the root canvas level (the top level, what you see when you first log in to a new NiFi). Each team is then authorized only to their assigned PG.

So when a team1 user logs in, their PG is fully rendered, while non-authorized PGs are present but non-configurable and display no details. team1 is unable to add components to the canvas at this level and must enter their authorized PG before they can start building dataflows. When you enter a sub-PG, you have a blank canvas to work with.

Hope this helps with your query. Matt
06-14-2021
06:03 AM
@midee You could use a RouteOnContent [1] processor to accomplish this. You would create a Java regex that matches only on customfield(s) where there is a string wrapped in quotes. If found, the entire FlowFile is routed to the relationship created from the dynamic property's name.

RouteOnContent configuration (dynamic property "NotNull"):

"customfield_.+?": ".*?",

I noticed in your example you have two customfield entries that do not have "null":

"customfield_10001": "This is required value",
"customfield_10002": "",

Based on the regex above, both of these would match, resulting in the FlowFile being routed to the "NotNull" relationship. If I were to change the second .*? to .+?, then the customfield that contained only quotes would not match (just in case you only want to route when the value is not null and not empty):

"customfield_.+?": ".*?",

versus

"customfield_.+?": ".+?",

If you found this addressed your query, please take a moment to login and click "Accept" on this solution. Thank you, Matt
06-09-2021
06:31 AM
@midee This use case is really not clear to me. The image you shared is the content of a single FlowFile, and that content has numerous "customfield_" fields, with most being "null" and one having a string value. So you are asking that this one FlowFile, with both null and non-null "customfield_" fields, be routed to path A because at least one "customfield_" field has a non-null string? The content would remain unedited. And you want other FlowFiles, where the content contains nothing but "customfield_" fields with null values, routed to path B? The content would remain unedited. Thanks, Matt
06-09-2021
06:16 AM
@myuintelli2021 Let's start with your mapping pattern setup here:

nifi.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?)$
nifi.security.identity.mapping.value.dn=$1
nifi.security.identity.mapping.transform.dn=LOWER

Your node DNs look like this:

CN=nifi4.{valid_domain}.com, OU=NIFI

So if we ran your DN against the pattern's Java regular expression we would see:

- Capture group 1 (.*?) would match nifi4.{valid_domain}.com
- Capture group 2 (.*?) would match NIFI

The value $1 then keeps only what came from capture group 1, so the string that gets passed to the NiFi authorizer would be nifi4.{valid_domain}.com. Your log output now reflects this:

2021-06-08 15:33:19,173 WARN [NiFi Web Server-15] o.a.n.w.s.NiFiAuthenticationFilter Rejecting access to web api: Untrusted proxy nifi3.{valid_domain}.com

The problem you have is that your file-user-group-provider is still using the full DN when setting up your clients and policies for your nodes:

<property name="Initial User Identity 2">CN=nifi2.{valid_domain}.com</property>
<property name="Initial User Identity 3">CN=nifi3.{valid_domain}.com</property>
<property name="Initial User Identity 4">CN=nifi4.{valid_domain}.com</property>

The above lines should now be:

<property name="Initial User Identity 2">nifi2.{valid_domain}.com</property>
<property name="Initial User Identity 3">nifi3.{valid_domain}.com</property>
<property name="Initial User Identity 4">nifi4.{valid_domain}.com</property>

AND in the file-access-policy-provider:

<property name="Node Identity 1">CN=nifi2.{valid_domain}.com</property>
<property name="Node Identity 2">CN=nifi3.{valid_domain}.com</property>
<property name="Node Identity 3">CN=nifi4.{valid_domain}.com</property>

The above needs to change to:

<property name="Node Identity 1">nifi2.{valid_domain}.com</property>
<property name="Node Identity 2">nifi3.{valid_domain}.com</property>
<property name="Node Identity 3">nifi4.{valid_domain}.com</property>

You will need to remove the users.xml and authorizations.xml files again, so that they get recreated on NiFi startup after making these changes. Thank you, Matt
06-08-2021
07:27 AM
@Leopol Welcome to NiFi! The ListFile [1] processor is only designed to create a 0 byte NiFi FlowFile (no content is fetched). This FlowFile simply has a bunch of attributes created on it that can be used later to actually retrieve the content via the FetchFile [2] processor.

The combination of these two processors allows NiFi to spread the heavy work across multiple nodes in a cluster when the source of the data may not be cluster friendly (for example, a remote disk mounted to all nodes in a NiFi cluster). The ListFile processor would be configured to execute on "Primary Node" only, and its success relationship would be routed via a connection to the FetchFile. That connection would be configured to load balance the 0 byte FlowFiles produced by ListFile. Then the FetchFile processors executing on all nodes would get the now-distributed FlowFiles and fetch the content. There are other similar list/fetch combinations.

Since you have left the FetchFile processor out of your dataflow, you are not passing any content to the UnpackContent processor, thus resulting in the exception you are seeing. In that exception you will see details on the FlowFile it is trying to unpack:

StandardFlowFileRecord[uuid=cfe7807c-d6ad-4127-b779-75b2f57c0ba6,claim=,offset=0,name=data.zip,size=0]

You'll notice the "size=0", which means it is 0 bytes. That is expected, since you have not fetched the content for this file yet.

[1] https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.13.2/org.apache.nifi.processors.standard.ListFile/index.html
[2] https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.13.2/org.apache.nifi.processors.standard.FetchFile/index.html

If you found this helped with your query, please take a moment to login and click "Accept" on this solution. Thank you, Matt
06-08-2021
07:08 AM
@AnkushKoul You would need to do this through some other monitoring flow. When a NiFi component encounters a failure, it will produce a bulletin, which correlates to an ERROR log entry. NiFi has a SiteToSiteBulletinReportingTask [1] which can be set up to send these produced bulletins over Site-To-Site (S2S) to another NiFi (or this same NiFi) as FlowFiles, which can then be parsed via a dataflow and notifications sent out via email.

[1] https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-site-to-site-reporting-nar/1.13.2/org.apache.nifi.reporting.SiteToSiteBulletinReportingTask/index.html

If you found this addressed your query, please take a moment to login and click "Accept" on this solution. Thank you, Matt
06-08-2021
06:41 AM
@ang_coder The RouteOnAttribute processor establishes a NEW relationship for each dynamic property you add. If your intent is that a single FlowFile must satisfy all conditions to route on, then you should have just one NiFi Expression Language (NEL) statement that covers all conditions, resulting in a single true or false boolean. If you share your two statements, I'd be happy to help you construct a single NEL statement. It would be structured so that every condition must evaluate to true before the FlowFile is routed to that dynamic property's relationship.

If you found this addressed your query, please take a moment to login and click "Accept" on this solution. Thank you, Matt
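As a sketch of the shape such a statement takes, a single dynamic property value can chain conditions with and(); the attribute names and thresholds below are hypothetical placeholders, not from the original question:

```
${filename:endsWith('.csv'):and(${fileSize:gt(1024)}):and(${mime.type:equals('text/csv')})}
```

Only when every chained condition evaluates to true is the FlowFile routed to that property's relationship; otherwise it goes to the unmatched relationship.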
06-08-2021
06:07 AM
@techNerd The PutSFTP processor contains a relevant configuration property. Do you have it set to false on the particular PutSFTP processor throwing the exception? Thanks, Matt
06-07-2021
08:34 AM
@Acbx It looks like your CSV uses commas as the field delimiter. The solution I provided parses the entire file line by line and changes all "." to ",". So I am guessing that you have other places within your CSV that also had ".", thus creating the additional 5 field columns.

Are you trying to create a new column for cents? Is that why you are changing 109.29 to 109,29? If you are not looking for a new column, how will the downstream system parse this edited CSV now that you have added a new comma in there?

You could write a more specific Java regular expression in the Search Value to match only on column number X (the money column), and then use Replacement Strategy "Regex Replace" to edit it. Let's assume the money column was column number 5, and wrap the value in quotes once converted from 109.29 to 109,29 so it is not treated as two columns later on:

Search Value: ^(.*?),(.*?),(.*?),(.*?),(.*?),(.*?)$
Replacement Value: $1,$2,$3,$4,"${'$5':replace(".",",")}",$6

The above would manipulate column 5 only and change 109.29 into "109,29".

Hope this helps you, Matt
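As a sanity check, here is a minimal Python sketch of that search/replace logic against a hypothetical 6-column row; re.sub with a callback stands in for NiFi's "Regex Replace" strategy and the NEL replace() call:

```python
import re

# Hypothetical 6-column CSV row; column 5 holds the money value.
row = 'a,b,c,d,109.29,f'

# Same search pattern as the ReplaceText Search Value above.
pattern = r'^(.*?),(.*?),(.*?),(.*?),(.*?),(.*?)$'

def fix_money(m):
    # Mirror ${'$5':replace(".",",")} on capture group 5, quoting the
    # result so the new comma is not read as a field separator.
    cols = list(m.groups())
    cols[4] = '"' + cols[4].replace('.', ',') + '"'
    return ','.join(cols)

print(re.sub(pattern, fix_money, row))  # a,b,c,d,"109,29",f
```

Only the fifth capture group is touched, so periods elsewhere in the row are left alone.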
06-07-2021
08:05 AM
@myuintelli2021 I noticed a comment in another post from you: "I am aware that there are 3 TLS certificates (one for each server) stored in keystore and 1 self-signed CA (stored in truststore) for nifi cluster."

The NiFi keystore used on each node MUST meet the following minimum criteria:

- It must contain ONLY 1 PrivateKeyEntry. Having more than 1 PrivateKeyEntry will not work, as NiFi will not know which one to use.
- The DN used in the PrivateKeyEntry must not contain wildcards. Since the NiFi certificate is used for ClientAuth, the PrivateKeyEntry DN is what is presented to identify the node. Many authorizers will not support client names with wildcards, plus it is not advisable security-wise.
- The PrivateKeyEntry must have an Extended Key Usage (EKU) that supports both clientAuth and serverAuth.
- The PrivateKeyEntry must have at least 1 SAN entry that matches the hostname of the server on which the keystore is being used.

Assuming you used the NiFi CA toolkit to build your keystore and truststore files, you are good here. I am just adding this detail in case you switch at some point to using private or publicly signed certificates. Thanks, Matt