Member since
07-30-2019
3406
Posts
1622
Kudos Received
1008
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 191 | 12-17-2025 05:55 AM |
| | 252 | 12-15-2025 01:29 PM |
| | 185 | 12-15-2025 06:50 AM |
| | 281 | 12-05-2025 08:25 AM |
| | 471 | 12-03-2025 10:21 AM |
05-20-2021
03:16 PM
@SAMSAL NiFi Site-To-Site (S2S) components perform two things: 1. A background process runs every 30 seconds which connects to the target URL entered (in this case https://localhost:9443/nifi) to retrieve S2S details. These details include how many nodes are in the target NiFi cluster, the hostnames of those target NiFi nodes, the load on those nodes, whether those nodes support HTTP and/or RAW transport protocols, what remote input ports exist that this source node is authorized to see, etc. S2S details are always fetched over HTTP even if you set the transport protocol to RAW. 2. The source NiFi then uses this data to actually send content over S2S to all the target NiFi nodes in a distributed fashion. Since your target is HTTPS, the first thing that needs to happen is a mutual TLS handshake. That means the keystore configured in the SSL Context Service must contain a PrivateKeyEntry with an EKU that supports "clientAuth". The target NiFi, which probably returned an FQDN via those S2S details, will also send its server certificate to the client. That means the truststore configured in your SSLContextService must contain the complete trust chain for that certificate. That server certificate, which comes from the keystore configured in the nifi.properties file on the target NiFi, must contain a single PrivateKeyEntry with an EKU that supports "serverAuth" and must also have a SAN that matches the hostname used to connect. The target NiFi's truststore configured in its nifi.properties file must also contain the complete trust chain for the client certificate presented by the SSLContextService. So if all of the above is properly in place, and I would guess it is since you are trying to S2S back to the same NiFi cluster/instance and are probably using the same keystore and truststore in your SSLContextService as are configured in the nifi.properties file, those files are good.
I would however be concerned with your use of "localhost" as the target URL, because I doubt the server certificate sent in that TLS handshake is going to have a SAN entry that contains "localhost". You should instead provide the actual hostname of the target NiFi. It is fine and common to use localhost in the "Instance URL" field, as that is only used to identify the host that sent the FlowFile to the target input port. The only other statement that stands out to me is "The user created by securing the instance has the policy 'retrieve site-to-site details'." NiFi authentication and authorization are set up to control what users are allowed to do once they access a secured NiFi UI. The components added by an authorized user do not execute as that authenticated user. All components execute as the NiFi service user. In the case of this S2S reporting task, it is executing as the NiFi service user but authenticating to the target through that mutual TLS handshake, which means the DN from that clientAuth certificate is going to be the user that needs to be authorized for both the "retrieve site-to-site details" and "receive data via site-to-site" NiFi authorization policies. I know there is a lot of information here and hope it is clear. If you found this addressed your query, please take a moment to login and click accept on this solution. Thank you, Matt
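To illustrate why "localhost" in the target URL is likely to break the handshake, here is a minimal sketch of the SAN-vs-hostname check a TLS client performs. The SAN values are hypothetical; the real ones come from the server certificate in the target NiFi's keystore.

```python
# Minimal sketch of the hostname check done during certificate verification.
# The SAN entries below are hypothetical examples, not from a real keystore.
def san_matches(hostname, san_entries):
    """Return True if hostname is covered by any SAN entry.
    Supports simple single-label wildcards like '*.example.com'."""
    for san in san_entries:
        if san.startswith("*."):
            suffix = san[1:]  # ".example.com"
            head, sep, rest = hostname.partition(".")
            # A wildcard matches exactly one leading label.
            if sep and "." + rest == suffix:
                return True
        elif san == hostname:
            return True
    return False

sans = ["nifi-node1.example.com", "*.example.com"]  # hypothetical SANs
print(san_matches("nifi-node1.example.com", sans))  # True
print(san_matches("localhost", sans))               # False: handshake fails
```

If the check returns False, the client rejects the server certificate and the S2S connection never gets established, which is why the actual FQDN should be used in the target URL.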
05-20-2021
02:48 PM
1 Kudo
@leandrolinof NiFi Expression Language (NEL) [1] does not read the content of a FlowFile, and the RouteOnAttribute processor never looks at a FlowFile's content. So verify your source FlowFile already has attributes set with valid numeric-only values. Your inbound FlowFile would need to have two attributes on it already: 1. cont 2. CONTADOR Note: NiFi is case sensitive as well. Both of these attributes need to have values assigned to them. The NEL statement you have will return the value assigned to the FlowFile attribute "cont" and check whether it is less than the value assigned to the FlowFile attribute "CONTADOR". If that resolves to "true", the FlowFile will be routed to the connection containing the new dynamically created "CONTINUE" relationship. Otherwise, it will route to the "unmatched" relationship, which you appear to have auto-terminated. [1] https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html If you found this addressed your query, please take a moment to login and click accept on this solution. Thank you, Matt
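As a rough mental model (not NiFi's actual implementation), the routing decision for an expression like `${cont:lt(${CONTADOR})}` behaves roughly like this sketch; attribute names and values here are hypothetical:

```python
# Sketch of what RouteOnAttribute evaluates: it reads FlowFile
# *attributes*, never content. Missing or non-numeric attributes make
# the comparison fail, routing the FlowFile to 'unmatched'.
def route(attributes):
    try:
        cont = float(attributes["cont"])
        contador = float(attributes["CONTADOR"])  # case sensitive!
    except (KeyError, ValueError):
        return "unmatched"
    return "CONTINUE" if cont < contador else "unmatched"

print(route({"cont": "3", "CONTADOR": "10"}))  # CONTINUE
print(route({"cont": "3"}))                    # unmatched (missing attribute)
```

Note how a FlowFile with the attribute `contador` (lowercase) would route to "unmatched", which is the kind of case-sensitivity issue worth checking first.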
05-20-2021
02:33 PM
@P_Rat98 Can you share how you have the ListS3 processor configured? What can you tell us about the file being listed? Is it constantly being updated? Thanks, Matt
05-20-2021
02:29 PM
@ankita_pise Have you tried using the "PostHTTP" [1] processor instead to send your multi-part form data content FlowFile? It would be configured as follows: [1] https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.13.2/org.apache.nifi.processors.standard.PostHTTP/index.html If you found this helped with your query, please take a moment to login and click accept on this solution. Thank you, Matt
05-20-2021
02:20 PM
@Acbx You should be able to do this easily, based on your example, with the ReplaceText processor configured as follows: If you found this addressed your query, please take a moment to login and click accept on the solution. Thank you, Matt
05-20-2021
02:13 PM
@Vinayakmkmishra NiFi FlowFile content claims can contain the content for one to many FlowFiles. A content claim cannot be deleted from the content repository until all FlowFiles that reference that content claim are no longer queued anywhere in the dataflow. So it is possible that a FlowFile with 1 byte of content somewhere in your dataflow(s) could be holding up a claim of a much larger size. You can never expect your content repository usage to match up with the cumulative queued content size reported in the NiFi UI. What is summed up for you in the UI is representative of the FlowFiles still queued throughout your NiFi dataflow(s), not of the size of the many content claims that those various bits of content may exist as part of. You may find the following article about the content repository helpful as well: https://community.cloudera.com/t5/Community-Articles/Understanding-how-NiFi-s-Content-Repository-Archiving-works/ta-p/249418 That being said, there are some known bugs that can prevent content repository cleanup from working, but you have not shared what NiFi version you are using: NIFI-6150, NIFI-6236, NIFI-6846, NIFI-7469, NIFI-7992. I recommend upgrading to the latest NiFi release, which resolves all of the above issues. If you found this addressed your query, please take a moment to login and click accept on all solutions that helped you. Thank you, Matt
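The reference-counting behavior described above can be sketched as follows. This is an illustrative model only, not NiFi's actual implementation, and the FlowFile IDs are hypothetical:

```python
# Illustrative model of why a content claim cannot be reclaimed while
# any queued FlowFile still references it (not NiFi's real code).
class ContentClaim:
    def __init__(self, size):
        self.size = size
        self.referencing_flowfiles = set()

    def can_delete(self):
        # Reclaimable only once no queued FlowFile references the claim.
        return not self.referencing_flowfiles

claim = ContentClaim(size=100 * 1024 * 1024)  # one large shared claim
claim.referencing_flowfiles = {"ff-large", "ff-one-byte"}

claim.referencing_flowfiles.discard("ff-large")
print(claim.can_delete())  # False: a 1-byte FlowFile still pins ~100 MB

claim.referencing_flowfiles.discard("ff-one-byte")
print(claim.can_delete())  # True: claim can now be removed or archived
```

This is why disk usage in the content repository can stay high even when the UI shows only a few bytes queued.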
05-20-2021
01:57 PM
@midee I am not sure you have provided enough detail on what it is you are trying to accomplish. Are you trying to set up your NiFi securely and use an OAuth 2 based provider to authenticate your users into NiFi? If so, you may find this helpful: https://bryanbende.com/development/2017/10/03/apache-nifi-openid-connect If you are trying to interface with an OAuth 2 compliant endpoint via a NiFi dataflow, this may be helpful: https://pierrevillard.com/2017/01/31/nifi-and-oauth-2-0-to-request-wordpress-api/comment-page-1/ If this helped you with your query, please take a moment to login and click accept on this solution. Thank you, Matt
05-20-2021
01:42 PM
@leandrolinof The NiFi MergeContent processor is working exactly as designed. Binary concatenation simply appends the content of one FlowFile to the end of the previous FlowFile's content. You are trying to perform a specially formatted merge, and there is no way to configure the MergeContent processor with custom merge logic. I am not clear on what the content looks like at the various stages of your dataflow. You mention merging FlowFiles from two streams; however, the flow screenshot you shared has 3 flow streams leading to the MergeContent processor. Are you saying the content of a FlowFile down one path is exactly this: Flow 1 (
{"cod": 1}
{"cod": 2}
{"cod": 3}) and on the other path it is exactly this: Flow 2 (
{"error": "error 1"}
{"error": "error 2"}
{"error": "error 3"}) or is it just the JSON objects without the "flow <num> ( )" wrapped around them? I am guessing the above is what is extracted from the InvokeHTTP response content? What does the FlowFile content look like on that third path before it reaches MergeContent? It might be helpful to see examples of the content post-InvokeHTTP on all three paths, and then the exact merged output you would want based on that example. What logic can we get from the above two flows that tells us the first "cod" goes with the first "error"? If we were to split each into three separate FlowFiles, would there be a way to determine which cod should go with which error? The ExecuteSQL processor configuration you shared is using NiFi Expression Language (NEL). NEL does not read from a FlowFile's content. ${cod} and ${erros} will look for attributes with these names on the FlowFile and replace them with the values assigned to those attributes. If those FlowFile attributes do not exist on the FlowFile, NEL will check the variable registry for those attribute strings. Hope this helps give you some direction with your dataflow design journey here. Matt
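To show the kind of positional pairing MergeContent cannot do on its own, here is a hypothetical sketch (e.g. something you might implement in an ExecuteScript step) that pairs the n-th "cod" line with the n-th "error" line from the two example streams. It assumes ordering is guaranteed upstream, which is exactly the open question above:

```python
# Hypothetical custom merge: pair lines from the two streams by position.
# MergeContent cannot be configured to do this; a scripted step would be
# needed, and it only works if upstream ordering is guaranteed.
import json

flow1 = '{"cod": 1}\n{"cod": 2}\n{"cod": 3}'
flow2 = '{"error": "error 1"}\n{"error": "error 2"}\n{"error": "error 3"}'

cods = [json.loads(line) for line in flow1.splitlines()]
errors = [json.loads(line) for line in flow2.splitlines()]

# Merge each pair of objects into one record.
merged = [{**c, **e} for c, e in zip(cods, errors)]
print(merged[0])  # {'cod': 1, 'error': 'error 1'}
```

If positional ordering is not guaranteed, each record would need a shared correlation key before any merge like this is safe.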
05-20-2021
01:10 PM
@Seedy Sorry, I feel like some important details are missing here that would help in giving you the most thorough answer. I understand that your admin user has no issues creating a parameter context and populating that parameter context with parameters; however, a non-admin user is having issues. 1. What permissions have been granted to this user? 2. Where is the user going to create the parameter context? By right-clicking on a Process Group (PG) --> Configure --> Process Group Parameter Context --> Create new parameter context, OR via the NiFi Global menu (upper right corner of the UI) --> Parameter Contexts --> + (click the plus icon to create a new parameter context)? If you are doing this through Configure on a PG, while adding the new parameter context (before hitting apply), can you add a parameter first? Does that parameter stay without an exception? Does the exception only occur when trying to add a parameter to a parameter context that has already been created? There have been numerous issues addressed with parameter contexts between Apache NiFi 1.11.4 and 1.13.2. It appears you may be hitting https://issues.apache.org/jira/browse/NIFI-7725 While I can generate the same exception in 1.11.4 (just not sure if I followed your exact same path to do so), I do not get that same exception in 1.12.1. I recommend upgrading to the latest NiFi release. Earlier I was asking about the PG and sub-PG along with the controller services assigned (no parameters need to be referenced in these controller services) because of another issue I had seen that affected being able to modify and add parameters to a parameter context for non-admin users. That issue was tracked under https://issues.apache.org/jira/browse/NIFI-8419 and is fixed for the future 1.14 release. So while upgrading may solve your specific issue here, you may hit this other issue, which is why I was trying to collect all the details to give you the best and most accurate help. Thank you, Matt
05-20-2021
10:56 AM
@Chakkara More detail would be needed before I know whether I can offer any advice here. Can you share: 1. The PutDatabaseRecord processor configuration 2. Assuming you are using the CSVReader, its configuration 3. The DBCPConnectionPool configuration 4. The complete exception, including any stack trace if one exists, from nifi-app.log 5. Any data manipulations made between your GetFile and PutDatabaseRecord processors 6. A sample input file would also be very helpful. Thanks, Matt