Member since: 07-30-2019 | Posts: 3123 | Kudos Received: 1563 | Solutions: 907
My Accepted Solutions

Title | Views | Posted
---|---|---
 | 220 | 12-13-2024 10:58 AM
 | 329 | 12-05-2024 06:38 AM
 | 272 | 11-22-2024 05:50 AM
 | 235 | 11-19-2024 10:30 AM
 | 205 | 11-14-2024 01:03 PM
11-13-2024
09:12 AM
@Armel316 Let's first discuss what needs to happen for a secured NiFi to connect successfully with a secured NiFi-Registry.

When NiFi connects to the NiFi-Registry client URL, it uses either the keystore and truststore configured in the NiFi-Registry Client's StandardRestrictedSSLContextService within NiFi, or the keystore and truststore set in nifi.properties when no StandardRestrictedSSLContextService was configured in the NiFi-Registry Client. A mutual TLS handshake is then attempted between NiFi and NiFi-Registry. NiFi-Registry will "WANT" the client (NiFi) to provide a clientAuth certificate. If one is not provided, NiFi-Registry proceeds using the anonymous user (the anonymous user only has read on public buckets, which aligns with what you shared from developer tools). So an unsuccessful mutual TLS handshake is most likely your current issue.

To answer the likely next question: if developer tools show "read" on the bucket, why does the NiFi UI not show the bucket? Because the UI you opened was for starting version control on a process group on the NiFi canvas. That UI only shows buckets for which the user identity currently authenticated into NiFi is authorized read and write.

Next question: my NiFi user is authorized read and write on the bucket in NiFi-Registry, so why is the bucket not showing? NiFi authenticates with NiFi-Registry via a mutual TLS handshake. The client/user identity derived from the clientAuth certificate DN of the NiFi node is the identity passed to NiFi-Registry. Assuming the mutual TLS handshake is successful, the node user identity must be authorized "read" on all buckets and "read, write, and delete" on proxy user requests. This allows the node to proxy requests on behalf of the user authenticated in NiFi. So only the buckets for which the authenticated user identity in NiFi has been authorized read, write, and delete within NiFi-Registry will be shown in the list.
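The fallback behavior described above (no client certificate, so the request proceeds as the anonymous user, which sees only public buckets) can be sketched as follows. This is an illustrative model only, not actual NiFi-Registry code; the field names and structures are assumptions:

```python
# Hypothetical sketch of the access logic described above -- NOT actual
# NiFi-Registry code. Bucket fields and identity strings are illustrative.

def resolve_identity(client_cert_dn):
    """Identity NiFi-Registry acts as after the TLS handshake."""
    if client_cert_dn:           # client presented a clientAuth certificate
        return client_cert_dn    # e.g. "CN=nifi-node1, OU=NIFI"
    return "anonymous"           # no client cert: fall back to anonymous

def visible_buckets(identity, buckets):
    """Buckets the identity can see: public ones, plus those it can read."""
    if identity == "anonymous":
        return [b["name"] for b in buckets if b["public"]]
    return [b["name"] for b in buckets
            if b["public"] or identity in b["readers"]]
```

With a failed mutual TLS handshake, `resolve_identity(None)` yields the anonymous user, matching the "read on public buckets only" behavior seen in developer tools.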
Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
11-12-2024
10:46 AM
@s198 I really don't know anything about your ConsumeASB processor (it is not part of the Apache NiFi distribution). What does it do and how does it do it?

The InvokeHTTP processor is used for interacting with HTTP endpoints. Are you able to read from your ASB endpoint via HTTP from the command line (outside of NiFi, via curl for example)? What does that HTTP request look like from the command line? Have you looked at the ConsumeAzureEventHub processor to see if it can accomplish what you need here?

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
11-12-2024
06:17 AM
@ZNFY Since you are exporting a flow definition of a process group, you'll need to use the MiNiFi toolkit to transform it into the proper format that MiNiFi can load. The MiNiFi toolkit can be downloaded from https://nifi.apache.org/download/ (select "MINIFI" and click the download link for the Toolkit). Execute:

./minifi-toolkit/bin/config.sh transform-nifi <exported flow definition> flow.json.raw

Then edit the flow.json.raw file and set the following property at the start of the file (the value cannot be 0):

"maxTimerDrivenThreadCount":5

Now you can start your MiNiFi and it will create the flow.json.gz as it starts.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
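The hand edit to flow.json.raw can also be scripted. A minimal sketch, assuming the file is plain JSON with a top-level `maxTimerDrivenThreadCount` key as described above (function names are my own, not part of the toolkit):

```python
import json

def patch_flow(flow, count=5):
    """Set maxTimerDrivenThreadCount in a parsed flow; must not be 0."""
    if count <= 0:
        raise ValueError("maxTimerDrivenThreadCount cannot be 0")
    flow["maxTimerDrivenThreadCount"] = count
    return flow

def patch_file(path, count=5):
    """Apply the patch in place to the flow.json.raw the toolkit produced."""
    with open(path) as f:
        flow = json.load(f)
    with open(path, "w") as f:
        json.dump(patch_flow(flow, count), f)
```

After running `patch_file("flow.json.raw")`, MiNiFi should be able to load the file and generate flow.json.gz on startup.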
11-04-2024
06:26 AM
1 Kudo
@ehsan125 What version of Java is your NiFi using? This may be related to: https://issues.apache.org/jira/browse/HADOOP-19212

You could try adding a new java.arg to the NiFi bootstrap.conf file as below to see if it helps:

java.arg.manager=-Djava.security.manager=allow

Any modification to the bootstrap.conf file requires a NiFi restart to take effect.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
11-04-2024
06:09 AM
2 Kudos
@shiva239 You can create an Apache NiFi Jira in the Apache community to highlight this new feature and request modification of the existing processor to support it: https://issues.apache.org/jira/ The more detail you provide in your Jira, the better the chance someone in the community will take this on.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
11-04-2024
05:59 AM
@cadrian90 Check the nifi-app.log for the exception to see if there is a stack trace with it that provides more detail around the ERROR. Also try the following:

1. Conflict Resolution = RENAME
2. Remote Path = try leaving this blank, or set it to just "testbucket", since the connection to your FTP server drops you into /home/adrian/minio as the base directory.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
11-01-2024
01:36 PM
1 Kudo
@nifier I am not very clear on your use case. From your dataflow it appears some external source is making a REST API call to the endpoint created by the HandleHTTPRequest processor. The FlowFile produced by the HandleHTTPRequest processor contains the information necessary to identify which file needs to be fetched from the local NiFi host's filesystem. What other info are you exposing through your REST API request to the HandleHTTPRequest processor?

The FetchFile processor is the one producing the read permission exception, correct? All components added to the NiFi canvas execute as the NiFi service user. This means the NiFi service user needs to be authorized to read the local files in order to ingest them. The local filesystem NiFi processor components do not provide an option to execute as another user.

Also, your use case feels a bit dangerous from a security standpoint. You are exposing a REST API endpoint that multiple users could potentially reach to fetch files, and I see no protection built into your dataflow.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
11-01-2024
08:26 AM
@HenriqueAX It is safe to restart the NiFi service without encountering any data loss. NiFi is designed to protect against data loss for the FlowFiles traversing the connections between processor components on the NiFi canvas. FlowFiles are persisted to disk: content is stored in content claims within the "content_repository", and the metadata/attributes associated with a FlowFile are stored in the "flowfile_repository". These repositories should be protected against loss through RAID storage or some other form of protected storage.

When a processor is scheduled to execute, it begins processing a FlowFile from an inbound connection. Only when the processor has completed execution is the FlowFile moved to one of the processor's outbound relationships. If you shut down NiFi, or NiFi dies abruptly, then upon restart FlowFiles are loaded into their last known connection and execution on them starts over at that processor. There are race conditions in which data duplication could occur (NiFi happens to die just after processing of a FlowFile is complete, but before it is committed to the downstream relationship, resulting in the FlowFile being reprocessed by that component). But this only matters where a specific processor writes content out external to NiFi, or when NiFi is ingesting data in some scenarios (consuming from a topic and dying after consumption but before the offset is written, resulting in the same messages being consumed again).

With a normal NiFi shutdown, NiFi has a configurable shutdown grace period. During that grace period NiFi no longer schedules processors to execute new threads, and waits up to the configured grace period for existing running threads to complete before killing them.

IMPORTANT: Keep in mind that each node in a NiFi cluster executes the dataflows on the NiFi canvas against only the FlowFiles present on that individual node. One node has no knowledge of the FlowFiles on another node.

NiFi also persists state (for those components that use local or cluster state) either in a local state directory or in ZooKeeper for cluster state. Even in a NiFi cluster some components still use local state (example: ListFile). So protecting the local state directory via RAID storage or other protected storage is also important. Loss of state would not result in data loss, but rather the potential for a lot of data duplication through ingestion of the same data again (depending on the processors used).

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
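The duplication race described above amounts to at-least-once semantics: a FlowFile is removed from its connection only after processing completes, so a crash between the two replays it. A minimal sketch of that commit ordering (the queue and names are illustrative, not NiFi internals):

```python
def run_once(queue, processed, crash_before_commit=False):
    """Process the head FlowFile; commit (pop) only after processing.

    If the process dies after the side effect but before the commit,
    the FlowFile stays in its last known connection and is replayed
    on restart -- possible duplication, but no data loss.
    """
    flowfile = queue[0]            # peek; do not remove yet
    processed.append(flowfile)     # side effect (e.g. write externally)
    if crash_before_commit:
        return                     # simulate abrupt death: no commit
    queue.pop(0)                   # commit: remove from the connection
```

Running this with a simulated crash and then a "restart" shows the same FlowFile processed twice while never being lost from the queue.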
10-31-2024
06:33 AM
1 Kudo
@drewski7 While you added the public cert for your NiFi8444 to the truststore used by the NiFi8443 StandardRestrictedSSLContextService, did you do the same in reverse? Does your StandardRestrictedSSLContextService also include the keystore? The keystore contains the PrivateKey that is used in the mutual TLS exchange with NiFi8444. NiFi8443's public cert (or complete trust chain) needs to be added to the truststore configured in the nifi.properties file on NiFi8444.

You'll also want to look at the nifi-user.log on NiFi8444 to see the full exception thrown when the NiFi8443 reporting task is trying to retrieve the Site-to-Site (S2S) details. Identities will be manipulated by matching identity mapping patterns set up in the nifi.properties file, so you'll want to verify that also.

Additionally, are you still using the Single-User provider on NiFi8444 along with the NiFi auto-generated keystore and truststore? (I saw CN=localhost in one of your images.) You should create a keystore and truststore with a proper DN and SANs for use with S2S.

Hope this helps with your investigation and troubleshooting.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
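The bidirectional trust requirement above boils down to: each side's truststore must contain the other side's public cert (or its chain). A tiny sketch of that check, with truststores modeled as sets of trusted certificate subjects (the data structures are illustrative, not how Java truststores actually work):

```python
def mutual_trust_ok(a_truststore, a_cert, b_truststore, b_cert):
    """True only if each side trusts the other's certificate.

    a_truststore/b_truststore: sets of trusted cert subjects;
    a_cert/b_cert: each node's own certificate subject.
    """
    return b_cert in a_truststore and a_cert in b_truststore
```

If either containment fails (for example, NiFi8443's cert was never added to NiFi8444's truststore), the mutual TLS handshake fails even though the other direction is configured correctly.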
10-31-2024
06:07 AM
2 Kudos
@SS_Jin Another option is to use the NiFi Expression Language (NEL) function "literal()" in the NEL statement:

${myattr:append(${literal('$$$')}):prepend(${literal('$$$')})}

This removes the need to make sure you are using the correct number of "$" characters to escape a literal $ in NEL.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt