Member since: 07-30-2019
Posts: 3369
Kudos Received: 1616
Solutions: 997
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 79 | 10-10-2025 08:03 AM |
| | 115 | 10-08-2025 10:52 AM |
| | 92 | 10-08-2025 10:36 AM |
| | 172 | 10-03-2025 06:04 AM |
| | 133 | 10-02-2025 07:44 AM |
05-19-2025
05:22 AM
@BobKing Welcome to the Cloudera Community. It is going to be difficult to determine what is going on here without a sample failing zip file to reproduce with. What can you tell me about these WinZip files? How are they generated? Do they contain any files, or only directories? (NiFi only creates FlowFiles for actual content, so a zip file containing no files, just a bunch of empty directories, would fail to unpack.) Are these multi-part zip files? Thank you, Matt
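A quick way to check whether a suspect archive actually contains file entries (rather than only directory entries) is Python's standard zipfile module. This is a diagnostic sketch, not part of NiFi itself; the path you pass is whatever failing archive you have on hand.

```python
import zipfile

def zip_has_files(path):
    """Return True if the archive contains at least one real (non-directory) entry."""
    with zipfile.ZipFile(path) as zf:
        # Directory entries end with '/'; only non-directory entries
        # carry content that an unpack step could turn into FlowFiles.
        return any(not info.is_dir() for info in zf.infolist())
```

If this returns False for the failing archives, that would be consistent with the directories-only explanation above.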
05-16-2025
01:04 PM
@blackboks Authentication and Authorization happen in two steps in NiFi and NiFi Registry. Group association with users is part of the Authorization step, handled by the configuration in the authorizers.xml file. Authentication is step one, which you have working. At the end of authentication, all that is available and passed on to authorization is the user identity. In your case, "nifi-admin-2@blackboks.ru" is what is being passed to the configured authorizer.

You are most likely using the managed-authorizer, which utilizes the file-access-policy-provider, which in turn has a dependency on one or more configurable user-group-providers (file-user-group-provider, ldap-user-group-provider, composite-user-group-provider, composite-configurable-user-group-provider). These user-group-providers are responsible for establishing which groups the user identity belongs to.

What we can tell from the log output you shared is that your authorizer is unaware of any groups that the user identity "nifi-admin-2@blackboks.ru" belongs to. If the authorizer were aware of any groups associated with this user identity, those groups would have appeared in that log output instead of being blank: identity[nifi-admin-2@blackboks.ru], groups[]

So you'll need to verify the setup in your authorizers.xml and determine which user-group-provider you will use to establish these known user-to-group mappings. The file-user-group-provider would require you to do this manually from within the NiFi UI. Hopefully this helps clarify why you are seeing what you are seeing.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
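For orientation, an authorizers.xml wired this way typically looks roughly like the fragment below. This is an illustrative sketch only; the identity shown is taken from the log output above, and the file paths are the conventional defaults, not verified against this installation.

```xml
<userGroupProvider>
    <identifier>file-user-group-provider</identifier>
    <class>org.apache.nifi.authorization.FileUserGroupProvider</class>
    <property name="Users File">./conf/users.xml</property>
    <property name="Initial User Identity 1">nifi-admin-2@blackboks.ru</property>
</userGroupProvider>
<accessPolicyProvider>
    <identifier>file-access-policy-provider</identifier>
    <class>org.apache.nifi.authorization.FileAccessPolicyProvider</class>
    <property name="User Group Provider">file-user-group-provider</property>
    <property name="Authorizations File">./conf/authorizations.xml</property>
    <property name="Initial Admin Identity">nifi-admin-2@blackboks.ru</property>
</accessPolicyProvider>
```

With the file-user-group-provider, group membership is then managed through the NiFi UI; with an ldap-user-group-provider, it would come from your directory server instead.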
05-14-2025
12:43 PM
@asand3r With Archive disabled, NiFi is no longer tracking the files left in the archive sub-directories. You can remove those files while NiFi is running. Just make sure you don't touch the active content_repository claims. Matt
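A hedged sketch of that manual purge, assuming the default layout where archived claims sit in "archive" sub-directories under each numbered bucket of the content_repository. The path is whatever your nifi.properties points at; verify it before deleting anything.

```python
import os

def purge_archive(content_repo):
    """Delete files under any 'archive' sub-directory of the content repository.

    Active content claims live directly in the numbered bucket directories
    and are left untouched; only files inside 'archive' directories go.
    """
    removed = 0
    for root, dirs, files in os.walk(content_repo):
        if os.path.basename(root) == "archive":
            for name in files:
                os.remove(os.path.join(root, name))
                removed += 1
    return removed
```

Running it with the repository root returns how many archived claims were removed, which is a useful sanity check against disk-usage numbers.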
05-14-2025
11:54 AM
@alan18080 The Single-User-Provider for authentication was not intended for production use. It is a very basic username-and-password authenticator that supports only a single user identity. When you access the UI of a NiFi node, you are authenticating with only that node. The provider generates a client token which your browser holds, and a corresponding server-side token/key held only by the node you authenticated with. This is why you need to use sticky sessions (session affinity) in your load balancer, so that all subsequent requests go to the same NiFi server. There is no option in NiFi that would allow that client JWT to be accepted by all nodes in a NiFi cluster, because the generated JWT is unique to a specific node. Related: NIFI-7246

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
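As one illustration of the session-affinity requirement, an NGINX load balancer in front of the cluster could pin clients to a node with ip_hash. This is a sketch under the assumption that NGINX is the balancer in use; hostnames and ports are placeholders.

```nginx
upstream nifi_cluster {
    ip_hash;  # session affinity: the same client IP always reaches the same node
    server nifi-node1.example.com:8443;
    server nifi-node2.example.com:8443;
    server nifi-node3.example.com:8443;
}
```

Other balancers offer equivalent mechanisms (cookie-based stickiness, source-IP persistence); any of them satisfies the requirement that the JWT return to the node that issued it.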
05-14-2025
10:29 AM
1 Kudo
@asand3r Changing the following property to false turns off archiving: nifi.content.repository.archive.enabled

NiFi does not clean up files left in these directories once archiving is disabled. Since archiving is disabled, the archive code that would scan these directories to remove old archive data is no longer executing. You'll need to manually purge the archived content claims from the archive sub-directories after disabling content_repository archiving. So your two nodes that still have archive data had that data present at shutdown, while the others did not have archive data at shutdown.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
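In nifi.properties the change looks like the excerpt below. The two retention properties shown commented out are the ones that stop having any effect once archiving is off; their values here are placeholders, not recommendations.

```properties
# nifi.properties (excerpt) - illustrative
nifi.content.repository.archive.enabled=false
# With archiving disabled, these no longer apply:
# nifi.content.repository.archive.max.retention.period=...
# nifi.content.repository.archive.max.usage.percentage=...
```

A restart is required for the change to take effect.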
05-14-2025
05:52 AM
@asand3r Need some more detail to provide a good answer here:

- What version of Apache NiFi or Cloudera Flow Management are you using?
- After changing "nifi.content.repository.archive.enabled" to false in the nifi.properties file, did you restart NiFi?
- If you manually inspect the archive sub-directories, do any of them still hold files, or are all of the archive sub-directories within the content_repository empty? If they are empty, then archive clean-up is complete.
- You mention "I've saw messages, that archived data is never cleanup"; can you share the message you are seeing, which I assume is from the nifi-app.log?

Keep in mind that disabling archiving will not prevent the content_repository from filling its disk to 100%. Content claims associated with actively queued FlowFiles within your dataflows on the NiFi canvas will still exist in the content_repository.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
05-14-2025
05:24 AM
2 Kudos
@s198 There is no other processor that provides this same functionality. This Apache community processor is not tested or maintained by Cloudera and thus is not included in the list of Cloudera-supported NiFi processor components. This does not mean the processor has any known issues, but it does mean Cloudera would not be obligated to provide bug fixes if they arise, nor to provide support for this processor component. If this is an important processor for you and you have a Cloudera Flow Management license, I would encourage you to raise this with your Cloudera account owner, requesting that Cloudera add this component to the list of supported components. Making this formal request does not guarantee it will be added, but it gets visibility on the processor for consideration. Thank you, Matt
05-13-2025
12:33 PM
2 Kudos
@s198 I can't think of any NiFi stock processors that would create dynamic attributes, nor can I figure out why you would do this. I understand that you are using these "dynamic" ("${SourceFilePath}/${SourceFile}") attributes downstream in your dataflow, but to do so you are configuring these strings in that downstream processor, meaning you need to know what they will be before even starting your processors. If that is the case, these are not really dynamic, since you need to know what they are to configure your downstream processors. If the following strings exist in all your source JSON, you can just declare them manually with an EvaluateJsonPath processor to get the corresponding values from the FlowFile content: SourceFilePath, SourceFile, SourceFilePattern. Can you share more info about your complete dataflow for better understanding? What is happening downstream of your custom processor? Thank you, Matt
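To illustrate what declaring those three properties in EvaluateJsonPath (with Destination set to flowfile-attribute) would effectively do, here is a rough Python equivalent. The sample content is made up; only the three key names come from the discussion above.

```python
import json

# The dynamic properties you would add to EvaluateJsonPath,
# e.g. SourceFilePath -> $.SourceFilePath, SourceFile -> $.SourceFile, etc.
JSON_KEYS = ["SourceFilePath", "SourceFile", "SourceFilePattern"]

def extract_attributes(flowfile_content):
    """Pull the declared top-level keys out of the JSON content,
    roughly as EvaluateJsonPath would expose them as FlowFile attributes."""
    data = json.loads(flowfile_content)
    return {key: data[key] for key in JSON_KEYS if key in data}
```

The point is that the paths are fixed up front, which is exactly why the attributes are not truly "dynamic".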
05-13-2025
09:02 AM
@noncitizen Can you share your PostHTTP and ListenHTTP processor configurations and scheduling? What is the volume of FlowFiles queued to the PostHTTP? How many PostHTTP processors are sending to the same ListenHTTP? Does the ListenHTTP outbound connection queue fill, resulting in backpressure being applied to the ListenHTTP? Since this is a sporadic issue, I'm trying to get a better idea of the setup and conditions at the time of the issue. Thanks, Matt
05-13-2025
07:01 AM
@melek6199 When you set up an Apache NiFi cluster versus a standalone NiFi instance, the cluster coordinator and ZooKeeper become part of the setup. Since a NiFi cluster is a zero-master cluster, the UI can be accessed from any cluster-connected node. So your user authenticates to the specific node you are accessing, and then that node proxies the user request (initially that would be "access the flow") on behalf of that user to the cluster coordinator, which replicates the request to all connected nodes.

The exception means that the node with the identity derived from certificate DN "CN=vtmrt3anifit04.x.com, OU=NIFI" was not properly authorized to "proxy user requests". All your NiFi node identities must be authorized to "proxy user requests".

While it appears that your NiFi authorizers.xml is set up correctly with your 4 nodes' identities (case sensitivity also correct), I suspect NiFi was already started at least once before it was configured correctly. The "file-access-policy-provider" will only generate the authorizations.xml during NiFi startup if it does NOT already exist; it will not modify an already existing authorizations.xml file. Likewise, the "file-user-group-provider" will only generate the users.xml during NiFi startup if it does not already exist; it also will NOT modify an already existing users.xml file.

So I would inspect the users.xml to make sure it contains all 4 node identities (with correct case) and then verify the authorizations.xml has those nodes properly authorized. Start there, and make sure the above is correct on all 4 nodes.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
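For reference, a correctly populated users.xml contains one entry per node identity, matching each certificate DN exactly in case and spacing. The fragment below is illustrative; the identifier is a placeholder UUID, and only the first DN comes from the exception above.

```xml
<tenants>
    <groups/>
    <users>
        <!-- one <user> entry per cluster node; identity must match the cert DN exactly -->
        <user identifier="00000000-0000-0000-0000-000000000001"
              identity="CN=vtmrt3anifit04.x.com, OU=NIFI"/>
        <!-- ...entries for the other three nodes... -->
    </users>
</tenants>
```

If an entry is missing or miscased, delete (or back up) users.xml and authorizations.xml on all nodes and restart so the providers regenerate them from authorizers.xml, or add the missing identities and policies through the UI.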