Member since: 07-30-2019
Posts: 3427
Kudos Received: 1632
Solutions: 1011
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 85 | 01-27-2026 12:46 PM |
| | 491 | 01-13-2026 11:14 AM |
| | 1028 | 01-09-2026 06:58 AM |
| | 916 | 12-17-2025 05:55 AM |
| | 977 | 12-15-2025 01:29 PM |
09-27-2017
01:10 PM
@pawan soni Is the UI of the NiFi instance running on Node 1 reachable via port 9090? The RPG reports some communication issues there, which may simply have been the result of the node 1 restart. The invalid state message indicates that your Remote Process Group is Enabled while the Remote Input Port is stopped (an invalid state for data transfer). Either start the "From File" input port or disable that port in your RPG to get rid of this ERROR. Thanks, Matt
09-26-2017
12:55 PM
@Alvin Jin When you obtain a token, that token is only valid against the specific node that issued it. So if you use token=$(curl -k -X POST --negotiate -u : https://<nifi-node1>:9091/nifi-api/access/kerberos) then that token can only be used to access NiFi endpoints on nifi-node1. You would need to obtain a different token for node2, node3, etc. Also keep in mind that NiFi will only continue to accept a token for the configured expiration time; the default is 12 hours, as you see in the kerberos-provider configuration. After expiration, a new token will be needed. Thanks, Matt
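The per-node token behavior described above can be sketched as a small loop. This is only an illustration: the node hostnames are placeholders, and the actual token requests (which require a valid Kerberos ticket) are left commented out.

```shell
#!/bin/sh
# Sketch: NiFi access tokens are node-specific, so a token must be
# requested from each node individually. Hostnames are placeholders;
# the /access/kerberos endpoint and port 9091 come from the post above.
NODES="nifi-node1 nifi-node2 nifi-node3"

for node in $NODES; do
  url="https://${node}:9091/nifi-api/access/kerberos"
  # Uncomment to actually request and use a token on a real cluster:
  # token=$(curl -k -X POST --negotiate -u : "$url")
  # curl -k -H "Authorization: Bearer $token" "https://${node}:9091/nifi-api/flow/about"
  echo "would request token from: $url"
done
```

Each iteration's token would only be honored by the node named in that iteration's URL.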
09-25-2017
11:36 AM
@James V Can you post the entire verbose output of both your Keystore and Truststore?
09-22-2017
03:02 PM
@Saikrishna Tarapareddy NiFi's authentication and authorization control which users can access NiFi's various features and components. All the NiFi components added to the canvas of a NiFi instance are executed by the user who owns the NiFi service, not the user who is currently logged in. So you need to make sure the target directory your PutFile processor is writing to has the necessary permissions set on it to allow the NiFi service/process user to write to it. You can see which user owns the NiFi process by running the following command (assuming you are running on a Linux OS): ps -ef | grep nifi Thanks, Matt
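A minimal sketch of the check described above. The `TARGET_DIR` value is a placeholder; note this tests writability for the *current* user, so on a real node you would run it as the NiFi service user (e.g. via `sudo -u <nifi-user>`), after identifying that user with `ps -ef | grep nifi`.

```shell
#!/bin/sh
# Sketch: verify that PutFile's target directory is writable.
# TARGET_DIR is a placeholder; defaults to /tmp for illustration.
TARGET_DIR="${TARGET_DIR:-/tmp}"

if [ -d "$TARGET_DIR" ] && [ -w "$TARGET_DIR" ]; then
  echo "writable by $(id -un): $TARGET_DIR"
  result=ok
else
  echo "NOT writable by $(id -un): $TARGET_DIR"
  echo "fix with e.g.: chown <nifi-user> $TARGET_DIR (or adjust permissions with chmod)"
  result=denied
fi
```

If the check fails for the service user, PutFile will route FlowFiles to failure until the directory permissions are corrected.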
09-21-2017
05:00 PM
@James V The keystore you are using that was derived from your CA should contain only a single "PrivateKeyEntry". That "PrivateKeyEntry" should have an EKU that authorizes its use for both clientAuth and serverAuth. (Based on the above, the EKU looks correct.) The issuer listed for that PrivateKeyEntry should be the DN of your CA. If the issuer is the same as the owner, it is a self-signed cert. This typically means you did not install the response you got back from your CA. You should have provided your CA with a CSR (certificate signing request), for which you then received a response. The truststore should not contain any PrivateKeyEntries. It should contain one to many "TrustedCertEntries". There should be a trustedCertEntry for every CA that signs any certificate used anywhere to communicate with this NiFi. TrustedCertEntries are nothing more than public keys. Thanks, Matt
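The entry-type rules above can be sanity-checked by counting entry types in a `keytool -list -v` listing. This is a sketch: the canned sample strings below only approximate what a keytool listing looks like and stand in for real output you would generate with `keytool -list -v -keystore keystore.jks`.

```shell
#!/bin/sh
# Sketch: count entry types in a keytool listing. A proper keystore
# should show exactly one PrivateKeyEntry; a proper truststore should
# show only trustedCertEntry lines (one per signing CA).
count_entries() {
  # $1 = entry type to count; reads a keytool listing on stdin
  grep -c "$1"
}

# Canned stand-ins for real keytool output (assumption, illustration only):
sample_keystore="Entry type: PrivateKeyEntry"
sample_truststore="Entry type: trustedCertEntry
Entry type: trustedCertEntry"

keys=$(printf '%s\n' "$sample_keystore" | count_entries PrivateKeyEntry)
trusted=$(printf '%s\n' "$sample_truststore" | count_entries trustedCertEntry)
echo "PrivateKeyEntry: $keys, trustedCertEntry: $trusted"
```

On a real node you would pipe the actual `keytool -list -v` output into `count_entries` instead of the samples, and expect 1 for the keystore and 0 PrivateKeyEntries in the truststore.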
09-19-2017
02:09 PM
@sally sally By setting your minimums (Min Num Entries and Min Group Size) to some large value, FlowFiles that are added to a bin will not qualify for merging right away. You should then set "Max Bin Age" to the amount of time you are willing to allow a bin to hang around before it is merged, regardless of the number of entries in that bin or that bin's size. As far as the number of bins goes, a new bin will be created for each unique filename found in the incoming queue. Should the MergeContent processor encounter more unique filenames than there are bins, it will force merging of the oldest bin to free a bin for the new filename. So it is important to have enough bins to accommodate the number of unique filenames you expect to pass through this processor during the configured "Max Bin Age" duration; otherwise, you could still end up with 1 FlowFile per merge. Thanks, Matt
09-19-2017
01:01 PM
1 Kudo
@David Miller NiFi's default file-based authorizer:

Advantages:
- Supports user groups. (This can make setting up authorizations for a team a lot less cumbersome.)
- Integrated within NiFi, so no need to worry about connectivity issues with an external service.

Disadvantages:
- There is currently no way to sync users with LDAP. Users must be added manually.

Ranger-based authorizer:

Advantages:
- Ranger can be set up to sync users from LDAP.
- Authorizing new users does not require the authorization admin to have access to NiFi's UI.

Disadvantages:
- Ranger user groups are not supported yet. (Each and every user must be added to any required policy.)

Here is a helpful link that maps Ranger policies to NiFi's default user authorizations: https://community.hortonworks.com/content/kbentry/115770/nifi-ranger-based-policy-descriptions.html

-----

You are correct that the most common approach to user/team-managed authorization is through the use of unique process groups added at the root canvas level. Sub-process groups by default inherit their access policies from the parent process group. The only thing to be aware of is the use of NiFi's Site-To-Site (S2S) capability. Site-To-Site remote input and output ports must be added at the root canvas level. So when it comes to using S2S to receive or send data from a NiFi, you would need an admin-level user who has the ability to add these components to the root canvas level for your users and connect them to the process group(s) that your users/teams are authorized for. The other side of an S2S connection is a Remote Process Group (RPG). These RPGs can be added at any level (sub-process group) in a dataflow, so no special considerations are needed there. A typical approach might be to create a remote input port for each team (process group) and connect that port to the team's assigned process group. Once inside the team's group, a routing processor could be shared by all sub-teams/users to direct a particular feed of incoming S2S data to a particular sub-process group. Teams will still need to work with admins to authorize remote NiFi instances to connect to these ports, so it cannot be completely team-managed after creation. Thanks, Matt
09-18-2017
06:24 PM
@Sravanthi Bellamkonda Was my explanation helpful in addressing this specific question? If so, please take a moment to mark this answer as accepted to close out this thread. Thank you, Matt
09-15-2017
08:39 PM
1 Kudo
In order to have the listing start over again, you would need to perform the following: 1. Open the "Component State" UI by right-clicking on the ListHDFS processor and selecting "View state". 2. Within that UI you will see a blue "Clear state" link, which will clear the currently retained state.
09-15-2017
01:42 PM
1 Kudo
@Jon Rodriguez Breton There are no dedicated processors for removing cached entries from the distributed map cache. You can try using the "Age Off Duration" property in the DetectDuplicate processor, or use a scripting processor in NiFi to execute a script that clears the cache. The following Jira covers this missing processor and also provides a sample template: https://issues.apache.org/jira/browse/NIFI-4173