Member since: 07-30-2019
Posts: 3391
Kudos Received: 1618
Solutions: 1000
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 275 | 11-05-2025 11:01 AM |
| | 163 | 11-05-2025 08:01 AM |
| | 496 | 10-20-2025 06:29 AM |
| | 636 | 10-10-2025 08:03 AM |
| | 403 | 10-08-2025 10:52 AM |
06-08-2017
12:17 PM
1 Kudo
@Anishkumar Valsalam There are two parts that must both succeed for a user to access NiFi:

1. User authentication: In your case, you are using LDAP to authenticate your users. NiFi's login-identity-providers.xml file is used to configure the ldap-provider. NiFi offers two supported configurable "Identity Strategy" options: USE_DN (the default) and USE_USERNAME. With USE_DN, the full DN returned by LDAP after a successful authentication is used. With USE_USERNAME, the username entered at login is used. Whichever strategy is chosen, the resulting value is passed through any configured "Identity Mapping Properties" in NiFi before the mapped value is handed to part two. (Review the LDAP settings and Identity Mapping Properties sections in the NiFi Admin Guide for more details on setup.)

2. User authorization: In your case, you are using Ranger for user authorization (the default is NiFi's file-based authorizer). The final value derived from step one above is passed to the configured authorizer to determine which NiFi resources that authenticated user has been granted access to.

Based on your output above, you appear to have two options for matching your authenticated value with your LDAP-synced user in Ranger:

1. Configure an "Identity Mapping Property" in NiFi that extracts only the CN= value from the entire returned DN. Based on the DN pattern you shared, your pattern mapping would look like this:

nifi.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?), OU=(.*?), OU=(.*?), DC=(.*?), DC=(.*?), DC=(.*?)$
nifi.security.identity.mapping.value.dn=$1

This will return just "anish" from the DN, and that is what will be passed to the authorizer.

2. Change your "Identity Strategy" configuration in your login-identity-providers.xml file to USE_USERNAME. This assumes the username supplied at login matches the LDAP-synced username exactly. Add/modify the following line in your ldap-provider (see the fuller sketch below):

<property name="Identity Strategy">USE_USERNAME</property>

Thanks, Matt
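For reference, a minimal sketch of how the ldap-provider block in login-identity-providers.xml might look with that strategy. The connection details (Url, Manager DN, search base, and search filter) below are placeholders for your environment, not values taken from your setup:

```
<provider>
    <identifier>ldap-provider</identifier>
    <class>org.apache.nifi.ldap.LdapProvider</class>
    <property name="Authentication Strategy">SIMPLE</property>
    <!-- Placeholder connection details; replace with your LDAP environment -->
    <property name="Manager DN">CN=nifi-svc,OU=ServiceAccounts,DC=example,DC=com</property>
    <property name="Manager Password">********</property>
    <property name="Url">ldap://ldap.example.com:389</property>
    <property name="User Search Base">OU=Users,DC=example,DC=com</property>
    <property name="User Search Filter">sAMAccountName={0}</property>
    <!-- Pass the login username (not the full DN) on to the authorizer -->
    <property name="Identity Strategy">USE_USERNAME</property>
    <property name="Authentication Expiration">12 hours</property>
</provider>
```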
06-07-2017
05:50 PM
@Anishkumar Valsalam I asked around, and those quicklink hostnames are set to the hostnames of the hosts you added to Ambari. Your terminal window output above shows you logged into "server1" and not "nifi.server1.com". I could find no way to change them. If you are configuring your nifi.properties file manually, those config changes will be overwritten when you restart NiFi via Ambari. In Ambari, if you configure the nifi.web.https.host= property to a static value of "nifi.server1.com", then every node in your NiFi cluster will try to start with that hostname value unless you create a unique config group in Ambari for each node in your cluster. Wish I could be more help, but it was never the intention of NiFi managed via Ambari to bind to different hostnames than what was provided during your Ambari host registration. I think your only option here is to use config groups, but again that will not change the quicklink URLs. Thanks, Matt
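For illustration only, if you did pin the hostname for one node through a dedicated Ambari config group, the nifi.properties entries that group would manage look along these lines (the hostname is taken from your example; the port is a placeholder):

```
# Hostname and port this node binds to for the HTTPS UI/API
nifi.web.https.host=nifi.server1.com
nifi.web.https.port=9091
```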
06-07-2017
03:28 PM
@Anishkumar Valsalam Standalone NiFi instances have no need to perform any 2-way TLS negotiation. Once you cluster, NiFi nodes need to communicate with each other, and that negotiation uses 2-way TLS. I am not sure where you got your keystore and truststore files from, but you need to verify that the contents of both are correct. The truststore.jks file should contain the necessary trustedCertEntries so that it can trust the client certificate being presented by the other nodes in your cluster. Matt
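One quick way to check, assuming standard JKS files and that Java's keytool is on your path (file names below match the defaults in your setup; you will be prompted for the store passwords):

```
# The keystore should contain a PrivateKeyEntry for this node
keytool -v -list -keystore keystore.jks

# The truststore should contain trustedCertEntry entries covering the
# certificates (or their signing CA) presented by the other nodes
keytool -v -list -keystore truststore.jks
```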
06-07-2017
02:46 PM
@Anishkumar Valsalam The Ambari quicklinks URLs are not driven by any of the NiFi configurations. Those quicklinks are driven by the hostname provided when you initially added hosts to your Ambari managed cluster. Since this is not a NiFi configuration issue, I am not sure where to change these quicklink URL values within Ambari. Matt
06-07-2017
02:29 PM
1 Kudo
@J. D. Bacolod Have you considered using the PutDistributedMapCache and GetDistributedMapCache processors? Have two separate dataflows:

1. The first runs on a cron schedule and is responsible for obtaining the token and writing it to the distributed map cache using the PutDistributedMapCache processor.
2. The second flow does all your other operations using that token. Just before the InvokeHTTP processor, add a GetDistributedMapCache processor that reads the token from the distributed map cache into a FlowFile attribute. You then use that attribute to pass the token in your InvokeHTTP requests (see the sketch below).

One thing to keep in mind is that a new token may be retrieved after a FlowFile has already pulled the old token from the distributed map cache. This would result in an auth failure, so you will want your flow to loop back to the GetDistributedMapCache processor to fetch the latest token whenever your InvokeHTTP processor hits an auth failure. This flow does not keep track of when a token expires, but if you know how long a token is good for, you can set your cron accordingly. Thanks, Matt
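As an illustration, assuming the cache lookup writes the token into a FlowFile attribute named access_token (a placeholder name), you could pass it along by adding a dynamic property on InvokeHTTP, since InvokeHTTP sends its dynamic properties as request headers and evaluates expression language in their values:

```
# Dynamic property added on the InvokeHTTP processor
# (the property name becomes the HTTP request header name;
#  the value supports NiFi Expression Language)
Authorization=Bearer ${access_token}
```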
06-07-2017
01:03 PM
2 Kudos
@Anthony Murphy Is this a NiFi cluster or a standalone NiFi instance? Make sure that the following property is set to "true" in the nifi.properties file on every instance of NiFi:

nifi.flowcontroller.autoResumeState=true

If this property is set to false, it will trigger all components to come up stopped when NiFi is restarted. Thanks, Matt
06-06-2017
09:02 PM
@Alvin Jin My suggestion would be to use something like https://regex101.com/. You can enter your regex and the sample text you want to run it against. Matt
06-05-2017
08:23 PM
@Joshua Adeleke HDF 2.1.4 (essentially HDF 2.1.3, plus the Controller service UI fix) will be out very very soon.
Keep an eye out for it on the https://docs.hortonworks.com/ page. You can then just do an Ambari upgrade from HDF 2.1.3 to HDF 2.1.4. Thanks,
Matt
06-05-2017
02:05 PM
@Kiran Hebbar Hello, your question is not very clear as to what you are looking for. I am going to assume you are asking how to view the metadata currently associated with a FlowFile passing through your NiFi dataflow(s). There are several ways to view this metadata:

1. Right-click on a connection that has queued data and select "List queue" from the context menu. From the new UI that opens, you will see a list of FlowFiles. Click on the icon to the left of any one of the FlowFiles to "view details" of that FlowFile. There you will find an "Attributes" tab that lists all the key/value pairs associated with the FlowFile.
2. Use data provenance to perform a search on FlowFile events. "Data Provenance" can be found under the hamburger menu in the upper right corner of the NiFi UI. Click the search icon to open a "Search Events" UI where you can add criteria to limit the results (provenance returns the 1000 most recent events). From the resulting list, use the same "view details" icon to the left of an event to open a new UI that shows the attributes of your selected FlowFile.
3. Use the LogAttribute processor. Add this processor anywhere in your dataflow. As FlowFiles pass through it, their attributes as they exist at that point will be logged to the nifi-app.log. Keep in mind that this processor can greatly increase the size of your logs and require more space to store them.

If you found this answer addressed your question, please mark it as accepted. Thanks, Matt
06-05-2017
12:40 PM
1 Kudo
@Paula DiTallo 1. Everything you configure in NiFi (processors, connections, input ports, output ports, remote process groups, funnels, controller services, reporting tasks, etc.) is contained within the flow.xml.gz file (by default located in NiFi's conf directory). You can clear the canvas in a couple of ways:

- Select all components on the canvas and press the "delete" key. (Depending on connections, some components may not delete the first time.) All connections must be absent of any queued data, and all processors must be stopped. This method will not delete any controller services, reporting tasks, or imported templates (these must be removed manually).
- Stop NiFi and delete the flow.xml.gz file (see the sketch below). On the next restart, a new blank flow.xml.gz file will be generated. Any FlowFiles that were still queued in NiFi will be deleted during start-up. This method removes all components, including controller services, reporting tasks, and imported templates.

2. Users need to use the "upload template" icon found in the "Operate" palette to the left of the canvas. Once the template has been uploaded, it can be instantiated onto the canvas by dragging the "Template" icon from the top menu bar in the NiFi UI onto the canvas. Thanks, Matt
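For the second method, a minimal sketch assuming a standalone NiFi run from its default install directory (the paths and install location are assumptions about your environment):

```
# Stop NiFi, remove the existing flow definition, then restart with a blank canvas
./bin/nifi.sh stop
rm ./conf/flow.xml.gz
./bin/nifi.sh start
```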