Member since: 07-30-2019
Posts: 3391
Kudos Received: 1618
Solutions: 1001
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 290 | 11-05-2025 11:01 AM |
| | 175 | 11-05-2025 08:01 AM |
| | 157 | 11-04-2025 10:16 AM |
| | 507 | 10-20-2025 06:29 AM |
| | 647 | 10-10-2025 08:03 AM |
08-03-2017
12:55 PM
@Narasimma varman In order to access a secured NiFi's UI, successful user authentication and authorization must occur. In HDF, a NiFi CA is installed that takes care of building valid keystores and truststores for your NiFi nodes, but it does not create user certificates for you. Typically the above error indicates NiFi did not trust the client certificate it was passed, or that a client certificate was not passed at all. I would suggest starting by getting verbose output of your NiFi keystore.jks, truststore.jks, and user's keystore.p12. The verbose output for each of these can be obtained using keytool: ./keytool -v -list -keystore <jks or p12 keystore file>

In the keystore.jks used by the NiFi server, you will see a single entry with two certificates included in it. Specifically, you are looking for the "PrivateKeyEntry". This PrivateKeyEntry will show a user DN (it will be in the form of CN=<server FQDN>, OU=NIFI). You will then see an issuer line, which will also have a DN for the NiFi CA. This PrivateKeyEntry should have an extended key usage that allows the key to be used for both client auth and server auth.

Something else (not related to your issue) I noticed is that your browser URL is "localhost". The NiFi CA will generate a server certificate based on the hostname of the server, not localhost. This will require you to add an exception in your browser at some point. (The cert passed to your browser from your NiFi server will say it belongs to server XYZ, but your browser knows it was trying to connect to localhost, so it appears as a man-in-the-middle type attack: one endpoint using another endpoint's cert.)

In the truststore.jks used on your NiFi servers, you will see a single certificate. It will be a "TrustedCertEntry" for the NiFi CA. The truststore.jks file can contain one to many trusted cert entries. Each trusted cert entry is derived from the public key of a CA or self-signed cert. When a client (a user or another server) negotiates a connection with the server, a TLS handshake occurs. As part of this negotiation, the server expects to receive a client certificate it can trust. If a trusted client cert is not received, the connection is typically closed by the server.

Your client keystore.p12 file will also need to contain a PrivateKeyEntry. In the TLS negotiation that occurs with the server, the DN associated with that PrivateKeyEntry is passed to the server. If that certificate was self-signed, the truststore would need to contain the public key for that certificate as a TrustedCertEntry before that certificate will be accepted for authentication.

Beyond authentication is authorization, but it does not appear you are getting that far yet. Thanks, Matt
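A minimal sketch of the three verbose listings described above, assuming the store files sit in the current directory and keytool is on the PATH (file names come from the post; passwords and paths will differ per install):

```
# Server keystore: expect one PrivateKeyEntry (CN=<server FQDN>, OU=NIFI)
# with an extended key usage covering both clientAuth and serverAuth
keytool -v -list -keystore keystore.jks

# Server truststore: expect a TrustedCertEntry for the NiFi CA
keytool -v -list -keystore truststore.jks

# User/client keystore: expect a PrivateKeyEntry whose DN is presented during TLS
keytool -v -list -keystore keystore.p12 -storetype PKCS12
```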
08-02-2017
01:15 PM
@Hadoop User Please start a new question rather than asking multiple unrelated questions in a single post. This makes it easier for community users to find similar issues. It also helps other members identify unanswered questions so they may address them. This question would likely go unnoticed otherwise. I would need to do some investigation to come up with a good solution, but other community members may have already handled this exact scenario. By starting a new question, all members following the "data-processing", "nifi-processor", or "nifi-streaming" tags will get notified of your question. Thanks, Matt
08-01-2017
04:31 PM
1 Kudo
@Hadoop User The ExtractText processor will extract the text that matches your regex and assign it to an attribute matching the property name on the FlowFile. The content of the FlowFile remains unchanged. You then update the FlowFile's attributes and finally use PutHDFS to write the content (which at this point you have not changed at all) to HDFS. If your intent is to write the modified string to HDFS, you need to update the actual content of the FlowFile, not just create and modify attributes. For that use case, you would want to use the ReplaceText processor instead, configured along the lines of the sketch below. The result would be the actual content of the FlowFile being changed to: [hdfs file="/a/b/c" and' the; '''', "", file is streamed. The location=["/location"] and log is some.log"] Thanks, Matt
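A hedged sketch of a ReplaceText configuration for this use case (the Search Value regex comes from the related answer later on this page; the Replacement Value is illustrative, not from the original post):

```
Search Value:          (\[hdfs.*log"\])
Replacement Value:     <the modified string you want written to HDFS>
Replacement Strategy:  Regex Replace
Evaluation Mode:       Entire text
```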
08-01-2017
03:08 PM
@Foivos A The banner is a NiFi core feature and is not tied in any way to the dataflows you select or build on your canvas. You are correct that the best approach for identifying which dataflows on a single canvas are designated dev, test, or production is through the use of "labels". In a secured NiFi setup, you can use NiFi's granular multi-tenancy user authorization to control which components a user can interact with and view. If you use labels, you should set a policy allowing all users to view that specific label component; then, even if users are not authorized to access the labeled components, they will be able to see why via the label text. Thanks, Matt
08-01-2017
03:00 PM
@Hadoop User Your Java regular expression needs to escape the "[" and "]" since they have reserved meaning in Java regular expressions. Try using the following Java regular expression instead: (\[hdfs.*log"\]) Thanks, Matt
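A minimal sketch of how this regex would sit in an ExtractText dynamic property (the property name "hdfs.entry" is illustrative, not from the original post):

```
# ExtractText dynamic property: the name becomes the attribute name,
# the value is the Java regex with a capture group
hdfs.entry = (\[hdfs.*log"\])
# On a match, ExtractText populates hdfs.entry (first capture group)
# along with indexed variants such as hdfs.entry.0 (the full match)
```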
07-31-2017
05:39 PM
@Alvin Jin I am not familiar with what K8 is...
I suggest starting a new question rather than adding to this existing question, so that it gets full exposure to the community. I would also suggest providing as much detail as you can about the use case behind your question. Thanks, Matt
07-31-2017
05:01 PM
@Alvin Jin The nifi.properties file does not support environment variables. It expects hardcoded values, or it will use default values for some properties in the absence of a configured value. Thanks, Matt
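A minimal illustration using a property known to exist in nifi.properties (the value shown is only an example):

```
# Literal values are read as-is:
nifi.web.http.port=8080

# Environment-variable style references are NOT expanded; a line like the
# following would be treated as the literal string "${NIFI_PORT}":
# nifi.web.http.port=${NIFI_PORT}
```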
07-31-2017
12:42 PM
@Sanaz Janbakhsh You should look in your nifi-user.log file. When you attempt to perform the "List Queue", what log entries do you see? Unfortunately, the attachment you provided does not tell me much, since it does not include the "NiFi Resource Identifier" or the users assigned to each of those policy names. Did you create a policy that uses the "NiFi Resource Identifier" of "/data/*" and assign your single node's DN to it? Another place you could check is the Ranger audit: filter on Result: Denied, try to list your queue, and look for Denied audit lines for any "/data" resource.
In my case, such a Denied entry appeared when I tried to perform List Queue as my user "nifiuser1" while the node had not been properly authorized to READ the data. Ranger reported that there was no policy authorizing my node's DN for the resource listed; the UUID in the resource was the UUID of the processor that owns the connection I was trying to list. Once I added a policy giving the node's DN READ/WRITE on "/data/*", I was able to list and empty the queue. Thanks, Matt
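A hedged sketch of the Ranger policy described above (the policy name is illustrative; the resource and permissions come from the post):

```
Policy Name:     node-data-access          (illustrative)
NiFi Resource:   /data/*
Users:           CN=<node FQDN>, OU=NIFI   (your node's DN)
Permissions:     Read, Write
```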
07-26-2017
02:56 PM
@Richard Corfield The Provenance repo has no impact on the functionality of your dataflow. All the FlowFiles currently queued in your dataflow are tied directly to the FlowFile and Content repositories. The data stored in your provenance repository has a configured lifespan (default 24 hours or 1 GB of disk usage) and should be cleared automatically by NiFi based on those thresholds.
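The retention thresholds mentioned above are controlled in nifi.properties; a sketch showing the stock defaults (confirm the exact values against your own install):

```
nifi.provenance.repository.max.storage.time=24 hours
nifi.provenance.repository.max.storage.size=1 GB
```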
07-26-2017
02:21 PM
1 Kudo
@Jobin George The issue here is caused by the following:
1. Ambari metrics have been enabled.
2. On start of a NiFi node, if Ambari detects that a flow.xml.gz file does not exist, it creates a flow.xml.gz that contains only the AmbariReportingTask, to support enabling Ambari metrics from this NiFi.
3. NiFi is then started and NiFi's normal startup procedure occurs. During that process, NiFi detects that the flow.xml.gz on this new node does not match the flow.xml.gz on the cluster, and the node shuts back down.
Aside from just manually copying the flow.xml.gz from an existing cluster node, another workaround is to make sure the flow.xml.gz file is not there and start the new node manually via NiFi's command line, bypassing the Ambari flow.xml.gz file generation (see the sketch below). Thanks, Matt
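A minimal sketch of that workaround, assuming a stock HDF layout (the paths below are assumptions; adjust to wherever your install keeps conf/ and bin/):

```
# On the new node, remove the Ambari-generated flow.xml.gz (if present)...
rm -f /usr/hdf/current/nifi/conf/flow.xml.gz

# ...then start NiFi directly from the command line, bypassing Ambari
/usr/hdf/current/nifi/bin/nifi.sh start
```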