Member since: 07-30-2019
Posts: 3414
Kudos Received: 1623
Solutions: 1008
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 415 | 12-17-2025 05:55 AM |
|  | 476 | 12-15-2025 01:29 PM |
|  | 512 | 12-15-2025 06:50 AM |
|  | 393 | 12-05-2025 08:25 AM |
|  | 648 | 12-03-2025 10:21 AM |
08-24-2023
12:45 PM
@mslnrd This is likely caused by referrals. LDAP on port 636 uses referrals, meaning your initial query can be referred across the entire domain tree to multiple LDAP servers. Somewhere within that referral chain, the hostname verification fails. The global catalog port 3269 does not use referrals. I can't speak to which of your LDAPS servers is causing the hostname verification issue within the referrals, but this explains why switching to the secure global catalog port resolved your issue.

Hope this clarifies why the change in port made a difference.

If you found that the provided solution(s) assisted you with your query, please take a moment to log in and click Accept as Solution below each response that helped.

Thank you,
Matt
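For example, a quick way to confirm the behavior difference is to run the same query against both ports with ldapsearch (the hostname, bind DN, and filter below are placeholders for your environment):

```bash
# Standard LDAPS port (636): results may include referrals to other domain controllers
ldapsearch -H ldaps://dc.my.network.com:636 \
  -D "cn=binduser,ou=users,dc=network,dc=com" -W \
  -b "dc=network,dc=com" "(sAMAccountName=someuser)"

# Secure global catalog port (3269): no referrals are chased
ldapsearch -H ldaps://dc.my.network.com:3269 \
  -D "cn=binduser,ou=users,dc=network,dc=com" -W \
  -b "dc=network,dc=com" "(sAMAccountName=someuser)"
```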
08-24-2023
12:31 PM
@kothari It is not Ranger's job to inform the client applications using Ranger which users belong to which groups. Each client application is responsible for determining which groups the user authenticated into that service belongs to. The policies generated by Ranger are downloaded by the client applications. Within that downloaded policy JSON will be one or more resource identifiers, a list of user identities authorized (read, write, and/or delete) against each resource identifier, and a list of group identities authorized (read, write, and/or delete) against each resource identifier. So when the client checks the policies downloaded from Ranger, it looks for the user identity being authorized, and, if the client is aware of the group(s) that user belongs to, it will also check authorization for those group identities.

So in your case, it is most likely that your client service/application has not been configured with the same user and group associations set up in your Ranger service.

If you found that the provided solution(s) assisted you with your query, please take a moment to log in and click Accept as Solution below each response that helped.

Thank you,
Matt
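As a rough way to verify this (the service name, host, and credentials below are placeholders for your environment):

```bash
# See which groups a Hadoop-based client service resolves for a user
hdfs groups someuser

# Inspect the policy JSON a Ranger plugin downloads for a given service;
# note it lists user and group identities, not user-to-group mappings
curl -s -u admin "http://ranger-host:6080/service/plugins/policies/download/cm_hdfs"
```

If the groups the client resolves do not match the group identities in your Ranger policies, that mismatch would explain the authorization failures.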
08-24-2023
09:17 AM
@BKZ Can you share more about your environment setup?

- Are you using Knox (from the URL it appears so)?
- What topology is set up in Knox for NiFi and the NiFi API?
- What are all the methods of user authentication you have set up (the nifi.properties file would help here)?
- You are getting a 401 Unauthorized response. Are you seeing anything logged in nifi-user.log when you execute your Python code?
- Have you tried going directly to the NiFi node's nifi-api endpoint instead of going through the cdp-proxy-api (see the example below)?
- What versions of CDP and CFM are you using?

These details may help get more responses from within the community.

Thank you,
Matt
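For example, something like this tests a node directly (the hostname, port, and credentials are placeholders; the token endpoint only applies if a login identity provider such as LDAP is configured in NiFi):

```bash
# Request a bearer token directly from one NiFi node, bypassing Knox
curl -k -X POST "https://nifi-node1.example.com:8443/nifi-api/access/token" \
  -d "username=someuser&password=somepassword"

# Use the returned token to call the API directly
curl -k -H "Authorization: Bearer <token>" \
  "https://nifi-node1.example.com:8443/nifi-api/flow/about"
```

If the direct call succeeds but the cdp-proxy-api call returns 401, that points at the Knox topology rather than NiFi itself.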
08-23-2023
05:47 AM
@pashtet04 There is a significant jump in version from NiFi 1.11.4 to 1.23.0.

As with any upgrade/migration, you should NOT simply copy core configuration files from the old version to the new version. You should instead use the old configuration files to update the new configuration files, since the new configuration files introduce new properties that would be missing if you simply copied old over new.

NiFi 1.11.4 loaded a flow.xml.gz file on startup. With the release of NiFi 1.16, NiFi introduced a new flow storage format using flow.json.gz (which also introduced a new property in the nifi.properties file: "nifi.flow.configuration.json.file"). NiFi 1.16 and later versions will still load from the flow.xml.gz file if a flow.json.gz is not present, and will then generate the flow.json.gz file, which is used for all subsequent launches starting with 1.18+.

I am guessing your exception about flow configuration file [NULL] comes from a missing configuration property in the nifi.properties you copied over. This may have also resulted in a failure to convert the flow.xml.gz.

I'd recommend the following: re-copy over your original flow.xml.gz; use the old config files to populate the new config files so no properties are missing; start NiFi 1.23 the first time using the original sensitive props key and sensitive props algorithm so that the flow.json.gz gets generated; then use the nifi.sh commands to change the sensitive props key and algorithm (see the sketch below). If you don't know your original sensitive props key, you could use the encrypt-config toolkit to change it as an alternative to the nifi.sh commands.

If you found that the provided solution(s) assisted you with your query, please take a moment to log in and click Accept as Solution below each response that helped.

Thank you,
Matt
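The sequence would look something like this, run from the NiFi 1.23 install directory (the key and algorithm values below are placeholders):

```bash
# Start 1.23 once with the ORIGINAL sensitive props key/algorithm so the
# flow.json.gz is generated, then stop NiFi before changing the key
./bin/nifi.sh stop

# Re-encrypt the sensitive values in the flow with a new key and algorithm
./bin/nifi.sh set-sensitive-properties-key "myNewSensitiveKey123"
./bin/nifi.sh set-sensitive-properties-algorithm "NIFI_PBKDF2_AES_GCM_256"

./bin/nifi.sh start
```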
08-22-2023
05:55 AM
@mslnrd TLS/SSL is nothing specific to NiFi or LDAP. There are certain requirements in the TLS exchange specification, and they apply to any application that uses TLS.

Based on your exception:

[Root exception is javax.naming.CommunicationException: my.network.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: No subject alternative DNS name matching my.network.com found.]]

the issue happens during the TLS exchange. NiFi's ldap-user-group-provider within the authorizers.xml file is attempting to execute during NiFi startup in order to sync users and groups from your target LDAPS server. At a very basic level, you have configured NiFi to connect to your LDAPS host "my.network.com". NiFi is acting as the client in the TLS exchange and the LDAPS server as the server side. Your LDAPS server is returning a serverAuth certificate that is missing a Subject Alternative Name (SAN) entry matching the hostname (my.network.com) the client is trying to connect with. Within the TLS spec, the client then rejects this connection as insecure, since it cannot verify the authenticity of the server that is responding. It assumes a man-in-the-middle type issue: the client is trying to establish a secure connection with server "my.network.com"; however, the certificate returned does not indicate it belongs to that server via its list of SAN entries.

So the issue here is that the certificate used by your LDAPS server is missing the expected SAN entry. NiFi does not provide an option to force-allow an insecure connection.

Bottom line: the NiFi keystore needs to meet the following requirements:
1. The keystore can contain only one PrivateKeyEntry.
2. The PrivateKeyEntry must have both clientAuth and serverAuth Extended Key Usages (EKUs).
3. The PrivateKeyEntry must have a SAN entry that matches the hostname of the server on which NiFi is installed.

The NiFi truststore needs to meet the following:
1. The truststore can contain one to many TrustedCert entries.
2. The complete trust chain for any server certificate NiFi will need to establish a TLS connection with must be present in the truststore.

When a certificate is created, it has an owner and a signer/issuer. The certificate might be self-signed (meaning the owner DN and signer/issuer DN are the same), signed by an intermediate CA (the owner DN is different from the signer/issuer DN), or signed by a root CA (which also has matching owner and signer/issuer DNs). A complete trust chain means all trusted certificates (public certs) from the client certificate to the root CA need to be present in the truststore. This could encompass the server public cert, one or more intermediate CA public certs, and the root CA public cert.

Often, but not always, you can get all the public certificates using openssl:

openssl s_client -connect <target server hostname>:<port> -showcerts

The above initiates a TLS handshake with the target server and returns within the response the public certs from the target server. It may or may not include the complete trust chain. You can then load these public certificates into your truststore.

In your case, however, it does not look like a trust chain issue at this point. As I mentioned above, your current issue is that the serverAuth certificate used by your target LDAP server is missing the TLS-spec-required SAN entry (see the check below).

If you found that the provided solution(s) assisted you with your query, please take a moment to log in and click Accept as Solution below each response that helped.

Thank you,
Matt
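To see exactly which SAN entries (if any) your LDAPS server's certificate carries, you can run something like this (hostname and port taken from your exception; adjust as needed):

```bash
# Pull the server certificate and print its Subject Alternative Name entries
openssl s_client -connect my.network.com:636 </dev/null 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A1 "Subject Alternative Name"
```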
08-22-2023
05:24 AM
@sahil0915 What you are proposing would require you to: ingest into NiFi all ~100 million records from DC2; hash each record; write all ~100 million hashes to a map cache like Redis or HBase (which you would also need to install somewhere) using the DistributedMapCache processor; then ingest all ~100 million records from DC1, hash those records, and finally compare the hashes of those 100 million records with the hashes you added to the distributed map cache using DetectDuplicate. Any records routed to non-duplicate would represent what is not in DC2. Then you would have to flush your distributed map cache and repeat the process, except this time writing the hashes from DC3 to the distributed map cache.

I suspect this is going to perform poorly. You would have NiFi ingesting ~300 million records just to create hashes for a one-time comparison.

If you found that the provided solution(s) assisted you with your query, please take a moment to log in and click Accept as Solution below each response that helped.

Thank you,
Matt
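To put the scale in perspective: for a one-time comparison, sorted flat files outside of NiFi would do the same job. A rough sketch, assuming line-oriented record exports with hypothetical filenames:

```bash
# Export records from each data center to flat files, one record per line,
# then diff them directly; no hashing or map cache required
sort dc1_records.txt > dc1.sorted
sort dc2_records.txt > dc2.sorted

# Records present in DC1 but missing from DC2
comm -23 dc1.sorted dc2.sorted > in_dc1_not_in_dc2.txt
```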
08-22-2023
05:22 AM
@Yasine Adding Cloudera Flow Management (CFM) to an existing CDP installation requires adding both the CFM parcel and the Cloudera Service Descriptors (CSDs).

Step one is to verify CFM compatibility with your CDP deployment:
https://docs.cloudera.com/cfm/2.1.5/release-notes/topics/cfm-system-requirements.html
CFM 2.1.5 is supported on CDP Private Cloud Base versions 7.1.7 (plus all service packs) and 7.1.8 only. I see you mentioned you are deploying on CDP 7.1.5, so you should upgrade to CDP 7.1.7 or newer first.

Then follow the steps for preparing your CDP Private Cloud Base environment:
https://docs.cloudera.com/cfm/2.1.5/deployment/topics/cfm-prepare-cdpdc.html

Installing the CFM parcel:
https://docs.cloudera.com/cfm/2.1.5/deployment/topics/cfm-add-parcel-url.html

Installing the CFM CSDs:
https://docs.cloudera.com/cfm/2.1.5/deployment/topics/cfm-get-csd.html

Regarding the CFM download locations for the parcel and CSDs: there are two CSD jar files that need to be added before the NiFi and NiFi Registry services will show in the list of available services within your CDP Private Cloud Base.

If you found that the provided solution(s) assisted you with your query, please take a moment to log in and click Accept as Solution below each response that helped.

Thank you,
Matt
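Once downloaded, installing the CSDs typically looks something like this on the Cloudera Manager server host (the jar name patterns below are illustrative; use the exact files from the CFM download page):

```bash
# Copy both CFM CSD jars (NiFi and NiFi Registry) into the CSD directory
cp NIFI-*.jar NIFIREGISTRY-*.jar /opt/cloudera/csd/
chown cloudera-scm:cloudera-scm /opt/cloudera/csd/*.jar

# Restart Cloudera Manager so it picks up the new service descriptors
systemctl restart cloudera-scm-server
```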
08-18-2023
05:29 AM
@sahil0915 I don't know that this is a good use case for NiFi. NiFi, at the most basic level, is designed to automate the movement of data between systems. In between the ingest and egress of data, NiFi provides a variety of components for routing, enriching, modifying, etc., that data. So trying to use NiFi to compare the data existing in multiple data centers is not a good fit.

If you found that the provided solution(s) assisted you with your query, please take a moment to log in and click Accept as Solution below each response that helped.

Thank you,
Matt
08-18-2023
05:16 AM
3 Kudos
@sinRudra As a workaround, you could take the "nifi-standard-nar-1.16.3.nar" from the NiFi 1.16.3 distro and add it to your NiFi 1.21 install. This will give you access to both versions of all the standard NAR components. The nifi-standard-nar does, however, contain a lot of components besides just ListFTP. You'll end up seeing two available versions of a lot of components, with either 1.21.0 or 1.16.3 as the version, when adding components to the canvas.

If you found that the provided solution(s) assisted you with your query, please take a moment to log in and click Accept as Solution below each response that helped.

Thank you,
Matt
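The copy itself would look something like this (the install paths below are placeholders for your environment):

```bash
# Drop the 1.16.3 standard NAR into the 1.21.0 autoload directory;
# NARs placed in "extensions" are hot-loaded, while lib/ requires a restart
cp /opt/nifi-1.16.3/lib/nifi-standard-nar-1.16.3.nar \
   /opt/nifi-1.21.0/extensions/
```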
08-17-2023
12:00 PM
1 Kudo
@leqlaz777 What version of NiFi are you using? How large is your response body? I have no issues creating FlowFile attributes with values larger than 256 bytes. Can you share the processors being used and their configuration?

Thank you,
Matt