Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 696 | 06-04-2025 11:36 PM |
| | 1265 | 03-23-2025 05:23 AM |
| | 631 | 03-17-2025 10:18 AM |
| | 2313 | 03-05-2025 01:34 PM |
| | 1501 | 03-03-2025 01:09 PM |
12-10-2025
04:20 AM
@Amr5 The NoSuchMethodError indicates a JAR conflict at runtime: the ParseDriver.parse() method signature changed between Hive versions, so you must ensure that only CDH 7.2.18 Hive JARs are on the classpath, with no remnants of 7.1.9. In your case, old Hive JARs from CDH 7.1.9 are still present in /data1/informatica/dei/services/shared/hadoop/CDH_7.218, and Java is loading the old hive-exec.jar instead of the new one, causing the method signature mismatch.

Step 1: Identify all old Hive JARs

```
find /data1/informatica/dei/services/shared/hadoop/CDH_7.218 -name "hive*.jar" -exec ls -lh {} \;
```

Step 2: Remove all old Hive JARs

```
cd /data1/informatica/dei/services/shared/hadoop/CDH_7.218

# Create the backup directory if it does not exist
mkdir -p backup_all_old_hive_jars

# Move all Hive-related JARs to the backup
mv hive*.jar backup_all_old_hive_jars/
```

Step 3: Copy the correct Hive JARs from the Cloudera cluster

```
# Find the Cloudera CDH 7.2.18 parcel location
CLOUDERA_PARCEL=$(find /opt/cloudera/parcels -maxdepth 1 -type d -name "CDH-7.2.18*" | head -1)

# Copy all Hive JARs
cp $CLOUDERA_PARCEL/lib/hive/lib/hive*.jar /data1/informatica/dei/services/shared/hadoop/CDH_7.218/

# Also copy the Hive dependencies
cp $CLOUDERA_PARCEL/jars/hive*.jar /data1/informatica/dei/services/shared/hadoop/CDH_7.218/
```

Step 4: Verify the correct versions

```
cd /data1/informatica/dei/services/shared/hadoop/CDH_7.218
ls -lh hive*.jar | head -5

# Check the version inside hive-exec.jar
unzip -p hive-exec-*.jar META-INF/MANIFEST.MF | grep -i version
```

Step 5: Clear the Java classpath cache

```
# Remove compiled artifacts
rm -rf /data1/informatica/dei/tomcat/bin/disTemp/DOM_IDQ_DEV/DIS_DEI_DEV/node02_DEI_DEV/cloudera_dev/SPARK/*
rm -rf /data1/informatica/dei/tomcat/bin/disTemp/DOM_IDQ_DEV/DIS_DEI_DEV/node02_DEI_DEV/cloudera_dev/HIVE/*
```

Step 6: Restart the Informatica services

```
infaservice.sh dis stop -domain DOM_IDQ_DEV -service DIS_DEI_DEV
infaservice.sh dis start -domain DOM_IDQ_DEV -service DIS_DEI_DEV
```

Step 7: Verify the Hadoop distribution in the Informatica Admin Console

1. Log in to Informatica Administrator
2. Navigate to DIS_DEI_DEV → Properties → Hadoop Connection
3. Click Test Connection
4. If it fails, click Re-import Hadoop Configuration to refresh

Step 8: Re-run your mapping. If the error persists, the diagnostic sketch after this post shows how to confirm which JAR is actually supplying ParseDriver.

Happy Hadooping
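A minimal diagnostic sketch, assuming only a standard JDK (for javap) and unzip on the PATH; the directory is the one from the post above, and the class name is the one from the stack trace:

```bash
# List every JAR in the Informatica Hadoop directory that bundles ParseDriver,
# then print the parse() signatures each copy exposes so mismatches are visible.
DIR=/data1/informatica/dei/services/shared/hadoop/CDH_7.218
for jar in "$DIR"/*.jar; do
  if unzip -l "$jar" 2>/dev/null | grep -q 'org/apache/hadoop/hive/ql/parse/ParseDriver.class'; then
    echo "== $jar"
    javap -classpath "$jar" org.apache.hadoop.hive.ql.parse.ParseDriver | grep ' parse('
  fi
done
```

If more than one JAR is listed, the copy that appears first on the classpath wins, which is exactly the situation that produces the NoSuchMethodError.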
11-25-2025
05:54 AM
Hi, did anyone find a solution for the last question posted by "Abhijith_Nayak"? We are facing the same issue: we don't have Cloudera Manager > Impala > Configuration > Admission Control > Pool Mapping Rules. Regards, Sofiane
09-26-2025
04:06 AM
@Shelton Thank you for the detailed answer, much appreciated!
06-05-2025
12:37 AM
@sydney- The SSL handshake error you're encountering is a common issue when connecting NiFi instances to NiFi Registry in secure environments: it indicates that your NiFi instances cannot verify the SSL certificate presented by the NiFi Registry server.

```
javax.net.ssl.SSLHandshakeException: PKIX path building failed:
sun.security.provider.certpath.SunCertPathBuilderException:
unable to find valid certification path to requested target
```

Based on your description, there are several areas to address:

- The certificate used by NiFi Registry is self-signed or not issued by a trusted Certificate Authority (CA)
- The certificate chain is incomplete
- The truststore configuration is incorrect

1. Certificate Trust Configuration

Verify the certificate chain:

```
# Check if the certificate is in the NiFi truststore (repeat for each instance)
keytool -list -v -keystore /path/to/nifi/truststore.jks -storepass [password]

# Check if the certificate is in the Registry truststore
keytool -list -v -keystore /path/to/registry/truststore.jks -storepass [password]

# Verify the Registry's certificate chain
openssl s_client -connect nifi-registry.example.com:443 -showcerts
```

Ensure the certificate chain is complete:

- Add the Registry's complete certificate chain (including intermediate CAs) to NiFi's truststore
- Add NiFi's complete certificate chain to the Registry's truststore

```
# Add the Registry certificate to the NiFi truststore
keytool -import -alias nifi-registry -file registry-cert.pem -keystore /path/to/nifi/conf/truststore.jks -storepass [password]

# Add the NiFi certificate to the Registry truststore
keytool -import -alias nifi-prod -file nifi-cert.pem -keystore /path/to/registry/conf/truststore.jks -storepass [password]
```

2. Proper Certificate Exchange

Ensure you've exchanged certificates correctly. Export NiFi Registry's public certificate:

```
keytool -exportcert -alias nifi-registry -keystore /path/to/registry/keystore.jks -file registry.crt -storepass [password]
```

Import this certificate into each NiFi instance's truststore:

```
keytool -importcert -alias nifi-registry -keystore /path/to/nifi/truststore.jks -file registry.crt -storepass [password] -noprompt
```

3. NiFi Registry Connection Configuration

In your NiFi instance (nifi.properties), verify:

```
# Registry client properties
nifi.registry.client.name=NiFi Registry
nifi.registry.client.url=https://nifi-registry.example.com/nifi-registry
nifi.registry.client.timeout.connect=30 secs
nifi.registry.client.timeout.read=30 secs
```

Verify these settings in each NiFi instance (production and development):

```
# nifi.properties:
nifi.registry.client.ssl.protocol=TLS
nifi.registry.client.truststore.path=/path/to/truststore.jks
nifi.registry.client.truststore.password=[password]
nifi.registry.client.truststore.type=JKS
```

And in NiFi Registry:

```
# nifi-registry.properties:
nifi.registry.security.truststore.path=/path/to/truststore.jks
nifi.registry.security.truststore.password=[password]
nifi.registry.security.truststore.type=JKS
```

4. LDAP Configuration

For your LDAP integration issues, ensure authorizers.xml contains:

```
<accessPolicyProvider>
    <identifier>file-access-policy-provider</identifier>
    <class>org.apache.nifi.registry.security.authorization.FileAccessPolicyProvider</class>
    <property name="User Group Provider">ldap-user-group-provider</property>
    <property name="Authorizations File">./conf/authorizations.xml</property>
    <property name="Initial Admin Identity">cn=admin-user,ou=users,dc=example,dc=com</property>
    <property name="NiFi Identity 1">cn=dev-nifi,ou=servers,dc=example,dc=com</property>
</accessPolicyProvider>
```

In authorizations.xml, add the appropriate policies for the dev-nifi identity:

```
<policy identifier="some-uuid" resource="/buckets" action="READ">
    <user identifier="dev-nifi-uuid"/>
</policy>
```

5. Proxy Configuration

For proxied user requests, add in nifi.properties:

```
nifi.registry.client.proxy.identity=cn=dev-nifi,ou=servers,dc=example,dc=com
```

6. Restart Order

After making changes, restart in the following order (example commands and a post-restart check are sketched below):

1. NiFi Registry first
2. Then all NiFi instances

Happy Hadooping
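A minimal sketch of the restart order and a post-restart handshake check. The /opt/nifi* install paths are assumptions for a default tarball install (both nifi.sh and nifi-registry.sh accept a restart argument there); the host, port, alias, and truststore path are the ones used in the examples above:

```bash
# Restart order: NiFi Registry first, then the NiFi instances.
/opt/nifi-registry/bin/nifi-registry.sh restart
/opt/nifi/bin/nifi.sh restart

# Post-restart check: export the Registry's cert from the NiFi truststore and
# ask openssl to validate the served chain against it. For a self-signed
# Registry certificate (the common case behind this error), success prints
# "Verify return code: 0 (ok)"; a CA-signed cert needs the CA chain instead.
keytool -exportcert -rfc -alias nifi-registry \
  -keystore /path/to/nifi/truststore.jks -storepass [password] > /tmp/registry-ca.pem
openssl s_client -connect nifi-registry.example.com:443 \
  -CAfile /tmp/registry-ca.pem </dev/null | grep "Verify return code"
```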
04-29-2025
08:15 AM
@Shelton We just followed Steps 1, 3, 4, and 5 to generate the automated report to Elasticsearch. It was pretty straightforward. The only things we had to do were enable the firewall in our Docker container and update the Input Port's access policies. Thanks
04-28-2025
07:05 AM
@Shelton Please read my previous answer carefully. None of the properties you provided are in the HBase codebase.
04-18-2025
01:14 AM
@Jay2021, Welcome to our community! As this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
04-04-2025
08:29 AM
Thanks for the help, and sorry for the late reply, @Shelton. I am getting the output here, but the values for the parent class are not being populated; they are displayed as NULL.
03-27-2025
03:22 PM
Thanks @Shelton for the details! I will try Option 3 in our pipeline shell script and will let you know if I run into any further issues.