Member since: 01-19-2017
Posts: 3651
Kudos Received: 623
Solutions: 364
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 172 | 12-22-2024 07:33 AM |
 | 109 | 12-18-2024 12:21 PM |
 | 428 | 12-17-2024 07:48 AM |
 | 298 | 08-02-2024 08:15 AM |
 | 3578 | 04-06-2023 12:49 PM |
12-17-2024
07:48 AM
@tono425 The error messages you are encountering in NiFi are related to Java's native-method access restrictions introduced in newer Java versions (likely Java 17 or higher). These warnings indicate that NiFi (or a dependency such as Apache Lucene) is calling restricted native methods that require explicit permission to access low-level operating system functions. The warning mentions java.lang.foreign.Linker::downcallHandle, which is used for low-level interactions with the operating system. Here are three options you can try to resolve the issue:

Option 1: Add the Java Option to Enable Native Access
Update the NiFi startup configuration to allow unrestricted native access for unnamed modules (a minimal sketch follows at the end of this reply).
1. Edit the NiFi Java options in the bootstrap.conf file: nano /opt/nifi/conf/bootstrap.conf
2. Add the following option to the java.arg properties: java.arg.X=--enable-native-access=ALL-UNNAMED
3. Restart NiFi: sudo systemctl restart nifi

Option 2: Use a Lower Java Version (if possible)
If NiFi was previously running fine with an earlier Java version (e.g., Java 8 or 11), you can revert to it until you're ready to address the native-access changes: update the JAVA_HOME path in NiFi's bootstrap.conf to point to the older Java version.

Option 3: Verify Dependencies
Check whether you are using the latest versions of NiFi and Apache Lucene: some of these warnings may be addressed in recent releases of NiFi or its libraries. Consider upgrading NiFi to the latest stable version to ensure compatibility with newer Java versions.

Happy hadooping
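For Option 1, a minimal sketch of what the bootstrap.conf change could look like; the argument index 20 and the path /opt/nifi/conf/bootstrap.conf are only examples, so adapt them to your install:

# /opt/nifi/conf/bootstrap.conf (excerpt) - keep the existing java.arg.N lines and add one with an unused index
java.arg.20=--enable-native-access=ALL-UNNAMED

After saving the file, restart NiFi (sudo systemctl restart nifi) so the JVM picks up the new option.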
12-16-2024
02:19 PM
1 Kudo
@sayebogbon Upgrading Cloudera Manager or CDP can sometimes alter TLS/SSL settings. Can you please verify whether TLS/SSL is enabled for the affected services:
1. Navigate to Cloudera Manager > Administration > Security.
2. Confirm that the keystores and truststores for SSL are correctly configured (a quick sanity check is sketched below).
3. Validate the TLS-related settings in the HDFS configuration: dfs.http.policy and dfs.https.enable.
Please do the above and revert. Happy hadooping!
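As a rough sketch (the .jks paths below are placeholders, not your actual configuration), keytool can confirm that the keystore and truststore the services point to exist and contain the expected entries:

# List the entries in the keystore used by the service (path is an example)
keytool -list -v -keystore /opt/cloudera/security/jks/keystore.jks

# List the truststore to confirm the CA chain is present
keytool -list -v -keystore /opt/cloudera/security/jks/truststore.jks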
12-16-2024
02:10 PM
1 Kudo
@divyank Have you resolved this issue? If not: the issue you're encountering is common when Kerberos is enabled for HDFS, as it introduces authentication requirements that need to be properly configured. Here's how to diagnose and resolve the problem:

1. Root Cause Analysis
When Kerberos is enabled:
- Authentication: Every interaction with HDFS now requires a Kerberos ticket.
- Misconfiguration: The HDFS service or client-side configurations may not be aligned with Kerberos requirements.
- Keytabs: Missing or improperly configured keytab files for the HDFS service or users accessing the service.
- Browser Access: The HDFS Web UI may not support unauthenticated access unless explicitly configured.

2. Steps to Resolve

Step 1: Verify Kerberos Configuration
Check the Kerberos principal and keytab file paths for HDFS in Cloudera Manager:
- Navigate to HDFS Service > Configuration.
- Look for settings like:
  hadoop.security.authentication → should be set to kerberos.
  dfs.namenode.kerberos.principal → should match the principal defined in the KDC.
  dfs.namenode.keytab.file → ensure the file exists on the NameNode and has correct permissions.

Step 2: Validate Kerberos Ticket
Check if the HDFS service has a valid Kerberos ticket:
klist -kte /path/to/hdfs.keytab
If missing, reinitialize the ticket:
kinit -kt /path/to/hdfs.keytab hdfs/<hostname>@<REALM>
Test HDFS access from the command line:
hdfs dfs -ls /
If you get authentication errors, the Kerberos ticket might be invalid.

Step 3: Validate HDFS Web UI Access
Post-Kerberos, accessing the HDFS Web UI (e.g., http://namenode-host:50070) often requires authentication. By default:
- Unauthenticated Access: May be blocked.
- Browser Integration: Ensure your browser is configured for Kerberos authentication, or the UI is set to allow unauthenticated users.
Enable unauthenticated access in Cloudera Manager (if needed):
- Go to HDFS Service > Configuration.
- Search for hadoop.http.authentication.type and set it to simple.
(See the curl-based check at the end of this reply.)

Step 4: Review Logs for Errors
Check NameNode logs for Kerberos-related errors:
less /var/log/hadoop/hdfs/hadoop-hdfs-namenode.log
Look for errors like:
"GSSException: No valid credentials provided"
"Principal not found in the keytab"

Step 5: Synchronize Clocks
Kerberos is sensitive to time discrepancies. Ensure all nodes in the cluster have synchronized clocks:
ntpdate <NTP-server>

Step 6: Restart Services
Restart the affected HDFS services via Cloudera Manager after making changes:
- Restart NameNode, DataNode, and HDFS services.
Test the status of HDFS:
hdfs dfsadmin -report

3. Confirm Resolution
Verify HDFS functionality:
- Test browsing HDFS via the CLI: hdfs dfs -ls /
- Access the Web UI to confirm functionality: http://<namenode-host>:50070
If HDFS is working via CLI but not in the Web UI, revisit the Web UI settings in Cloudera Manager to allow browser access or configure browser Kerberos support.

4. Troubleshooting Tips
If the issue persists:
- Check the Kerberos ticket validity with: klist
- Use the following commands to troubleshoot connectivity:
hdfs dfs -mkdir /test
hdfs dfs -put <local-file> /test

Let me know how it goes or if further guidance is needed!
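To illustrate the Web UI point in Step 3, here is a hedged sketch of testing the Kerberized endpoint from the command line; the keytab path, realm, and hostname are placeholders, and your curl build must support GSS-Negotiate:

# Obtain a ticket first (principal and keytab path are examples)
kinit -kt /path/to/hdfs.keytab hdfs/$(hostname -f)@EXAMPLE.COM

# Ask curl to authenticate with SPNEGO/Kerberos against the NameNode web UI
curl --negotiate -u : "http://<namenode-host>:50070/jmx"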
12-16-2024
01:50 PM
1 Kudo
@rizalt Can you share your layout of the 18 hosts to better understand where the issue could be emanating from?

The issue you are experiencing, where shutting down 8 DataNodes causes both NameNodes in your high availability (HA) configuration to go down, likely points to quorum loss in the JournalNodes or insufficient replicas for critical metadata blocks.

The NameNodes in HA mode rely on JournalNodes for shared edits. For the HA setup to function correctly, the JournalNodes need a quorum (more than half) to be available. With 5 JournalNodes, at least 3 must be operational. If shutting down 8 DataNodes impacted the connectivity or availability of more than 2 JournalNodes, the quorum would be lost, causing both NameNodes to stop functioning.

If shutting down 8 DataNodes reduces the number of replicas below the replication factor (typically 3), the metadata might not be available, causing the NameNodes to fail.

The commands sketched below can help confirm the current NameNode state and block health. Please revert
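As a quick sketch of what to check first (nn1 and nn2 are example NameNode service IDs, use the ones defined in your hdfs-site.xml):

# HA state of each NameNode
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# DataNode and block health, including missing/under-replicated blocks
hdfs dfsadmin -report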
08-05-2024
11:11 AM
@pravin_speaks I see a typo in the Oracle CREATE TABLE statement:
create table schema.ABC(account_id decimal(28,0), "1234" decima;(28,0)) ;
Is that the exact copy and paste? Geoffrey
08-02-2024
08:15 AM
2 Kudos
@steinsgate According to the Cloudera documentation, check the Security Best Practice ACLs/Permissions. Can you add the line below to SERVER_JVMFLAGS in the zookeeper-env template, substituting your value for YOUR_REALM:
-Dzookeeper.security.auth_to_local=RULE:[2:\$1@\$0](hbase@YOUR_REALM)s/.*/hbase/RULE:[2:\$1@\$0](infra-solr@YOUR_REALM)s/.*/infra-solr/RULE:[2:\$1@\$0](rm@YOUR_REALM)s/.*/rm/
A sketch of where this lands in the template is shown below.
Please revert
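A hedged sketch only, since the existing contents of your zookeeper-env template will differ: the flag is typically appended to whatever SERVER_JVMFLAGS already holds rather than replacing it (EXAMPLE.COM stands in for YOUR_REALM):

# zookeeper-env template (excerpt)
export SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Dzookeeper.security.auth_to_local=RULE:[2:\$1@\$0](hbase@EXAMPLE.COM)s/.*/hbase/RULE:[2:\$1@\$0](infra-solr@EXAMPLE.COM)s/.*/infra-solr/RULE:[2:\$1@\$0](rm@EXAMPLE.COM)s/.*/rm/"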
08-02-2024
07:56 AM
1 Kudo
@Alf015 Can you share the context: standalone, or packaged with HDP or CDP, etc.? This will give a better understanding of how to replicate your environment and enable us to help resolve your issue. Thank you
06-07-2024
04:38 AM
1 Kudo
@rizalt Can you share the OS, OS version, and HDP version you are trying to Kerberize? I don't have a dump of HDP binaries, but I would like to reproduce and share the steps.

I suggest starting afresh, so delete/destroy the current KDC. Run the following as the root user or with sudo; the steps are specific to Ubuntu, so re-adapt them for your OS.

# sudo kdb5_util -r HADOOP.COM destroy
Accept with a "Yes".

Now create a new Kerberos database. Completely remove Kerberos first:
$ sudo apt purge -y krb5-kdc krb5-admin-server krb5-config krb5-locales krb5-user
$ sudo rm -rf /var/lib/krb5kdc

Do a fresh installation. First, get the FQDN of your KDC server, for this example:
# hostname -f
test.hadoop.com

Use the above output for the later setup.

# apt install krb5-kdc krb5-admin-server krb5-config

Proceed as follows at the prompts:
Kerberos Realm = HADOOP.COM
Kerberos server hostname = test.hadoop.com
Administrative server for Kerberos REALM = test.hadoop.com

Configuring the krb5 Admin Server:
# krb5_newrealm

Open /etc/krb5kdc/kadm5.acl; it should contain a line like this:
*/admin@HADOOP.COM *

The kdc.conf should be adjusted to look like this:
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88
[realms]
HADOOP.COM = {
#master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}

The krb5.conf should look like this. If you are on a multi-node cluster, this is the file you will copy to all other hosts; notice the entry under [domain_realm].

[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = HADOOP.COM
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
[domain_realm]
.hadoop.com = HADOOP.COM
hadoop.com = HADOOP.COM
[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log
[realms]
HADOOP.COM = {
admin_server = test.hadoop.com
kdc = test.hadoop.com
}

Restart the Kerberos KDC daemons and Kerberos admin servers:
# for script in /etc/init.d/krb5*; do $script restart; done

Don't manually create any principal like "ambari_hdfs-050819@HADOOP.COM".

Go to the Ambari Kerberos wizard; for the domains, notice the . (dot):
KDC host = test.hadoop.com
Realm name = HADOOP.COM
Domains = .hadoop.com, hadoop.com
-----
kadmin host = test.hadoop.com
Admin principal = admin/admin@HADOOP.COM
Admin password = password set during the creation of the KDC database

Now from here just accept the defaults and the keytabs should generate successfully. A quick sanity check of the new KDC is sketched below.
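A sketch only, run on the KDC host as root; create the admin/admin principal referenced above only if it does not already exist:

# List the principals in the new realm
kadmin.local -q "listprincs"

# Create the admin principal used by the wizard if it is missing
kadmin.local -q "addprinc admin/admin@HADOOP.COM"

# Confirm you can authenticate with it
kinit admin/admin@HADOOP.COM
klist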
06-06-2024
12:12 PM
1 Kudo
@rizalt Did you see the same entry in the krb5.conf that I suggested you add?

[domain_realm]
.hadoop.com = HADOOP.COM
hadoop.com = HADOOP.COM

In the Kerberos setup UI you should also include HADOOP.COM, .HADOOP.COM. Check a solution I offered: Error while enabling Kerberos on Ambari. Hope that helps
06-06-2024
05:41 AM
@rizalt Make a backup of your krb5.conf and modify it like below:

# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5

[realms]
HADOOP.COM = {
admin_server = master1.hadoop.com
kdc = master1.hadoop.com
}

[domain_realm]
.master1.hadoop.com = HADOOP.COM
master1.hadoop.com = HADOOP.COM

Then restart the KDC and retry. A sketch of the restart and a quick verification is shown below.
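A sketch only, since the KDC service names differ by distribution (krb5kdc and kadmin on RHEL/CentOS, krb5-kdc and krb5-admin-server on Ubuntu), and the admin principal is an example; substitute one that exists in your KDC:

# RHEL/CentOS-style service names; adjust for your OS
sudo systemctl restart krb5kdc kadmin

# Verify a ticket can be obtained against the new configuration
kinit admin/admin@HADOOP.COM
klist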