Member since: 01-19-2017
Posts: 3627
Kudos Received: 608
Solutions: 361

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 225 | 08-02-2024 08:15 AM |
 | 3411 | 04-06-2023 12:49 PM |
 | 764 | 10-26-2022 12:35 PM |
 | 1502 | 09-27-2022 12:49 PM |
 | 1770 | 05-27-2022 12:02 AM |
08-05-2024
11:11 AM
@pravin_speaks I see a typo in the Oracle CREATE TABLE statement; "decima;" should presumably be "decimal":

create table schema.ABC(account_id decimal(28,0), "1234" decima;(28,0)) ;

Is that the exact copy and paste? Geoffrey
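For reference, a corrected form of that statement might look like this (assuming the second column really is meant to be the quoted identifier "1234" with the DECIMAL(28,0) type):

create table schema.ABC (
  account_id decimal(28,0),
  "1234" decimal(28,0)
);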
08-02-2024
08:15 AM
2 Kudos
@steinsgate According to the Cloudera documentation, check the Security Best Practice ACLs/Permissions. Can you add the line below to SERVER_JVMFLAGS in the zookeeper-env template? Please substitute the value for YOUR_REALM:

-Dzookeeper.security.auth_to_local=RULE:[2:\$1@\$0](hbase@YOUR_REALM)s/.*/hbase/RULE:[2:\$1@\$0](infra-solr@YOUR_REALM)s/.*/infra-solr/RULE:[2:\$1@\$0](rm@YOUR_REALM)s/.*/rm/

Please revert.
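A minimal sketch of how that could sit in the zookeeper-env template (assuming the realm is EXAMPLE.COM and that the template already exports SERVER_JVMFLAGS):

export SERVER_JVMFLAGS="${SERVER_JVMFLAGS} -Dzookeeper.security.auth_to_local=RULE:[2:\$1@\$0](hbase@EXAMPLE.COM)s/.*/hbase/RULE:[2:\$1@\$0](infra-solr@EXAMPLE.COM)s/.*/infra-solr/RULE:[2:\$1@\$0](rm@EXAMPLE.COM)s/.*/rm/"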
08-02-2024
07:56 AM
1 Kudo
@Alf015 Can you share the context: is this standalone, or packaged with HDP or CDP, etc.? This will give us a better understanding of how to replicate your environment and enable us to help resolve your issue. Thank you
06-07-2024
04:38 AM
1 Kudo
@rizalt Can you share the OS, OS version, and HDP version you are trying to Kerberize? I don't have a dump of the HDP binaries, though I would like to reproduce and share the steps. I suggest starting afresh, so delete/destroy the current KDC as the root user or with sudo. The following steps are specific to Ubuntu; re-adapt them for your OS.

Destroy the current Kerberos database and accept with a "Yes":

# sudo kdb5_util -r HADOOP.COM destroy

Completely remove Kerberos:

$ sudo apt purge -y krb5-kdc krb5-admin-server krb5-config krb5-locales krb5-user
$ sudo rm -rf /var/lib/krb5kdc

Do a fresh installation. First, get the FQDN of your KDC server, for this example:

# hostname -f
test.hadoop.com

Use the above output for the later setup:

# apt install krb5-kdc krb5-admin-server krb5-config

Proceed as follows at the prompts:

Kerberos Realm = HADOOP.COM
Kerberos server hostname = test.hadoop.com
Administrative server for Kerberos REALM = test.hadoop.com

Configure the krb5 admin server and create the new Kerberos database:

# krb5_newrealm

Open /etc/krb5kdc/kadm5.acl; it should contain a line like this:

*/admin@HADOOP.COM *

The kdc.conf should be adjusted to look like this:

[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88
[realms]
HADOOP.COM = {
#master_key_type = aes256-cts
acl_file = /etc/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /etc/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}

The krb5.conf should look like this. If you are on a multi-node cluster, this is the file you will copy to all other hosts; notice the entry under [domain_realm]:

[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = HADOOP.COM
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
[domain_realm]
.hadoop.com = HADOOP.COM
hadoop.com = HADOOP.COM
[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log
[realms]
HADOOP.COM = {
admin_server = test.hadoop.com
kdc = test.hadoop.com
}

Restart the Kerberos KDC daemon and the Kerberos admin server:

# for script in /etc/init.d/krb5*; do $script restart; done

Don't manually create any principal like "ambari_hdfs-050819@HADOOP.COM". Go to the Ambari Kerberos wizard; for the Domains field, notice the . (dot):

kdc host = test.hadoop.com
Realm Name = HADOOP.COM
Domains = .hadoop.com, hadoop.com
-----
kadmin host = test.hadoop.com
Admin principal = admin/admin@HADOOP.COM
Admin password = password set during the creation of the KDC database

Now from here just accept the defaults, and the keytabs should generate successfully.
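As a quick sanity check that the KDC is answering (a sketch, assuming the admin/admin principal and the HADOOP.COM realm created above):

# kinit admin/admin@HADOOP.COM
# klist
# kadmin -p admin/admin@HADOOP.COM -q "listprincs"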
06-06-2024
12:12 PM
1 Kudo
@rizalt Did you add the same entry in the krb5.conf that I suggested?

[domain_realm]
.hadoop.com = HADOOP.COM
hadoop.com = HADOOP.COM

In the Kerberos setup UI you should also include HADOOP.COM, .HADOOP.COM. Check a solution I offered: Error while enabling Kerberos on ambari. Hope that helps.
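If you want to confirm that the [domain_realm] mapping is actually being applied (a sketch, assuming a host/test.hadoop.com principal exists in the KDC and you already hold a TGT), kvno will resolve the realm through that mapping when none is given:

$ kinit admin/admin@HADOOP.COM
$ kvno host/test.hadoop.com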
06-06-2024
05:41 AM
@rizalt Make a backup of your krb5.conf and modify it like below:

# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5

[realms]
HADOOP.COM = {
admin_server = master1.hadoop.com
kdc = master1.hadoop.com
}

[domain_realm]
.master1.hadoop.com = HADOOP.COM
master1.hadoop.com = HADOOP.COM

Then restart the KDC and retry.
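A sketch of that restart step (service unit names vary by OS; on RHEL/CentOS they are typically krb5kdc and kadmin, on Ubuntu krb5-kdc and krb5-admin-server):

# systemctl restart krb5kdc kadmin
# systemctl status krb5kdc kadmin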
06-05-2024
12:51 AM
1 Kudo
@rizalt There are a couple of things to validate.

Step 1: Pre-requisites
Kerberos Server: Ensure you have a Kerberos Key Distribution Center (KDC) and an administrative server set up.
DNS: Proper DNS setup is required for both forward and reverse lookups.
NTP: Time synchronization across all nodes using Network Time Protocol (NTP).
HDP Cluster: A running Hortonworks Data Platform (HDP) cluster.

Step 2: Check your /etc/hosts file and ensure your KDC host is assigned the domain HADOOP.COM to match your KDC credentials:

# hostname -f

Step 3: Once that matches, edit the Kerberos configuration file (/etc/krb5.conf) on all nodes to point to your KDC; you can scramble the sensitive info and share it:

[libdefaults]
default_realm = HADOOP.COM
dns_lookup_realm = false
dns_lookup_kdc = false

[realms]
HADOOP.COM = {
kdc = kdc.hadoop.com
admin_server = admin.hadoop.com
}

[domain_realm]
.hadoop.com = HADOOP.COM
hadoop.com = HADOOP.COM

Step 4: Locate your kadm5.acl file and ensure it looks like this:

*/admin@HADOOP.COM *

Step 5: Restart the KDC and admin servers as root or with sudo:

# systemctl restart krb5kdc
# systemctl restart kadmin

Step 6: Check the Kerberos ticket: ensure that the ticket is obtained correctly.

kinit -kt /etc/security/keytabs/hdfs.keytab hdfs/hostname@HADOOP.COM
klist

If your setup is correct you will see output like below:

Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: hdfs/hostname@HADOOP.COM

Valid starting       Expires              Service principal
06/05/2024 09:50:21  06/06/2024 09:50:21  krbtgt/HADOOP.COM@HADOOP.COM
        renew until 06/05/2024 09:50:21
06/05/2024 09:50:22  06/06/2024 09:50:21  HTTP/hostname@HADOOP.COM
        renew until 06/05/2024 09:50:21

Hope that helps
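If the kinit in Step 6 fails, it is worth checking that the keytab actually contains the principal you are requesting (a minimal check, using the keytab path from Step 6; adjust the hostname to your FQDN):

# klist -kt /etc/security/keytabs/hdfs.keytab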
01-09-2024
10:05 AM
@achemeleu Welcome, and thanks @DianaTorres for pinging me on this one. I provided 2 solutions; see these threads about a similar case: Ambari stuck1, Ambari stuck2. Can you check the above solutions and see if one works for you too? In case it doesn't, please share your HDP version, database type/version, ambari-server logs, OS type/version, and a brief background of the steps you executed before getting stuck. Please let us know whether that resolved your issue. Geoffrey
04-15-2023
02:13 PM
@harry_12 Assumption: a non-kerberized sandbox. User creation in the Ambari UI should auto-create the user's home directory. Let's try out this recommended approach.

On your Ambari Server host, back up and then edit the ambari.properties file:

# cp /etc/ambari-server/conf/ambari.properties /etc/ambari-server/conf/ambari.properties_<$date>

Edit using vi in this example:

# vi /etc/ambari-server/conf/ambari.properties

For consistency, group it alphabetically; add the line below as shown (see the last line):

ambari.post.user.creation.hook=/var/lib/ambari-server/resources/scripts/post-user-creation-hook.sh

#Sat Apr 15 21:49:53 CEST 2023
agent.package.install.task.timeout=1800
agent.stack.retry.on_repo_unavailability=false
agent.stack.retry.tries=5
agent.task.timeout=900
agent.threadpool.size.max=25
ambari-server.user=root
ambari.python.wrap=ambari-python-wrap
ambari.post.user.creation.hook=/var/lib/ambari-server/resources/scripts/post-user-creation-hook.sh

Save the new ambari.properties and restart Ambari server:

# ambari-server restart

Recreate a new user and see if the home dir is auto-created in /user/<New_user>. Please let me know if that helped.
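To confirm the hook actually fired (a quick check; <New_user> is whatever name you created in the Ambari UI), list the HDFS home directories and look for the new entry:

# sudo -u hdfs hdfs dfs -ls /user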
04-15-2023
12:37 PM
@harry_12 Can you share the download link for the sandbox? I want to try it.