Created 01-03-2019 05:03 PM
Is there a way to test MIT Kerberos high availability functionality? Any approaches?
Thanks in advance.
Created 01-04-2019 03:12 PM
Yes, for sure, that's doable. I am assuming you have set up 2 KDCs on different networks but accessible to the cluster.
Assumptions:
You MUST have successfully configured the two KDCs, master and slave.
My realm = REALM, Master host = master-kdc.test.com, Slave host = slave-kdc.test.com
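For the client side of the failover to work, both KDCs are typically listed under the realm in /etc/krb5.conf on the cluster nodes. A minimal sketch, assuming the realm and hostnames above:
[realms]
  REALM = {
    kdc = master-kdc.test.com
    kdc = slave-kdc.test.com
    admin_server = master-kdc.test.com
  }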
Contents of /var/kerberos/krb5kdc/kpropd.acl:
host/master-kdc.test.com@REALM
host/slave-kdc.test.com@REALM
# Create the configuration for kpropd on both the Master and Slave KDC hosts:
# Create /etc/xinetd.d/krb5_prop with the following contents.
service krb5_prop
{
        disable         = no
        socket_type     = stream
        protocol        = tcp
        user            = root
        wait            = no
        server          = /usr/sbin/kpropd
}
# Configure xinetd to run as a persistent service on both the Master and Slave KDC hosts:
# systemctl enable xinetd.service
# systemctl start xinetd.service
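As a quick sanity check on both hosts, you can confirm xinetd is listening for kprop connections (this assumes krb5_prop is mapped to its usual port 754/tcp in /etc/services):
# ss -tlnp | grep 754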
# Copy the following files from the Master KDC host to the Slave KDC host:
/etc/krb5.conf
/var/kerberos/krb5kdc/kadm5.acl
/var/kerberos/krb5kdc/kdc.conf
/var/kerberos/krb5kdc/kpropd.acl
/var/kerberos/krb5kdc/.k5.REALM
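A minimal sketch of that copy step, assuming root SSH access from the Master to the Slave KDC host (hostnames from the assumptions above):
# scp /etc/krb5.conf slave-kdc.test.com:/etc/krb5.conf
# scp /var/kerberos/krb5kdc/{kadm5.acl,kdc.conf,kpropd.acl,.k5.REALM} slave-kdc.test.com:/var/kerberos/krb5kdc/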
# Perform the initial KDC database propagation to the Slave KDC:
# kdb5_util dump /usr/local/var/krb5kdc/slave_datatrans
# kprop -f /usr/local/var/krb5kdc/slave_datatrans slave-kdc.test.com
# Start the Slave KDC :
# systemctl enable krb5kdc
# systemctl start krb5kdc
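To confirm the Slave KDC came up cleanly, you can check the service status and the KDC log (the log path below assumes the default logging setup in kdc.conf):
# systemctl status krb5kdc
# tail /var/log/krb5kdc.log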
# Script to propagate the updates from the Master KDC to the Slave KDC. Create a cron job, or the like, to run this script on a frequent basis.
#!/bin/sh
# /var/kerberos/kdc-slave-propagate.sh
kdclist="slave-kdc.test.com"

/sbin/kdb5_util dump /usr/local/var/krb5kdc/slave_datatrans

for kdc in $kdclist
do
    /sbin/kprop -f /usr/local/var/krb5kdc/slave_datatrans $kdc
done
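For example, a cron entry that runs the script every 5 minutes could be placed in /etc/cron.d/kprop-sync (the cron.d format includes a user field; the script path is the one assumed in the comment above, and the interval is just a suggestion):
*/5 * * * * root /var/kerberos/kdc-slave-propagate.sh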
To test the KDC HA, shut down the master KDC and start the slave KDC. Note that in this setup both KDCs should NEVER be running at the same time; the crontab script takes care of propagating all changes in the master KDC database to the slave.
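A minimal failover simulation, assuming the hostnames from the assumptions above and systemd-managed services, could look like this.
On the Master KDC host:
# /sbin/kdb5_util dump /usr/local/var/krb5kdc/slave_datatrans
# /sbin/kprop -f /usr/local/var/krb5kdc/slave_datatrans slave-kdc.test.com
# systemctl stop krb5kdc
On the Slave KDC host:
# systemctl start krb5kdc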
CAUTION
Run the kprop before shutting down the master KDC. Then, to test the KDC HA, log on to the cluster Linux CLI and follow the steps below; in my steps I am using the root user.
Switch user to hive/spark/yarn, etc.
# su - hive
Check if the hive user still has a valid Kerberos ticket. The output below shows the hive user still has a valid ticket:
$ klist
Ticket cache: FILE:/tmp/krb5cc_507
Default principal: hdfs-host1@{REALM}

Valid starting     Expires            Service principal
12/28/16 22:57:11  12/29/16 22:57:11  krbtgt/{REALM}@{REALM}
        renew until 12/28/16 22:57:11
12/28/16 22:57:11  12/29/16 22:57:11  HTTP/host1.test.com@{REALM}
        renew until 12/28/16 22:57:11
12/28/16 22:57:11  12/29/16 22:57:11  HTTP/host1.com@{REALM}
        renew until 12/28/16 22:57:11
# Destroy the Kerberos tickets as user hive
$ kdestroy
Running the previous command shouldn't give you any output, and a subsequent klist should show no tickets. Now try getting a valid ticket by running a command of the format kinit -kt $keytab $principal:
$ kinit -kt /etc/security/keytabs/hive.keytab {PRINCIPAL}
Repeating the klist should now show a valid ticket for the hive user; this validates that the HA is functioning well.
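As an extra end-to-end check, assuming an HDFS client is installed on the node and the cluster services are reachable, a Kerberos-authenticated command should now succeed with the freshly acquired ticket:
$ hdfs dfs -ls /tmp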
Created 07-24-2021 12:49 PM
@Shelton Thank you so much
Created 01-07-2019 09:59 PM
I created a principal for my LDAP id as below.
kadmin.local: addprinc myid
WARNING: no policy specified for id@RXPERF.HDP.XX.COM; defaulting to no policy
Enter password for principal "id@RXPERF.HDP.XX.COM":
Re-enter password for principal "id@RXPERF.HDP.XX.COM":
Principal "id@RXPERF.HDP.XX.COM" created.
I haven't created any keytab for my id as of now.
Regarding the sync, I will update on that.
Thanks.
Created 01-07-2019 10:42 PM
Create the test user principal
Let's try this out. As root, create the user at the OS level:
# useradd test
Set password
# passwd test
Invoke the KDC admin CLI; run these commands from /etc/security/keytabs:
# kadmin.local
kadmin.local: addprinc test@RXPERF.HDP.XX.COM
Quit kadmin:
kadmin.local: q
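If you want to confirm the principal was actually created, kadmin.local can run a one-off query (listprincs accepts a glob pattern):
# kadmin.local -q "listprincs test*"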
Extract/Generate the keytab
Extracting the keytab is done in the ktutil shell, as a continuation of the previous step. The keytab name and principal are explicit inputs; it's usually good if they match the user for easy identification.
This will extract the keytab in the current directory, i.e. /etc/security/keytabs/; you can later move it to the user's home directory or the /tmp directory.
# sudo ktutil
ktutil: addent -password -p test@RXPERF.HDP.XX.COM -k 1 -e RC4-HMAC
Password for test@RXPERF.HDP.XX.COM:
ktutil: wkt test.keytab
ktutil: q
Now, to validate the above steps, run as the user test:
$ klist -kt /etc/security/keytabs/test.keytab
The output should look like
Keytab name: FILE:/etc/security/keytabs/test.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
   1 01/07/19 22:25:31 test@RXPERF.HDP.XX.COM (des3-cbc-sha1)
   1 01/07/19 22:25:31 test@RXPERF.HDP.XX.COM (aes128-cts-hmac-sha1-96)
   1 01/07/19 22:25:31 test@RXPERF.HDP.XX.COM (arcfour-hmac)
   1 01/07/19 22:25:31 test@RXPERF.HDP.XX.COM (des-cbc-md5)
   1 01/07/19 22:25:31 test@RXPERF.HDP.XX.COM (aes256-cts-hmac-sha1-96)
Now grab a ticket as the test user, using the format kinit -kt $keytab $principal:
$ kinit -kt /etc/security/keytabs/test.keytab test@RXPERF.HDP.XX.COM
Check for the ticket:
$ klist
Let me know if that works
Created 10-01-2019 01:58 AM
kadmin can't restart on the slave KDC or the master KDC. The log message is:
Oct 01 15:49:48 kdc01.test.local kadmind[24364]: Error. This appears to be a slave server, found kpropd.acl
Oct 01 15:49:48 kdc01.test.local systemd[1]: kadmin.service: control process exited, code=exited status=6
Oct 01 15:49:48 kdc01.test.local systemd[1]: Failed to start Kerberos 5 Password-changing and Administration.
When I removed kpropd.acl from /var/kerberos/krb5kdc/ on the slave and master nodes, kadmin works fine. How do I solve this problem?