03-15-2016 01:39 AM
I installed a CDH5 cluster and configured it to authenticate with MIT Kerberos and everything seems to be fine.
I have HA configured for MRv2 on YARN and for HDFS, and now Kerberos has become a single point of failure.
I want to configure it for high availability too.
I found this manual from MIT: http://web.mit.edu/kerberos/krb5-latest/doc/admin/install_kdc.html
It describes how to configure a Kerberos slave server manually, but I'm not sure if this is the way to go, because it is not Hadoop specific and it does not describe how to tell the cluster about the slave Kerberos server and enable failover.
I could not find any documentation from Cloudera about this process or any menu items in Cloudera manager that give any hint on how to do it.
Can you please refer me to documentation or a guide for configuring Kerberos HA in CDH5 (preferably using Cloudera Manager)?
04-14-2017 03:17 PM
In an MIT Kerberos master/slave setup, I configure Cloudera Manager with the hostname of the master KDC during the Enable Kerberos wizard. Then, after the wizard completes, I go back into the settings and add the slave KDC in a safety valve:
Administration / Settings
CATEGORY = Kerberos
Advanced Configuration Snippet (Safety Valve) for the Default Realm in krb5.conf
kdc = slave.host.name
Then stop your cluster, deploy the Kerberos Client Configuration, and start the cluster.
This assumes that you have Cloudera Manager managing the /etc/krb5.conf file. Otherwise, just add the above line to each server's /etc/krb5.conf in the same section as the existing kdc = line for your KDC master.
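For reference, the resulting realm section of /etc/krb5.conf would look roughly like the sketch below (hostnames and the realm name here are placeholders; substitute your actual master and slave KDC hosts and realm):

```ini
[realms]
  EXAMPLE.COM = {
    ; Clients try KDCs in the order listed, so the master is
    ; preferred and the slave is used when the master is unreachable.
    kdc = master.host.name
    kdc = slave.host.name
    ; kadmind only runs on the master, so admin_server stays
    ; pointed at the master KDC.
    admin_server = master.host.name
  }
```

Because MIT Kerberos clients simply walk the kdc = entries in order, no Hadoop-specific failover configuration is needed beyond listing both KDCs.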