
This article describes the setup of two separate KDCs in a Master/Slave configuration. This setup allows two clusters to share a single Kerberos realm, so that principals are recognized across clusters. A typical use case for this configuration is a Disaster Recovery cluster used as a warm standby. The high-level information for the article was found at , while the details were worked out through sweat and tears.

Execute the following command on both the Master and Slave KDC hosts, if the KDC is not already installed:

yum install krb5-server 

The following defines the KDC configuration for both clusters. This file, /etc/krb5.conf, must be copied to every node in both clusters.

[libdefaults]
  renew_lifetime = 7d
  forwardable = true
  default_realm = CUSTOMER.HDP
  ticket_lifetime = 24h
  dns_lookup_realm = false
  dns_lookup_kdc = false

[domain_realm]
   = CUSTOMER.HDP
   = CUSTOMER.HDP

[logging]
  default = FILE:/var/log/krb5kdc.log
  admin_server = FILE:/var/log/kadmind.log
  kdc = FILE:/var/log/krb5kdc.log

[realms]
  CUSTOMER.HDP = {
    admin_server =
    kdc =
    kdc =
  }
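Distributing the file can be scripted. A minimal sketch, assuming a hypothetical cluster_hosts.txt file containing one node FQDN per line:

```
shell% for host in $(cat cluster_hosts.txt); do scp /etc/krb5.conf root@${host}:/etc/krb5.conf; done
```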

Contents of /var/kerberos/krb5kdc/kadm5.acl:

*/admin@CUSTOMER.HDP *

Contents of /var/kerberos/krb5kdc/kdc.conf:

[kdcdefaults]
 kdc_ports = 88,750
 kdc_tcp_ports = 88,750

[realms]
 CUSTOMER.HDP = {
  kadmind_port = 749
  max_life = 12h 0m 0s
  max_renewable_life = 7d 0h 0m 0s
  master_key_type = aes256-cts
  supported_enctypes = aes256-cts aes128-cts des-hmac-sha1 des-cbc-md5 arcfour-hmac
 }

Contents of /var/kerberos/krb5kdc/kpropd.acl:


The KDC database must be initialized before the KDC daemon will start. Execute the following command from the Master KDC:

shell% kdb5_util create -s 
Loading random data 
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'CUSTOMER.HDP', 
master key name 'K/M@CUSTOMER.HDP' 
You will be prompted for the database Master Password. 
It is important that you NOT FORGET this password. 
Enter KDC database master key: <db_password>
Re-enter KDC database master key to verify: <db_password>

Now start the KDC and kadmin processes on the Master KDC only:

shell% systemctl enable krb5kdc 
shell% systemctl start krb5kdc 
shell% systemctl enable kadmin 
shell% systemctl start kadmin  
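To confirm the database was created, the default principals can be listed with kadmin.local (output abbreviated; the exact list varies by krb5 version):

```
shell% kadmin.local -q "listprincs"
Authenticating as principal root/admin@CUSTOMER.HDP with password.
K/M@CUSTOMER.HDP
kadmin/admin@CUSTOMER.HDP
krbtgt/CUSTOMER.HDP@CUSTOMER.HDP
...
```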

An administrator must be created to manage the Kerberos realm. The following command is used to create the administration principal from the Master KDC:

shell% kadmin.local -q "addprinc admin/admin" 
Authenticating as principal root/admin@CUSTOMER.HDP with password. 
WARNING: no policy specified for admin/admin@CUSTOMER.HDP; defaulting to no policy 
Enter password for principal "admin/admin@CUSTOMER.HDP": <admin_password>
Re-enter password for principal "admin/admin@CUSTOMER.HDP": <admin_password>
Principal "admin/admin@CUSTOMER.HDP" created. 
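As a quick sanity check, the new administrator should be able to authenticate and inspect the resulting ticket:

```
shell% kinit admin/admin@CUSTOMER.HDP
Password for admin/admin@CUSTOMER.HDP: <admin_password>
shell% klist
```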

Host principals must now be created for the Master and Slave KDC hosts. Execute the following commands from the Master KDC:

shell% kadmin
kadmin: addprinc -randkey host/
kadmin: addprinc -randkey host/

Extract the host key for the Slave KDC and store it in the slave's keytab file, /etc/krb5.keytab.slave:

kadmin: ktadd -k /etc/krb5.keytab.slave host/

Copy /etc/krb5.keytab.slave to the Slave KDC host and rename the file to /etc/krb5.keytab.
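For example, with scp (the Slave KDC host name is a placeholder):

```
shell% scp /etc/krb5.keytab.slave root@<slave_kdc_fqdn>:/etc/krb5.keytab
```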

Update /etc/services on each KDC host, if not present:

krb5_prop       754/tcp               # Kerberos slave propagation
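Adding the entry can be done idempotently; a minimal sketch (the SERVICES variable is only there so the snippet can be tried against a scratch copy of the file first):

```shell
# Append the kprop propagation entry to /etc/services if it is missing.
SERVICES="${SERVICES:-/etc/services}"
if ! grep -q '^krb5_prop[[:space:]]' "$SERVICES"; then
    printf 'krb5_prop       754/tcp               # Kerberos slave propagation\n' >> "$SERVICES"
fi
```

Running the snippet twice leaves only one entry, so it is safe to include in provisioning scripts.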

Install xinetd on the hosts of the Master and Slave KDC, if not already installed, to enable kpropd to execute:

yum install xinetd

Create the configuration for kpropd on both the Master and Slave KDC hosts. Create /etc/xinetd.d/krb5_prop with the following contents:

service krb5_prop
{
        disable         = no
        socket_type     = stream
        protocol        = tcp
        user            = root
        wait            = no
        server          = /usr/sbin/kpropd
}

Configure xinetd to run as a persistent service on both the Master and Slave KDC hosts:

systemctl enable xinetd.service
systemctl start xinetd.service
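Once xinetd is running, a quick way to confirm that kpropd is reachable is to check that something is listening on the propagation port defined in /etc/services:

```
shell% ss -tln | grep :754
```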

Copy the following files from the Master KDC host to the Slave KDC host:


Perform the initial KDC database propagation to the Slave KDC:

shell% kdb5_util dump /var/kerberos/krb5kdc/slave_datatrans
shell% kprop -f /var/kerberos/krb5kdc/slave_datatrans

The Slave KDC may be started at this time:

shell% systemctl enable krb5kdc 
shell% systemctl start krb5kdc 
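To confirm the initial propagation succeeded, the realm's principals should now be visible on the Slave KDC as well:

```
shell% kadmin.local -q "listprincs"
```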

Use the following script to propagate updates from the Master KDC to the Slave KDC. Create a cron job, or the like, to run this script on a frequent basis.

#!/bin/sh
kdclist=""

/usr/sbin/kdb5_util dump /var/kerberos/krb5kdc/slave_datatrans
for kdc in $kdclist
do
    /usr/sbin/kprop -f /var/kerberos/krb5kdc/slave_datatrans $kdc
done
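To run the propagation every few minutes, an entry such as the following could be placed in /etc/cron.d (the script path is an assumption):

```
*/5 * * * * root /usr/local/sbin/kprop_sync.sh
```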
Expert Contributor

@Terry Padgett Great article, but I have a problem.

While propagating to the slave I got an error:

# kprop -f /tmp/slave_datatrans <>
kprop: Connection refused while connecting to server
New Contributor

At least in my own experience, I had to make a few changes for this doc to work.

1) The ktadd step needs all servers, not just the slaves

2) kpropd.acl only goes on the slaves

3) The KDC master service start and master password steps need to be flipped (the daemon won't start until that password is set)

4) Minor change: you have a copy/paste bug in your automation script - it has values from the MIT Kerberos doc, which uses different paths than RHEL

5) Suggestion: tweak the cipher suites to not include weak crypto (just use the AES ones)

This doc was very helpful, thanks!

A tweak to change #1 for the ktadd - I was getting goofy errors about not being able to find a principal. I don't think even the MIT Kerberos docs included that step; they just had you export the slaves too, IIRC.


In the /etc/xinetd.d/krb5_prop file, the service should be krb5_prop rather than krb_prop if you added the service name as krb5_prop in /etc/services.

The Master doesn't require the /var/kerberos/krb5kdc/kpropd.acl file. If it's present, kadmin won't start.

Also, we need to add a step to create a keytab for the master and kinit with the same. Otherwise we will get the below error.

[root@node1 ]# kprop -f /usr/local/var/krb5kdc/slave_datatrans node3.openstacklocal
kprop: Key table entry not found while getting initial ticket

Please correct me if I am wrong.
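A sketch of that extra step (the Master's host principal is assumed to already exist, and the host name is a placeholder). Adding the Master's own host key to its default keytab lets kprop obtain its initial ticket:

```
shell% kadmin.local -q "ktadd -k /etc/krb5.keytab host/<master_kdc_fqdn>"
```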

New Contributor

I am stuck at the step below. I've tried multiple times, but to no avail. Please give me some pointers.

[root@node1 ]#kprop -f /usr/local/var/krb5kdc/slave_datatrans node3.openstacklocal
kprop: Key table entry not found while getting initial ticket
New Contributor

Don't we need the two packages below installed as well on the Master and Slave?

krb5-libs krb5-workstation

Not applicable

hi all,

thanks for the page. I have an error when I try to deploy the dump on the slave.

shell% kprop -s /etc/krb5.keytab -f /tmp/slave_datatrans -d

kprop: Key table entry not found while getting initial credentials

any idea?


I have the same issue; please suggest a solution.

@benoit moisan, did you find any solution?

Please guide me.