CM upgrade - stale Kerberos configuration


Hi,

After an upgrade from CM 5.11 to 5.13, Cloudera Manager complains with a red exclamation mark: "Cluster has stale Kerberos client configuration."

 

The cluster was all green before the upgrade and had no problems with the Kerberos configuration (/etc/krb5.conf).

 

What is more concerning: after opening this warning, three (gateway) nodes do not require the update, but the rest of them do:

 

Consider stopping roles on these hosts to ensure that they are updated by this command:
ip-10-197-13-169.eu-west-1.compute.internal; ip-10-197-15-82.eu-west-1.compute.internal; ip-10-197-18-[113, 248].eu-west-1.compute.internal...

 

 

But the command is nowhere to be found. What should I do? Stop the whole CDH cluster and then rerun the deploy?

 

Thanks for the advice,

T.
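A side note for anyone hitting the same warning: the staleness CM is reporting can also be inspected through its REST API, which is sometimes clearer than the UI. Below is a minimal sketch; the API version (v12), CM host, cluster name, and credentials are assumptions to adjust for your deployment.

import requests

BASE = "http://cm-host.example.com:7180/api/v12"   # assumption: your CM server and API version
AUTH = ("admin", "admin")                          # assumption: your API credentials
CLUSTER = "Cluster1"                               # assumption: your cluster name

# Each ApiService exposes configStalenessStatus and clientConfigStalenessStatus
# (FRESH, STALE_REFRESHABLE, or STALE) since API v6.
resp = requests.get("%s/clusters/%s/services" % (BASE, CLUSTER), auth=AUTH)
resp.raise_for_status()
for svc in resp.json()["items"]:
    print(svc["name"],
          "config:", svc.get("configStalenessStatus"),
          "client config:", svc.get("clientConfigStalenessStatus"))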

 

3 REPLIES



I stopped CDH and did a Kerberos configuration redeploy.

The /etc/krb5.conf is more or less the same as before.

The only difference is the last line, [domain_realm], which was added by CM.

 

After the redeploy, CDH started and now everything is green.

Thanks

Tomas

 

[libdefaults]
    default_realm = MYREALM.LOCAL
    dns_lookup_kdc = false
    dns_lookup_realm = false
    ticket_lifetime = 86400
    renew_lifetime = 604800
    forwardable = true
    default_tgs_enctypes = aes256-cts aes128-cts
    default_tkt_enctypes = aes256-cts aes128-cts
    permitted_enctypes = aes256-cts aes128-cts
    udp_preference_limit = 1
    kdc_timeout = 3000

[realms]
    MYREALM.LOCAL = {
        kdc = 10.197.16.197 10.197.16.88
        admin_server = 10.197.16.197 10.197.16.88
    }

[domain_realm]
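To confirm that the redeployed file really is identical on every machine, you can hash /etc/krb5.conf across the cluster. A minimal sketch, assuming passwordless SSH from the machine you run it on; the host list is a placeholder to fill in with your cluster's hosts:

import subprocess

# Placeholders: list every cluster host here.
HOSTS = [
    "ip-10-197-13-169.eu-west-1.compute.internal",
    "ip-10-197-15-82.eu-west-1.compute.internal",
]

digests = {}
for host in HOSTS:
    # md5sum prints "<digest>  <path>"; keep only the digest.
    out = subprocess.check_output(["ssh", host, "md5sum /etc/krb5.conf"])
    digests[host] = out.split()[0]

if len(set(digests.values())) == 1:
    print("/etc/krb5.conf is identical on all hosts")
else:
    for host, digest in sorted(digests.items()):
        print(host, digest.decode())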

 

New Contributor

At first I was deploying the Kerberos configuration without stopping CDH and the Cloudera Management Service.

Now I stopped CDH and the Cloudera Management Service and then deployed the Kerberos configuration.
It worked for me, and /etc/krb5.conf is updated on all hosts in the cluster.
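For reference, the same stop / deploy / start sequence can be scripted against the CM REST API instead of clicking through the UI. This is only a sketch: the deployClusterClientConfig endpoint for redeploying the Kerberos client configuration matches my reading of the CM 5.x API docs, so verify it against your CM version, and the host, credentials, and cluster name are assumptions.

import time
import requests

BASE = "http://cm-host.example.com:7180/api/v12"   # assumption: your CM server and API version
AUTH = ("admin", "admin")                          # assumption: your API credentials
CLUSTER = "Cluster1"                               # assumption: your cluster name

def run_command(path):
    """POST a CM command, then poll /commands/{id} until it finishes."""
    cmd = requests.post(BASE + path, auth=AUTH).json()
    while cmd.get("active"):
        time.sleep(10)
        cmd = requests.get("%s/commands/%s" % (BASE, cmd["id"]), auth=AUTH).json()
    print(path, "->", "success" if cmd.get("success") else "failed")

run_command("/clusters/%s/commands/stop" % CLUSTER)    # stop CDH
run_command("/cm/service/commands/stop")               # stop Cloudera Management Service
# deployClusterClientConfig redeploys the Kerberos client configuration
# (/etc/krb5.conf); with an empty body it should target all hosts, but
# verify this against the API docs for your CM version.
run_command("/clusters/%s/commands/deployClusterClientConfig" % CLUSTER)
run_command("/cm/service/commands/start")
run_command("/clusters/%s/commands/start" % CLUSTER)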

 

This issue is resolved.

Thanks.