
CM upgrade - stale Kerberos configuration


Hi,

After an upgrade from CM 5.11 to 5.13, Cloudera Manager complains with a red exclamation mark: "Cluster has stale Kerberos client configuration."

 

The cluster was all green before the upgrade and had no problems with the Kerberos configuration (/etc/krb5.conf).

 

What is more concerning is that, after opening this warning, three (gateway) nodes do not require an update, but the rest of them do:

 

Consider stopping roles on these hosts to ensure that they are updated by this command:
ip-10-197-13-169.eu-west-1.compute.internal; ip-10-197-15-82.eu-west-1.compute.internal; ip-10-197-18-[113, 248].eu-west-1.compute.internal...

 

 

But the command is not there. What should I do? Stop all of CDH and then rerun the deploy?

 

Thanks for the advice,

T.

 

1 ACCEPTED SOLUTION

Master Guru

@Tomas79,

 

The "Cluster has stale Kerberos client configuration" message indicates that some configuration change in Cloudera Manager touched your Kerberos configuration and resulted in a change to the managed krb5.conf file.
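If you want to see exactly which services are reporting stale client configuration, the Cloudera Manager REST API exposes a clientConfigStalenessStatus field per service. The sketch below is illustrative only; the host, port, credentials, API version, and cluster name are placeholders you would replace with your own.

import requests
from urllib.parse import quote

# Placeholders -- substitute your own CM host, credentials, and cluster name.
CM_API = "http://cm.example.com:7180/api/v18"   # v18 roughly matches CM 5.13
AUTH = ("admin", "admin")
CLUSTER = quote("Cluster 1")

# Each service carries clientConfigStalenessStatus:
# FRESH, STALE, or STALE_REFRESHABLE.
resp = requests.get(f"{CM_API}/clusters/{CLUSTER}/services", auth=AUTH)
resp.raise_for_status()
for svc in resp.json()["items"]:
    print(svc["name"], svc.get("clientConfigStalenessStatus"))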

 

I am not sure what the upgrade may have done, but it would be worth checking your Cloudera Manager configuration to see.

Try going to Administration --> Settings and then click the History and Rollback link.

 

See if there were any recent changes to your Kerberos configuration.

 

If you don't find anything conclusive, the following should clear this up:

- stop CDH and the Cloudera Management Service

- copy aside one of your existing /etc/krb5.conf files (for later comparison)

- from the cluster drop-down on the Cloudera Manager home page, choose Deploy Kerberos Client Configuration and deploy (this step can also be scripted; see the sketch after this list)

- after the deploy is complete, start the Cloudera Management Service and CDH
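For the scripted variant of the deploy step: recent CM API versions expose a deployClusterClientConfig command that deploys the Kerberos client configuration (krb5.conf); please verify against your CM version's API docs that it is available. The host, credentials, and cluster name below are placeholders.

import requests
from urllib.parse import quote

CM_API = "http://cm.example.com:7180/api/v18"   # placeholder host and version
AUTH = ("admin", "admin")                       # placeholder credentials
CLUSTER = quote("Cluster 1")                    # your cluster's name in CM

# An empty host list asks CM to deploy krb5.conf to all hosts in the cluster.
resp = requests.post(
    f"{CM_API}/clusters/{CLUSTER}/commands/deployClusterClientConfig",
    auth=AUTH, json={"items": []})
resp.raise_for_status()
print(resp.json())   # a command object you can poll until it finishes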

 

If the issue still occurs, let us know.

You may also want to compare the previous and new /etc/krb5.conf files to see if there are differences.
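For the comparison itself, a unified diff of the saved copy against the deployed file makes any change obvious. The backup path below is just an example; use whatever name you gave the copy in the step above.

import difflib

# Example paths; the backup name is whatever you chose when copying the file aside.
with open("/root/krb5.conf.bak") as f:
    before = f.readlines()
with open("/etc/krb5.conf") as f:
    after = f.readlines()

for line in difflib.unified_diff(before, after,
                                 fromfile="krb5.conf (before)",
                                 tofile="krb5.conf (after)"):
    print(line, end="")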

 

Not sure what happened to cause this situation, but the steps should help (as you suggested).


3 REPLIES



I stopped CDH and did a Kerberos configuration redeploy.

The /etc/krb5.conf is more or less the same.

The only difference is the last line, "[domain_realm]", which was added by CM.

 

After the redeploy, CDH started and now everything is green.

Thanks

Tomas

 

[libdefaults]
    default_realm = MYREALM.LOCAL
    dns_lookup_kdc = false
    dns_lookup_realm = false
    ticket_lifetime = 86400
    renew_lifetime = 604800
    forwardable = true
    default_tgs_enctypes = aes256-cts aes128-cts
    default_tkt_enctypes = aes256-cts aes128-cts
    permitted_enctypes = aes256-cts aes128-cts
    udp_preference_limit = 1
    kdc_timeout = 3000

[realms]
    MYREALM.LOCAL = {
        kdc = 10.197.16.197 10.197.16.88
        admin_server = 10.197.16.197 10.197.16.88
    }

[domain_realm]

 

New Contributor

I was not stopping CDH and the Cloudera Management Service before deploying the Kerberos configuration.

 

 

Now I stopped CDH and the Cloudera Management Service and then deployed the Kerberos configuration.
It worked for me, and /etc/krb5.conf is updated on all hosts in the cluster.

 

This issue is resolved.

Thanks.