
Hive-on-Tez Not Restarting During Cluster-Level Rolling Upgrade Despite All Roles Selected

Explorer

We recently performed a cluster-level rolling upgrade using Cloudera Manager (CDP Private Cloud Base 7.1.9) on our High Availability (HA) enabled cluster. During the rolling restart phase, we selected:

* All services (including Hive-on-Tez)
* All roles (including non-worker roles)
* Restart roles with stale configurations only
* Restart roles with old software versions only
* Redeploy client configuration

Despite these selections, the Hive-on-Tez service (specifically the HiveServer2 role) was not restarted. Post-upgrade, HiveServer2 continues to show a stale configuration warning in Cloudera Manager. This behavior suggests that HiveServer2 was skipped during the rolling restart, even though it should have been included based on the selected options.
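The stale-configuration flag can also be confirmed programmatically before and after the restart. Below is a minimal sketch against the Cloudera Manager REST API; the host, port, credentials, API version, and cluster/service names are placeholders, not values from this cluster:

```python
"""Sketch: read the Hive-on-Tez service record from the CM REST API and
report its configStalenessStatus. All connection details are placeholders."""
import base64
import json
import urllib.request


def service_url(cm_host, cluster, service, api_version="v54"):
    # Endpoint for reading a single service; the response includes
    # the configStalenessStatus field shown in the CM UI.
    return (f"https://{cm_host}:7183/api/{api_version}"
            f"/clusters/{cluster}/services/{service}")


def read_staleness(url, user, password):
    # Basic-auth GET; not invoked here because it needs a live CM server.
    req = urllib.request.Request(url)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["configStalenessStatus"]
```

A value of `STALE` for the `hive_on_tez` service after the wizard completes would confirm that HiveServer2 was indeed skipped.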

We would appreciate your assistance in:

1. Confirming whether this is expected behavior or a known issue.
2. Advising on the correct procedure to ensure Hive-on-Tez roles are properly restarted during rolling upgrades.
3. Recommending any patches or configuration changes to prevent this in future upgrades.

Please let us know if any additional logs or diagnostic information are needed.
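For reference, a rolling restart of just the Hive-on-Tez service can also be issued directly through the CM REST API, independent of the cluster-level wizard. A minimal sketch follows; the host, credentials, API version, and cluster name are placeholders, and the `staleConfigsOnly` field mirrors the wizard's stale-configuration option:

```python
"""Sketch: trigger a service-level rolling restart via the CM REST API.
Connection details are placeholders; the POST helper is defined but not
called because it requires a live Cloudera Manager server."""
import base64
import json
import urllib.request


def rolling_restart_url(cm_host, cluster, service, api_version="v54"):
    # Endpoint for the service-level rollingRestart command.
    return (f"https://{cm_host}:7183/api/{api_version}"
            f"/clusters/{cluster}/services/{service}/commands/rollingRestart")


def trigger_rolling_restart(url, user, password, stale_configs_only=True):
    body = json.dumps({"staleConfigsOnly": stale_configs_only}).encode()
    req = urllib.request.Request(url, data=body, method="POST")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # command object describing the restart
```

This targets only the one service, so it can serve as a workaround when the cluster-level wizard skips it.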

1 REPLY

Guru

@shubham_rai 

We would need the following details:

 1. What versions of Cloudera Manager and Cloudera Runtime are you running?

 2. Please provide the logs from the locations below:
    /var/run/cloudera-scm-agent/process/<hive on tez dir>/logs
    /var/log/hive/<hs2 log>
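To make collection easier, the two locations above can be bundled into a single archive with a short script. The glob patterns below are assumptions about typical CM process-directory naming, not exact paths from this cluster; adjust them to match the actual `<hive on tez dir>` on your host:

```python
"""Sketch: bundle the HiveServer2 logs requested above into one tarball.
Glob patterns are assumptions; missing paths are simply skipped."""
import glob
import tarfile


def bundle_hs2_logs(out_path="/tmp/hs2-diag.tar.gz"):
    patterns = [
        # CM agent process logs for the HS2 role (stdout/stderr)
        "/var/run/cloudera-scm-agent/process/*HIVESERVER2*/logs/*",
        # HiveServer2 service logs
        "/var/log/hive/*",
    ]
    with tarfile.open(out_path, "w:gz") as tar:
        for pattern in patterns:
            for path in glob.glob(pattern):
                tar.add(path)
    return out_path
```

Running `bundle_hs2_logs()` on the HiveServer2 host produces a single archive you can attach to this thread.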