HDP OS Upgrade/Patching Best Practices
Created on 11-30-2015 08:32 PM - edited 09-16-2022 02:50 AM
Are there any best practices or documentation around patching or upgrading the OS (e.g., upgrading CentOS 6 to 7, or applying security patches) while the cluster is running?
Thanks,
Created 12-01-2015 01:12 AM
This post by Lester Martin sums it up really well: https://martin.atlassian.net/wiki/pages/viewpage.action?pageId=36044812
Here is a summary:
- Since most OS patches/upgrades require a reboot, it is best to schedule the activity around a planned outage.
- It is also recommended to go through the exercise in a lower-level environment before applying the changes in a PROD environment.
- To apply the changes while the cluster is up, the patch/upgrade has to be applied in a rolling manner: stop the components on the host from Ambari, apply the changes, reboot the host, then start the Hadoop services again from Ambari. Repeat for each host.
For a large cluster this process will have to be scripted; the stop and start steps can be performed using the Ambari REST APIs, as in the sketch below.
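To make the rolling approach concrete, here is a rough Python sketch of the stop/start calls against the Ambari REST API (v1). The Ambari URL, cluster name, host names, and credentials are placeholders, and the exact endpoints and payloads should be verified against your Ambari version; treat this as an outline rather than a drop-in script.

```python
# Rough sketch: rolling stop/start of all Hadoop components on each host
# via the Ambari REST API (v1). All names and credentials below are
# placeholders (assumptions), not values from this thread.
import time
import requests

AMBARI = "http://ambari-server.example.com:8080"   # assumption: your Ambari server
CLUSTER = "mycluster"                              # assumption: your cluster name
AUTH = ("admin", "admin")                          # assumption: admin credentials
HEADERS = {"X-Requested-By": "ambari"}             # required header for Ambari write calls

def set_host_components_state(host, state):
    """Set every component on `host` to `state` ('INSTALLED' = stopped, 'STARTED' = running)."""
    url = f"{AMBARI}/api/v1/clusters/{CLUSTER}/hosts/{host}/host_components"
    body = {
        "RequestInfo": {"context": f"Set components on {host} to {state}"},
        "Body": {"HostRoles": {"state": state}},
    }
    r = requests.put(url, json=body, auth=AUTH, headers=HEADERS)
    r.raise_for_status()

hosts = ["worker01.example.com", "worker02.example.com"]  # patch one host at a time
for host in hosts:
    set_host_components_state(host, "INSTALLED")   # stop Hadoop components on this host
    # ... run your OS patch/upgrade and reboot here (ssh, config management, etc.) ...
    input(f"Patch and reboot {host}, then press Enter to restart its services...")
    set_host_components_state(host, "STARTED")     # bring the host's components back
    time.sleep(60)                                 # assumption: give HDFS/YARN time to settle
```

Setting a host component's state to INSTALLED is what stops it and STARTED brings it back, which mirrors what Ambari does when you click Stop/Start in the UI.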
Created 12-01-2015 03:22 PM
Thanks @bsaini. Do you know how HDFS rebalancing would work during an OS upgrade/reboot? When would HDFS start trying to rebalance the data residing on the DataNode being bounced?
Created 06-01-2016 12:20 PM
Thanks @bsaini.
We have set up HA on our NameNodes because we don't want the cluster to be unavailable.
So is there a best practice for patching a cluster that is supposed to be available 24/7?
How do we avoid rebalancing during patching?
When upgrading DataNodes in chunks, is there a way to make sure that a replica of each data block remains available on one of the servers that stays up?
/Claus
Created 12-01-2015 01:23 AM
@Andrew Watson This is an old guide but it has good hints: http://docs.hortonworks.com/HDPDocuments/Ambari-1.6.0.0/bk_upgrading_Ambari/content/ambari-chap8.htm...
OS upgrade (example)
Created 02-09-2016 03:19 PM
Hi Neeraj, is there any recent Hortonworks documentation describing a step-by-step procedure for performing an OS upgrade on a Hadoop cluster?
Created 02-09-2016 03:22 PM
Here's an old one; I can't find a newer version: https://ambari.apache.org/1.2.1/installing-hadoop-using-ambari/content/ambari-chap8.html
Created 02-09-2016 03:22 PM
We enabled HA for the NameNode and ResourceManager. What actions do we need to take care of while doing OS updates on an HDP cluster?
Created 02-09-2016 03:24 PM
Go through each server one by one: put the services on that host into maintenance mode and stop them, upgrade the OS to the latest minor version, reboot if necessary, and restart the services. Then move on to the next host (a rough sketch of the maintenance-mode API call follows below). @Ram D
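For the maintenance-mode step specifically, a minimal sketch against the Ambari REST API (v1) might look like this, assuming the same placeholder Ambari URL, cluster name, and credentials as in the earlier sketch; confirm the payload against your Ambari version before relying on it.

```python
# Rough sketch: toggling maintenance mode for a host through the Ambari REST
# API (v1), so alerts and bulk operations skip it while its OS is being patched.
# Host, cluster, URL, and credentials are placeholder assumptions.
import requests

AMBARI = "http://ambari-server.example.com:8080"   # assumption
CLUSTER = "mycluster"                              # assumption
AUTH = ("admin", "admin")                          # assumption
HEADERS = {"X-Requested-By": "ambari"}

def set_host_maintenance(host, on):
    """Turn maintenance mode ON/OFF for all components on `host`."""
    url = f"{AMBARI}/api/v1/clusters/{CLUSTER}/hosts/{host}"
    body = {
        "RequestInfo": {"context": "Maintenance mode for OS patching"},
        "Body": {"Hosts": {"maintenance_state": "ON" if on else "OFF"}},
    }
    requests.put(url, json=body, auth=AUTH, headers=HEADERS).raise_for_status()

set_host_maintenance("worker01.example.com", True)   # before stopping services and patching
# ... stop components, patch the OS, reboot, restart components ...
set_host_maintenance("worker01.example.com", False)  # after the host is healthy again
```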
Created 02-09-2016 03:32 PM
Thank you @Artem Ervits
