Created 12-20-2018 05:53 PM
For example: there are 500 nodes in the Hadoop cluster and the Linux team wants to apply OS patches/upgrades in batches. How can we (the Hadoop admins) ensure data availability and no impact on jobs without decommissioning? Decommissioning dense nodes (say 90 TB each) takes forever, so is there a way to do this without decommissioning them?
Say we have rack awareness set up, with 6 nodes per rack.
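One point worth making concrete: with default NameNode settings a DataNode is only declared dead after roughly 10.5 minutes of missed heartbeats, so a quick patch-and-reboot of a few nodes does not trigger re-replication at all. Below is a minimal sketch of a per-batch flow that checks block health between batches. The hostnames, patch command, and sleep intervals are assumptions, not a tested runbook; on an Ambari-managed HDP cluster you would stop and start the DataNode and NodeManager through Ambari rather than rely on a reboot alone.

```bash
#!/usr/bin/env bash
# Rough sketch, not a tested runbook: patch one small batch at a time (nodes from a
# single rack), checking HDFS block health before and after each batch.
# dn01/dn02 are hypothetical hostnames; the yum command is just an example.
set -euo pipefail

BATCH=("dn01.example.com" "dn02.example.com")

hdfs_is_healthy() {
  # hdfs fsck prints "Status: HEALTHY" when no blocks are missing or corrupt.
  hdfs fsck / 2>/dev/null | grep -q "Status: HEALTHY"
}

# 1. Never start a batch while HDFS is already degraded.
hdfs_is_healthy || { echo "HDFS is not healthy, aborting" >&2; exit 1; }

# 2. Patch and reboot the batch. On HDP, stop the DataNode and NodeManager via
#    Ambari first so clients fail over cleanly instead of hitting dying daemons.
for host in "${BATCH[@]}"; do
  ssh "$host" "sudo yum -y update && sudo reboot" || true
done

# 3. Give the nodes time to come back, then confirm they re-registered and that
#    no blocks are missing before moving on to the next batch.
sleep 300
hdfs dfsadmin -report | grep -E "Live datanodes|Dead datanodes"
until hdfs_is_healthy; do
  echo "Waiting for HDFS to report HEALTHY before the next batch..."
  sleep 60
done
```

Keeping each batch inside a single rack means that, with default rack-aware placement and replication factor 3, every block still has at least one live replica on another rack while the batch is down.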
Created 12-20-2018 09:01 PM
This should address your concerns:
https://community.hortonworks.com/questions/4940/hdp-os-upgradepatching-best-practices.html#
As always, be cautious with a production cluster: test and document the procedure in DEV, UAT, or pre-PROD first. Don't say you weren't warned 🙂
Happy Hadooping!!
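In addition to the linked thread: if the cluster is on Apache Hadoop 2.9/3.0 or later (e.g. HDP 3.x), HDFS also has a DataNode maintenance state aimed at exactly this case; unlike decommissioning, it does not re-replicate the node's blocks as long as enough live replicas remain. The sketch below follows the Apache "DataNode Admin Guide"; the file path, hostnames, and expiry timestamp are assumptions, and the property and JSON field names should be verified against your Hadoop version.

```bash
# Sketch only (Hadoop 2.9/3.0+). Relevant hdfs-site.xml settings (set via Ambari on HDP):
#   dfs.namenode.hosts.provider.classname =
#       org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager
#   dfs.hosts = /etc/hadoop/conf/dfs.hosts.json
#   dfs.namenode.maintenance.replication.min = 1

# In a real cluster this combined hosts file would list every DataNode (it replaces
# the include/exclude files); only the batch being patched is flipped to
# IN_MAINTENANCE. Expiry is epoch milliseconds. Two hypothetical nodes shown.
cat > /etc/hadoop/conf/dfs.hosts.json <<'EOF'
[
  { "hostName": "dn01.example.com", "adminState": "IN_MAINTENANCE",
    "maintenanceExpireTimeInMS": 1545523200000 },
  { "hostName": "dn02.example.com", "adminState": "IN_MAINTENANCE",
    "maintenanceExpireTimeInMS": 1545523200000 }
]
EOF

# Make the NameNode re-read the hosts file. Blocks on these nodes are not
# re-replicated (unlike decommissioning) while the nodes are in maintenance.
hdfs dfsadmin -refreshNodes

# Patch and reboot the nodes, then set adminState back to "NORMAL" (or let the
# expiry pass) and run -refreshNodes again.
```

For a quick reboot this may be overkill, but for longer OS work it avoids both the re-replication storm of decommissioning and the risk of the NameNode declaring the nodes dead.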
Created 12-21-2018 09:32 PM
Thanks! @Geoffrey Shelton Okot
Created 12-22-2018 06:56 PM
If this answer addressed your question, please take a moment to log in and click the "Accept" link on the answer.