Let's say I want to upgrade the JDK in my HDP cluster without stopping it. I can start by upgrading the masters one by one, restarting services as I go, and then upgrade the workers in groups of 2 (the HDFS replication factor is 3). HA components (NameNode, ResourceManager, etc.) will have no downtime, but non-HA ones (History Server, Spark History Server) will have some. However, midway through the process some daemons will be running on the old Java version and some on the new one. Is this acceptable? Of course we'd like to avoid trouble. Has anyone tried this? A concrete example: HDP-2.2.6, Ambari-2.0.1, and upgrading Java from one 1.7.0 update release to a newer one. Any comments will be appreciated. Thanks!
Because HDFS writes block replicas to at least two racks, I can actually upgrade DataNodes rack by rack. But I'm still concerned about running services on disparate Java versions during the upgrade.
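One cheap way to gate each worker group is to wait until HDFS reports no under-replicated blocks before taking the next group down. A minimal sketch, assuming POSIX sh; the report text here is a captured sample so the snippet is self-contained, whereas on a real cluster you would pipe in `hdfs dfsadmin -report`:

```shell
#!/bin/sh
# Extract the under-replicated block count from `hdfs dfsadmin -report`
# output read on stdin.
under_replicated() {
  awk -F': ' '/^Under replicated blocks/ {print $2}'
}

# Sample report fragment (illustrative, not from a real cluster).
sample_report='Configured Capacity: 1000000000
Under replicated blocks: 0
Blocks with corrupt replicas: 0'

count=$(printf '%s\n' "$sample_report" | under_replicated)
if [ "$count" -eq 0 ]; then
  echo "safe to take down the next DataNode group"
else
  echo "wait: $count blocks still under-replicated"
fi
```

In the real loop you would run `hdfs dfsadmin -report | under_replicated` after restarting each group and only proceed once it returns 0.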
I know you are asking about a rolling upgrade of Java. I don't think it's a supported or safe approach. I found this useful on Java upgrades.
If you have a DR solution in place (an active-active setup), then you can switch over to the secondary, upgrade the primary, and then upgrade the secondary after switching back to the primary.
If it's urgent then I suggest opening a support ticket.
Hi @Neeraj Sabharwal, yeah, not safe... but still, thinking about it, since we are staying on the same major version (Java 1.7) there's a good chance it will work. We don't have DR and it's not urgent, but we do have a non-prod cluster to try it on.
@Predrag Minovic Do you have access to Hortonworks support?
Here are my thoughts:
Host 1 - bring all components down (HA will take care of some of them), upgrade Java, then bring all components back up
Host 2 - same approach
I do feel it will work in theory. Let's do this in non-prod first. I think you'll be headed in the right direction by doing it host by host.
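The host-by-host flow above can be driven through Ambari's REST API: stop every component on the host, switch the JDK, start them again. A rough sketch only; the Ambari URL, cluster name, credentials, and JDK path below are placeholders I made up, not values from this thread:

```shell
#!/bin/sh
# Placeholders (assumptions): adjust to your environment.
AMBARI=http://ambari-server:8080
CLUSTER=mycluster

# URL addressing every component on one host.
host_components_url() {
  echo "$AMBARI/api/v1/clusters/$CLUSTER/hosts/$1/host_components"
}

# Stop all components on a host (Ambari represents "stopped" as INSTALLED).
stop_all() {
  curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
    -d '{"RequestInfo":{"context":"Stop for JDK upgrade"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' \
    "$(host_components_url "$1")"
}

# Start them again after the JDK switch.
start_all() {
  curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
    -d '{"RequestInfo":{"context":"Start after JDK upgrade"},"Body":{"HostRoles":{"state":"STARTED"}}}' \
    "$(host_components_url "$1")"
}

# Per-host sequence (commented out; run against a real cluster only):
# stop_all host1
# ssh host1 'alternatives --set java /usr/jdk64/<new-jdk>/bin/java'  # path is illustrative
# start_all host1
```

Also remember that the Ambari server itself needs to be pointed at the new JDK (I believe `ambari-server setup -j <path>` does this) so that subsequent restarts pick it up.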