Support Questions


HDP Upgrade 2.4.0.0 to 2.4.2.0 Failing a pre-check

Rising Star

So I upgraded Ambari to the bleeding-edge 2.2.2.0 today and was about to roll out HDP 2.4.2.0-258, but I'm stumped at the pre-check script, even though all the HDP-2.4.2.0 packages were successfully installed across the board.

Upgrade to HDP-2.4.2.0

Requirements

You must meet these requirements before you can proceed.

A previous upgrade did not complete. Reason: Upgrade attempt (id: 1, request id: 2,681, from version: 2.2.6.0-2800, to version: 2.4.0.0-169) did not complete task with id 17,829 since its state is FAILED instead of COMPLETED. Please ensure that you called: ambari-server set-current --cluster-name=$CLUSTERNAME --version-display-name=$VERSION_NAME Further, change the status of host_role_command with id 1 to COMPLETED

Failed on: HugeData

I ran the command as instructed:

ambari-server set-current --cluster-name=HugeData --version-display-name=HDP-2.4.2.0

To no avail... I'm stumped at this point and not sure where to look to change that manually in the backend. As far as I'm concerned, we had been running 2.4.0.0-169 without any issues (except for the NN failover) for about a month...

According to the error above, we missed something in the 2.2.x to 2.4.x upgrade... I'm sure there's a value I can edit to mark that task as successful, but I'm not sure where it lives right now.
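In case it points anyone in the right direction: my best guess is that the failed task the pre-check complains about sits in the host_role_command table of the Ambari backend database, so something along these lines (assuming the default PostgreSQL backend and the stock ambari database; column names are from memory and I have not actually run this) would show it and flip its status:

# connect to the Ambari backend database (default PostgreSQL setup assumed)
sudo -u postgres psql ambari
-- look at the failed task the pre-check reported (17829 in my case)
SELECT task_id, request_id, role, status FROM host_role_command WHERE task_id = 17829;
-- mark it COMPLETED, as the pre-check message suggests
UPDATE host_role_command SET status = 'COMPLETED' WHERE task_id = 17829;

Obviously take a database backup before touching anything there.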

Your input would be much appreciated 🙂

1 ACCEPTED SOLUTION

Rising Star

So in the end, knowing my config was fine, I added stack.upgrade.bypass.prechecks=true to /etc/ambari-server/conf/ambari.properties and chose to disregard the warning. The upgrade went fine and all tests are green. Essentially we went from HDP 2.1 back then to the bleeding edge, and some steps had to be done manually; somehow after a few tries we succeeded, but most likely left some artifacts behind...
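For anyone else who lands here, the whole change amounts to roughly this (assuming a stock Ambari install; the restart is just to make sure the property is picked up):

# let failed pre-checks be bypassed instead of blocking the upgrade
echo "stack.upgrade.bypass.prechecks=true" >> /etc/ambari-server/conf/ambari.properties
ambari-server restart

After that, the failed pre-check can be ignored and the upgrade allowed to proceed.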

I'm still interested to find out where this entry is located and where I could clean it up.
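If anyone knows the schema better, please correct me, but I suspect the leftover record is somewhere in the upgrade-related tables of the Ambari database; something like this (PostgreSQL assumed, untested) should at least list the old upgrade attempts:

sudo -u postgres psql ambari
-- list past upgrade attempts recorded by Ambari; the failed 2.2.6 -> 2.4.0 one should show up here
SELECT * FROM upgrade;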

Thankfully we're getting professional services soon and building a brand-new, pro-level cluster with the help of some Hortonworks engineers, so there won't be any weird or unknown configuration choices.


10 REPLIES

Rising Star

@Eric Periard, have you managed to find any solution for this?
I've been stuck on the same error for over a week.

Many thanks in advance. Best regards