
Rolling upgrade from HDF 3.1.1.0 to HDF 3.1.2 fails

Contributor

Hi All,

I am trying to run a rolling upgrade between the versions in the subject line.

It fails with a very strange message. I have attached a screenshot of the current state:

86403-2018-08-08-22-23-41.png

In ambari-server.log I see these lines:

WARN [Server Action Executor Worker 2015] ServerActionExecutor:471 - Task #2015 failed to complete execution due to thrown exception: java.lang.NullPointerException:null
java.lang.NullPointerException
        at org.apache.ambari.server.serveraction.upgrades.ConfigureAction.execute(ConfigureAction.java:200)
        at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:550)
        at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:466)
        at java.lang.Thread.run(Thread.java:745)

Please help.

Thanks,

Ilya

1 ACCEPTED SOLUTION

Contributor

In the end, I created a new cluster with HDF 3.2.0.

With NiFi Registry, the migration was very fast.

Thanks to all who tried to help.


17 REPLIES


You might need to analyze your tables one by one in the Ambari database to find where it fetches the HDF-3.0 info.

Some useful tables are metainfo and hostcomponentstate, plus the tables I mentioned in my previous comment.
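
For example (a minimal sketch; the column names assume a stock Ambari schema, so adjust them to your version):

select metainfo_key, metainfo_value from metainfo;
select component_name, service_name, version, current_state from hostcomponentstate;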


@Ilya Li, it looks like you have a null reference in the upgrade_history table.

Can you review the entries from the query below and see whether from_repo_version_id or target_repo_version_id is NULL:

select * from upgrade_history; -- look for the ZooKeeper entry and correct it to a valid repo version id

After correcting that, you will need to restart the Ambari server before attempting the upgrade again.
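
The correction could look something like this (a sketch only; the id and repo version ids here are placeholders you would take from your own upgrade_history and repo_version tables):

update upgrade_history
   set from_repo_version_id = 2,    -- placeholder: a valid repo_version_id
       target_repo_version_id = 4   -- placeholder: a valid repo_version_id
 where id = 51;                     -- placeholder: the broken ZooKeeper row

followed by an ambari-server restart.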

Contributor

This is the latest upgrade_history:

86415-2018-08-09-8-49-23.png


Hi @Ilya Li ,

Can you please have a look at these tables:

select * from repo_version;
select * from stack;

I hope you will find some clue there.

Probably the stack_id of the newly registered 3.1.2 repo is pointing to the wrong row in the stack table.
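
One way to check that (a sketch, assuming the stock Ambari schema in which repo_version.stack_id references stack.stack_id):

select rv.repo_version_id, rv.version, rv.stack_id, s.stack_name, s.stack_version
  from repo_version rv
  join stack s on s.stack_id = rv.stack_id;

Both 3.1.x repo rows should point at the same HDF stack entry.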

As Amar suggested, I would downgrade the cluster first.

Then run the service checks on ZooKeeper, change some configs in ZooKeeper to make sure it works, and then proceed with the upgrade.

Contributor

I did the downgrade and ran the service check on ZooKeeper. Nothing helped. Which configs do I need to change in ZooKeeper?

stack:

86412-2018-08-09-8-31-05.png

repo_version:

86410-2018-08-09-8-26-16.png

cluster:

86411-2018-08-09-8-27-02.png


Hi @Ilya Li,

From those commands everything seems to be OK.

Can you investigate further with these commands:

1) select id, component_id, repo_version_id, state, user_name from ambari.servicecomponent_version;
2) select * from servicecomponentdesiredstate where service_name='ZOOKEEPER';
3) select * from repo_version where repo_version_id in (select desired_repo_version_id from servicecomponentdesiredstate where service_name='ZOOKEEPER');
4) select * from repo_version where repo_version_id in (select desired_repo_version_id from servicecomponentdesiredstate);
5) select * from servicecomponentdesiredstate where desired_repo_version_id not in (select repo_version_id from repo_version);

If you see two repo_versions in commands 3 and 4, or get any output for command 5 (which is empty in my env), you can suspect a wrong entry in the database and act accordingly.
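
If command 5 does return rows, a possible correction would look something like this (a sketch only; 4 is a placeholder for whatever repo_version_id your repo_version table shows as the valid target):

update servicecomponentdesiredstate
   set desired_repo_version_id = 4  -- placeholder: the valid repo_version_id
 where desired_repo_version_id not in (select repo_version_id from repo_version);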

Hope this helps.

Contributor

1)

86419-2018-08-09-12-11-14.png

2)

86420-2018-08-09-12-12-07.png

3) and 4)

86421-2018-08-09-12-13-26.png

5) empty

Any ideas?
