I have successfully upgraded Ambari to 2.7.0.0.
During the HDP upgrade the process stopped at "Update Target Repositories".
Updating the desired repository version to 3.0.0.0-1634 for all cluster services.
with the full error:
org.apache.ambari.server.ServiceNotFoundException: Service not found, clusterName=HDP26, serviceName=SPARK
    at org.apache.ambari.server.state.cluster.ClusterImpl.getService(ClusterImpl.java:883)
    at org.apache.ambari.server.state.UpgradeHelper.setDesiredRepositories(UpgradeHelper.java:922)
    at org.apache.ambari.server.state.UpgradeHelper.updateDesiredRepositoriesAndConfigs(UpgradeHelper.java:894)
    at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:118)
    at org.apache.ambari.server.serveraction.upgrades.UpdateDesiredRepositoryAction.updateDesiredRepositoryVersion(UpdateDesiredRepositoryAction.java:160)
    at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:128)
    at org.apache.ambari.server.serveraction.upgrades.UpdateDesiredRepositoryAction.execute(UpdateDesiredRepositoryAction.java:96)
    at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:550)
    at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:466)
    at java.lang.Thread.run(Thread.java:745)
Any idea how to get around it? Many thanks!
The Spark 1.x service (SPARK) was removed in HDP 3.0. It looks like the Ambari DB still holds a reference to the SPARK service after the upgrade.
Remove the SPARK service from the cluster and try the upgrade again.
Thank you for your feedback!
I did remove the SPARK service using Ambari.
Continuing with the upgrade process still fails at the same stage. I also checked some Postgres DB tables: hostcomponentdesiredstate, hostcomponentstate, servicecomponentdesiredstate, clusterservices. The service is not there.
Any other pointers please?
Or maybe there is a way to start the process from the beginning?
We usually have the service info in five tables. Please verify whether any of these tables has an entry for SPARK.
You can cancel the upgrade, make sure the SPARK service is removed completely from the DB, and redo the steps.
SELECT * FROM servicecomponentdesiredstate WHERE service_name = 'SPARK';
SELECT * FROM hostcomponentdesiredstate WHERE service_name = 'SPARK';
SELECT * FROM hostcomponentstate WHERE service_name = 'SPARK';
SELECT * FROM servicedesiredstate WHERE service_name = 'SPARK';
SELECT * FROM clusterservices WHERE service_name = 'SPARK';
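If any of those queries does return rows, a sketch of the manual cleanup (an assumption on my part, not an official procedure — take a DB backup and stop ambari-server first) would be to delete the leftover rows from the same five tables:

```sql
-- Hypothetical cleanup: remove leftover SPARK rows.
-- Delete host/component state rows before the parent clusterservices row,
-- so foreign-key references are removed in dependency order.
DELETE FROM hostcomponentstate WHERE service_name = 'SPARK';
DELETE FROM hostcomponentdesiredstate WHERE service_name = 'SPARK';
DELETE FROM servicecomponentdesiredstate WHERE service_name = 'SPARK';
DELETE FROM servicedesiredstate WHERE service_name = 'SPARK';
DELETE FROM clusterservices WHERE service_name = 'SPARK';
```

Restart ambari-server after the cleanup so it picks up the new state.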
I just did an upgrade from HDP 2.6.4 to HDP 3.0.0 with the SPARK service installed; Ambari deletes the service itself during the upgrade process, so no manual delete step should be needed.
Checked all the tables, there are no entries.
Is there any way I can start the process from scratch?
I am currently stuck at the "Update Target Repositories" step with the error described initially.
SELECT upgrade_id, cluster_id, from_version, to_version, direction, upgrade_package, upgrade_type FROM upgrade;
SELECT from_version, to_version, direction, skip_failures, skip_sc_failures FROM upgrade;
Basically, you have to populate the upgrade_type and upgrade_package columns based on the versions used.
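For example (a sketch only — the upgrade_id and the pack/type values here are illustrative; take the real upgrade_id from the first query and pick the type that matches how you ran the upgrade, e.g. ROLLING or NON_ROLLING):

```sql
-- Hypothetical: fill in the missing columns on the stuck upgrade record.
UPDATE upgrade
SET upgrade_type    = 'NON_ROLLING',
    upgrade_package = 'upgrade-3.0'
WHERE upgrade_id = 1;
```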
This happens because a record may be missing from the repo_version table. If you insert the record before performing the Ambari upgrade, it should work.
Make sure you have a record for each one of the versions.
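To see which records already exist (using the same columns as the INSERT below):

```sql
-- List all registered repository versions to spot the missing one.
SELECT repo_version_id, version, display_name, upgrade_package, stack_id
FROM repo_version;
```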
-- Assuming 18.104.22.168-292 is missing
INSERT INTO repo_version (repo_version_id, version, display_name, upgrade_package, repositories, stack_id)
VALUES (1, '22.214.171.124-292', 'HDP-126.96.36.199-292', 'upgrade-3.0', '',
        (SELECT stack_id FROM stack WHERE stack_version = '3.0'));
Now check the new output:
SELECT from_version, to_version FROM upgrade;