I am following the official guide to upgrade our Hortonworks cluster from HDP 2.4 to HDP 2.5 using Ambari 2.4.
These are the steps we followed:
1. Delete the Falcon and Atlas services. They weren't working on our cluster, so we decided to delete them first and reinstall them after upgrading to HDP 2.5. (We had HDFS, YARN, HIVE, OOZIE, SQOOP, KAFKA, FLUME, KNOX, SLIDER, TEZ, MAPREDUCE2, and ZOOKEEPER installed, in addition to Falcon and Atlas, which were deleted.)
2. Take backups, etc., install HDP 2.5, and start the upgrade wizard (we chose Express Upgrade).
3. Everything went smoothly until we ran into an issue with Oozie that we couldn't figure out. The upgrade wizard provided an option to proceed, so we did. At the last step it reported that Oozie had failed, leaving only two choices: downgrade, or fix the issue and retry. Since we couldn't figure out what was causing the issue, we decided to downgrade back to 2.4 first.
4. The downgrade back to HDP 2.4 went smoothly, without issues.
5. Since we are not dependent on Oozie yet, we thought we could delete Oozie and try the upgrade again, so we did (again, Express Upgrade).
6. This time it failed on the Hive install, which had succeeded in the first upgrade run. We couldn't figure out the cause of that either, so we decided to downgrade again.
7. The downgrade worked without issues.
8. We then tried to add HDP 2.4 Oozie back, as we are doing some POC work with it. That failed. It was fairly weird: a message box popped up indicating a server error, and then the UI froze. I had to close the tab where I was connected to Ambari and log in again.
9. We looked at the logs, and the failure seems to be related to this message, which we found in /var/log/ambari-server/ambari-server.log:
21 Oct 2016 04:32:54,069 ERROR [ambari-client-thread-200240] AmbariJpaLocalTxnInterceptor:188 - [DETAILED ERROR] Internal exception (2) : org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "serviceconfigmapping_pkey" Detail: Key (service_config_id, config_id)=(489, 136) already exists.
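For what it's worth, the duplicate-key error means Ambari tried to insert a (service_config_id, config_id) pair that already exists in its serviceconfigmapping table. A minimal sketch of the failure mode, simulated on an in-memory SQLite table (the real table lives in the Ambari Postgres database; the two-column key comes straight from the error message, everything else here is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE serviceconfigmapping (
        service_config_id INTEGER,
        config_id INTEGER,
        PRIMARY KEY (service_config_id, config_id)
    )
""")

# The pair from the error message. The first insert succeeds ...
conn.execute("INSERT INTO serviceconfigmapping VALUES (489, 136)")

# ... and re-inserting the same pair trips the unique constraint, which is
# what Ambari surfaced as a PSQLException when re-adding Oozie.
duplicate_rejected = False
try:
    conn.execute("INSERT INTO serviceconfigmapping VALUES (489, 136)")
except sqlite3.IntegrityError:
    duplicate_rejected = True

# Before touching anything in the real database, inspect the leftover row
# that blocks the insert (and take a database backup first).
row = conn.execute(
    "SELECT * FROM serviceconfigmapping"
    " WHERE service_config_id = 489 AND config_id = 136"
).fetchone()
print(duplicate_rejected, row)  # True (489, 136)
```

In other words, the failed delete/re-add cycle likely left a stale mapping row behind, and the Oozie re-install collides with it.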
How do we fix this so that we can proceed with Oozie, Hive, etc., and actually upgrade to HDP 2.5? Where should we look for error messages related to the Hive upgrade failures?
Sorry, I forgot to mention our environment: we are running RHEL 7 Linux servers (x86_64). If you need additional information, please let me know.
This seems to be an issue with the PostgreSQL DB. Please check -
Hi, yes, we use an external PostgreSQL database for Ambari. What do I do to fix the issue? Thanks.
We are unable to start the Ambari server for this reason, so we had to use the skip-database-check option:
2016-10-24 05:20:19,549 INFO - Checking for configs selected more than once
2016-10-24 05:20:19,551 ERROR - You have config(s), in cluster cmor2, that is(are) selected more than once in clusterconfigmapping table: capacity-scheduler,webhcat-log4j,ranger-hdfs-policymgr-ssl,ranger-yarn-policymgr-ssl,ranger-hive-policymgr-ssl,hcat-env,ssl-server,hdfs-log4j,ranger-hive-plugin-properties,ranger-yarn-security,hadoop-policy,ranger-yarn-audit,hive-exec-log4j,webhcat-env,hiveserver2-site,ssl-client,hive-log4j,ranger-hdfs-security,ranger-yarn-plugin-properties,ranger-hdfs-audit,ranger-hdfs-plugin-properties,yarn-log4j,hive-env,ranger-hive-audit,ranger-hive-security
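For others hitting this start-up check: a common remediation, after taking a database backup, is to deselect all but the newest mapping for each duplicated config type in clusterconfigmapping. Here is a sketch of that logic, simulated on an in-memory SQLite copy of the table. The column names (cluster_id, type_name, version_tag, create_timestamp, selected) are assumptions based on Ambari's schema, so verify them against your own database before attempting anything like this for real:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE clusterconfigmapping (
        cluster_id INTEGER,
        type_name TEXT,
        version_tag TEXT,
        create_timestamp INTEGER,
        selected INTEGER
    )
""")

# Two rows 'selected' at once for the same config type -- the inconsistency
# the Ambari database consistency check reports on start-up.
conn.executemany(
    "INSERT INTO clusterconfigmapping VALUES (?, ?, ?, ?, ?)",
    [
        (2, "yarn-log4j", "version1", 1000, 1),
        (2, "yarn-log4j", "version2", 2000, 1),
        (2, "hive-env",   "version1", 1500, 1),
    ],
)

# Reproduce the check: which config types are selected more than once?
dupes = [r[0] for r in conn.execute("""
    SELECT type_name FROM clusterconfigmapping
    WHERE selected = 1
    GROUP BY cluster_id, type_name
    HAVING COUNT(*) > 1
""")]
print(dupes)  # ['yarn-log4j']

# Remediation sketch: keep only the newest mapping selected per config
# type and deselect the older duplicates.
keepers = conn.execute("""
    SELECT cluster_id, type_name, MAX(create_timestamp)
    FROM clusterconfigmapping
    WHERE selected = 1
    GROUP BY cluster_id, type_name
""").fetchall()
for cluster_id, type_name, newest in keepers:
    conn.execute("""
        UPDATE clusterconfigmapping SET selected = 0
        WHERE cluster_id = ? AND type_name = ? AND create_timestamp < ?
    """, (cluster_id, type_name, newest))
```

On the real Postgres database the same idea translates to an UPDATE against clusterconfigmapping, but only attempt it on a backed-up database with the Ambari server stopped.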
Can you please tell me how you resolved the issue when you were unable to start Ambari? I am facing the same issue:
You have config(s), in cluster NOC_HDP_PROD, that is(are) selected more than once in clusterconfigmapping table: ranger-yarn-plugin-properties,ranger-yarn-policymgr-ssl,ranger-yarn-security,yarn-log4j,ranger-yarn-audit
We use Postgres as the database for both Ambari and Hive, and we made a backup of it prior to making any changes. We rolled back to the database copy from before we had any consistency or duplicate-key issues. After rolling back and starting from scratch again, we didn't have any issues.
Sorry, beyond that we didn't have to try anything else. Hopefully that helps.