
Ambari not able to start services after HDP 2.2 to 2.3 upgrade

Explorer
@Alejandro Fernandez

We upgraded from Ambari 1.7 to Ambari 2.1.2 last week, and today we upgraded from HDP 2.2 to HDP 2.3 (manual upgrade). The final goal is to do a rolling upgrade from here to HDP 2.3.4.

The whole upgrade process went fine, and all the services were running fine until I issued

ambari-server set-current --cluster-name=*** --version-display-name=HDP-2.3.4.0

and restarted the cluster, as the Stacks and Versions page was showing a pending upgrade.

Every time I start a service I get the following error:

 File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py", line 70, in setup_users
    create_tez_am_view_acls()
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py", line 94, in create_tez_am_view_acls
    if not params.tez_am_view_acls.startswith("*"):
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 81, in __getattr__
    raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'tez.am.view-acls' was not found in configurations dictionary!
Error: Error: Unable to run the custom hook script ['/usr/bin/python2.6', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py', 'ANY', '/var/lib/ambari-agent/data/command-5003.json', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-ANY', '/var/lib/ambari-agent/data/structured-out-5003.json', 'INFO', '/var/lib/ambari-agent/tmp']

I did see a Jira and a fix, but we did not encounter this issue when we upgraded our dev environment:

https://issues.apache.org/jira/browse/AMBARI-13835

Moving to a new version is not that easy in our organization, so I am trying to see if there is any workaround for this issue.

Is there a way to roll back, or to do a rolling upgrade to 2.3.4? I know this is specific to Ambari, and I am trying to see how to fix it. I am able to shut down the services but not able to start any.

Thanks for looking into this

Prasad

1 ACCEPTED SOLUTION

Super Collaborator
@prasad nuamatha

The property tez.am.view-acls might be missing from tez-site.xml. Please add "tez.am.view-acls" to the custom tez-site section and set its value to "*" (it can also be empty).
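For reference, once the property is added through Ambari's custom tez-site section, the entry that ends up in tez-site.xml would look like the fragment below (a sketch; the "*" value grants Tez Application Master view access to all users, and an empty value is also accepted by the hook):

```xml
<property>
  <name>tez.am.view-acls</name>
  <value>*</value>
</property>
```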


3 REPLIES

Rising Star

Set the property "tez.am.view-acls" to "*" and then restart the services.


Explorer

Thank you, it did help. The bug-fix Jira threw me off. I am now running into another issue with Kafka, where I hit the following:

https://issues.apache.org/jira/browse/AMBARI-14147

Is there any way to fix this without applying the patch? I also see this stems from my not being able to reset the version using the command below. We have upgraded and finalized the upgrade, but Ambari's Stacks and Versions page continues to say the upgrade is in progress. When I run the following:

ambari-server set-current --cluster-name=*** --version-display-name=HDP-2.3.4.0

ERROR: Exiting with exit code 1. REASON: Error during setting current version. Http status code - 500. { "status" : 500, "message" : "org.apache.ambari.server.controller.spi.SystemException: Finalization failed. More details: \nSTDOUT: Begin finalizing the upgrade of cluster 001 to version 2.3.0.0-2557\nThe following 1 host(s) have not been upgraded to version 2.3.0.0-2557. Please install and upgrade the Stack Version on those hosts and try again.\nHosts: ns1.hadoop.com\n\nSTDERR: The following 1 host(s) have not been upgraded to version 2.3.0.0-2557. Please install and upgrade the Stack Version on those hosts and try again.\nHosts: ns1.hadoop.com\n" }
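The error says finalization is blocked because one host (ns1.hadoop.com) still reports an older stack version. A common way to check and correct this on the host itself is with hdp-select (a sketch, assuming the 2.3.0.0-2557 packages are already installed on that host; verify the exact build number against your repo before running, as these commands change the active stack for all components on the host):

```shell
# On ns1.hadoop.com: list the HDP versions hdp-select knows about
hdp-select versions

# If 2.3.0.0-2557 is installed but not active, point all components at it
hdp-select set all 2.3.0.0-2557

# Restart the agent so it reports the new version back to the Ambari server
ambari-agent restart
```

After the agent re-registers and the host shows the expected version, retry the ambari-server set-current command.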

Any thoughts ?

Thanks for checking