Member since: 07-30-2019
Posts: 453
Kudos Received: 112
Solutions: 80
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2398 | 04-12-2023 08:58 PM |
|  | 4975 | 04-04-2023 11:48 PM |
|  | 1592 | 04-02-2023 10:24 PM |
|  | 3487 | 07-05-2019 08:38 AM |
|  | 3403 | 05-13-2019 06:21 AM |
12-24-2018
06:26 AM
Hi @Rajeswaran Govindan,

File "/var/lib/ambari-server/resources/scripts/./../stacks/HDP/2.6/services/stack_advisor.py", line 69
    if streamline_env:
                      ^
IndentationError: unindent does not match any outer indentation level

From the error it looks like someone has edited this Python file and the indentation was not kept correct, as it should be. Can you do an ls -lh on the folder /var/lib/ambari-server/resources/scripts/./../stacks/HDP/2.6/services/ and check the timestamp differences? If my assumption is correct you are using ambari-2.6.2; below is the Apache Ambari source of stack_advisor.py for this version. Please compare it with yours and correct the mistake:

https://github.com/apache/ambari/blob/release-2.6.2/ambari-server/src/main/resources/stacks/HDP/2.6/services/stack_advisor.py#L69

Please accept this answer if it's helpful.
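As a side note, a quick way (not part of Ambari) to check an edited stack_advisor.py for exactly this kind of indentation error is to try compiling it before restarting the server. This is a hedged sketch: the sample source below just reproduces the error class; point `check_syntax` at your real stack_advisor.py path to verify your own copy.

```python
def check_syntax(source, filename="<stack_advisor.py>"):
    """Return None if the source compiles, else a short error description."""
    try:
        compile(source, filename, "exec")
        return None
    except (IndentationError, SyntaxError) as e:
        return "%s: %s (line %s)" % (type(e).__name__, e.msg, e.lineno)

# Mixed indentation like this is what "unindent does not match any outer
# indentation level" complains about: line 3 dedents to a level (3 spaces)
# that matches neither of the outer levels (0 or 4 spaces).
bad = "def f():\n    x = 1\n   if x:\n        pass\n"
print(check_syntax(bad))
```

To check the real file you would read it first, e.g. `check_syntax(open("/var/lib/ambari-server/resources/scripts/../stacks/HDP/2.6/services/stack_advisor.py").read())`.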
12-21-2018
09:55 AM
1 Kudo
Hi @Michael Bronson, refer to this article: https://community.hortonworks.com/articles/227486/how-to-fix-database-consistency-warnings-in-ambari.html The solution given there works irrespective of any specific config; it will fix your DB warning too. It takes a backup of your orphaned configs and sets their mapped value to 1. Please let me know if it doesn't work. You can take a backup first if you are concerned about running it.
12-21-2018
09:06 AM
Hi @Michael Bronson,
Basically this error happens due to some orphaned configs in your database that are not linked to any service.
You can ignore those warnings if required, or fix them as mentioned in the article above.
You can take a backup of your database if you are worried about doing it.
And yes, these warnings are related.
Please accept the answer if it helped.
12-21-2018
07:37 AM
Hi @Michael Bronson, you need to log in to the Ambari database to perform the steps mentioned in the article. If your database is PostgreSQL you can log in like this:

[root@alatestambari1 tmp]# psql -U ambari ambari
Password for user ambari: <default password is bigdata>

and execute the commands as mentioned there. If you are using a non-default database, the login depends on your database; for example, MySQL:

[root@alatestambari1 tmp]# mysql -u ambari -p
Enter password: <password of user ambari>
mysql> use ambari;

Please see if this is helpful.
12-21-2018
03:10 AM
1 Kudo
Hi @Michael Bronson,
Can you please look in /var/log/ambari-server/ambari-server-check-database.log and see what exception you are hitting?
You can refer to the following article if you are having the same warning: https://community.hortonworks.com/articles/227486/how-to-fix-database-consistency-warnings-in-ambari.html
If this is not the warning, please give the server log snippet in this thread and tag me too 🙂
If this comment helps you, please accept my answer.
12-20-2018
06:10 PM
The above article doesn't work in ambari-2.7.3 due to a bug:

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 119, in <module>
RemovePreviousStacks().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
method(env)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 49, in actionexecute
self.remove_stack_version(structured_output, low_version)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 54, in remove_stack_version
packages_to_remove = self.get_packages_to_remove(version)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 77, in get_packages_to_remove
all_installed_packages = self.pkg_provider.all_installed_packages()
AttributeError: 'YumManager' object has no attribute 'all_installed_packages'

Please refer to this article if you face the same bug: https://community.hortonworks.com/articles/230893/remove-old-stack-versions-script-doesnt-work-in-am.html
12-20-2018
06:09 PM
6 Kudos
Disclaimer: This article is based on my personal experience and knowledge. Don't take it as standard guidelines; understand the concept and adapt it to your environment's best practices and use case.

Ambari unofficially supports a curl command which helps delete the old stacks and packages from each of the hosts. The script is described in https://issues.apache.org/jira/browse/AMBARI-18435 and also in this article: https://community.hortonworks.com/articles/202904/how-to-remove-all-previous-version-hdp-directories.html

But this script doesn't work in the ambari-2.7.3 version.

Root cause: the script fails with the below exception:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 119, in <module>
RemovePreviousStacks().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
method(env)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 49, in actionexecute
self.remove_stack_version(structured_output, low_version)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 54, in remove_stack_version
packages_to_remove = self.get_packages_to_remove(version)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 77, in get_packages_to_remove
all_installed_packages = self.pkg_provider.all_installed_packages()
AttributeError: 'YumManager' object has no attribute 'all_installed_packages'
This is due to the fix for https://issues.apache.org/jira/browse/AMBARI-21738 (https://github.com/apache/ambari/commit/e7c4ed761072256dabd881242a0eea40d94cf8af); that fix was not carried over properly into remove_previous_stacks.py.

Workaround:

1) Go to each ambari-agent node and edit the file remove_previous_stacks.py:
[root@asn1 current]#vi /var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py
2) Go to line 77 and change the line from:
all_installed_packages = self.pkg_provider.all_installed_packages()
to
all_installed_packages = self.pkg_provider.installed_packages()
3) Retry the operation via curl again, for example:
curl 'http://asn1.openstacklocal:8080/api/v1/clusters/asnaik/requests' -u admin:admin -H "X-Requested-By: ambari" -X POST -d'{"RequestInfo":{"context":"remove_previous_stacks", "action" : "remove_previous_stacks", "parameters" : {"version":"3.0.1.1-84"}}, "Requests/resource_filters": [{"hosts":"asn1.openstacklocal"}]}'
The operation should succeed now. If you are facing any issue please comment in this thread and tag me. Please upvote this article if you find it helpful.
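If you have many agent nodes, the one-line edit above can also be applied in bulk. This is a hedged sketch, not an official Ambari tool: it performs the same `all_installed_packages()` → `installed_packages()` replacement described in step 2. The agent-side path is the real one from the workaround; the demo at the bottom patches a throwaway temp file rather than the live script.

```python
import tempfile

# Real agent-side location of the script (from the workaround above):
PATH = "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py"
OLD = "self.pkg_provider.all_installed_packages()"
NEW = "self.pkg_provider.installed_packages()"

def patch_file(path):
    """Apply the one-line workaround; return True if a change was made."""
    with open(path) as f:
        text = f.read()
    if OLD not in text:
        return False  # already patched, or a different Ambari version
    with open(path, "w") as f:
        f.write(text.replace(OLD, NEW))
    return True

# Demo on a temp file containing the offending line 77:
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
    tmp.write("all_installed_packages = " + OLD + "\n")
patched = patch_file(tmp.name)
print(patched)
```

On a real cluster you would run `patch_file(PATH)` on each agent host (via your usual parallel-ssh tooling) before retrying the curl request.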
12-20-2018
06:01 PM
Hi @Tim Verhoeven,

Removing old stacks and versions is something I personally wouldn't suggest, because if at any point you want to downgrade your HDP you might need them. But as you are facing the above error, please see whether the change below helps. I was facing the same issue, and changing the line below helped me get past the above exception.

Steps:

1) Go to each ambari-agent node:
[root@asn1 current]#vi /var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py
2) Go to line 77 and change the line from:
all_installed_packages = self.pkg_provider.all_installed_packages()
to
all_installed_packages = self.pkg_provider.installed_packages()
3) Retry the operation via curl again:
curl 'http://asn1.openstacklocal:8080/api/v1/clusters/asnaik/requests' -u admin:admin -H "X-Requested-By: ambari" -X POST -d'{"RequestInfo":{"context":"remove_previous_stacks", "action" : "remove_previous_stacks", "parameters" : {"version":"3.0.1.1-84"}}, "Requests/resource_filters": [{"hosts":"asn1.openstacklocal"}]}'
The operation should now succeed in the UI. I have created an article in this regard: https://community.hortonworks.com/articles/230893/remove-old-stack-versions-script-doesnt-work-in-am.html

Please accept this answer and upvote the article if this helps you.
12-20-2018
04:25 PM
3 Kudos
Disclaimer: This article is based on my personal experience and knowledge. Don't take it as standard guidelines; understand the concept and adapt it to your environment's best practices and use case. It is always suggested to contact Hortonworks Support if you have trouble in your production cluster.

Problem Statement: My cluster was upgraded from ambari-2.7.1 to ambari-2.7.3 and from HDP-3.0.1 to HDP-3.1. Whenever I try to change some configs and save them, the below error is shown:
Error message: Stack Advisor reported an error. Exit Code: 2. Error: KeyError: 'beeline_jdbc_url_default'
StdOut file: /var/run/ambari-server/stack-recommendations/12/stackadvisor.out
StdErr file: /var/run/ambari-server/stack-recommendations/12/stackadvisor.err
And in ambari-server.log I can see:
Caused by: org.apache.ambari.server.api.services.stackadvisor.StackAdvisorException: Stack Advisor reported an error. Exit Code: 2. Error: KeyError: 'beeline_jdbc_url_default'
StdOut file: /var/run/ambari-server/stack-recommendations/12/stackadvisor.out
StdErr file: /var/run/ambari-server/stack-recommendations/12/stackadvisor.err
at org.apache.ambari.server.api.services.stackadvisor.StackAdvisorRunner.processLogs(StackAdvisorRunner.java:149)
at org.apache.ambari.server.api.services.stackadvisor.StackAdvisorRunner.runScript(StackAdvisorRunner.java:89)
at org.apache.ambari.server.api.services.stackadvisor.commands.StackAdvisorCommand.invoke(StackAdvisorCommand.java:314)
at org.apache.ambari.server.api.services.stackadvisor.StackAdvisorHelper.validate(StackAdvisorHelper.java:94)
at org.apache.ambari.server.controller.internal.ValidationResourceProvider.createResources(ValidationResourceProvider.java:127)
... 105 more
And in stackadvisor.err I can see the below error:
[root@asn1 ~]# cat /var/run/ambari-server/stack-recommendations/12/stackadvisor.err
Traceback (most recent call last):
File "/var/lib/ambari-server/resources/scripts/stack_advisor.py", line 190, in <module>
main(sys.argv)
File "/var/lib/ambari-server/resources/scripts/stack_advisor.py", line 142, in main
result = stackAdvisor.validateConfigurations(services, hosts)
File "/var/lib/ambari-server/resources/scripts/../stacks/stack_advisor.py", line 1079, in validateConfigurations
validationItems = self.getConfigurationsValidationItems(services, hosts)
File "/var/lib/ambari-server/resources/scripts/../stacks/stack_advisor.py", line 1468, in getConfigurationsValidationItems
items.extend(self.getConfigurationsValidationItemsForService(configurations, recommendedDefaults, service, services, hosts))
File "/var/lib/ambari-server/resources/scripts/../stacks/stack_advisor.py", line 1521, in getConfigurationsValidationItemsForService
items.extend(serviceAdvisor.getServiceConfigurationsValidationItems(configurations, recommendedDefaults, services, hosts))
File "/var/lib/ambari-server/resources/stacks/HDP/3.1/services/HIVE/service_advisor.py", line 143, in getServiceConfigurationsValidationItems
return validator.validateListOfConfigUsingMethod(configurations, recommendedDefaults, services, hosts, validator.validators)
File "/var/lib/ambari-server/resources/scripts/../stacks/stack_advisor.py", line 1491, in validateListOfConfigUsingMethod
validationItems = method(siteProperties, siteRecommendations, configurations, services, hosts)
File "/var/lib/ambari-server/resources/stacks/HDP/3.1/services/HIVE/service_advisor.py", line 785, in validateHiveConfigurationsEnvFromHDP30
beeline_jdbc_url_default = hive_env["beeline_jdbc_url_default"]
KeyError: 'beeline_jdbc_url_default'
Root Cause: This issue happens because hive-env is missing a parameter called "beeline_jdbc_url_default", which by default gets added while upgrading Ambari. This issue will happen only if you have upgraded from ambari-2.7.1 to ambari-2.7.3.

Solution: Go to the Ambari server and execute the below command to add the missing config via configs.py (please note this cannot be added via the UI):

[root@asn1 ~]# /var/lib/ambari-server/resources/scripts/configs.py -l <AMBARI_IP> -t 8080 -u <ADMIN_USERNAME> -p <ADMIN_PASSWORD> -a set -n <CLUSTER_NAME> -c hive-env -k beeline_jdbc_url_default -v container

For example:

[root@asn1 ~]# /var/lib/ambari-server/resources/scripts/configs.py -l asn1.openstacklocal -t 8080 -u admin -p admin -a set -n asnaik -c hive-env -k beeline_jdbc_url_default -v container
2018-12-20 16:21:33,820 INFO ### Performing "set":
2018-12-20 16:21:33,820 INFO ### new property - "beeline_jdbc_url_default":"container"
2018-12-20 16:21:33,835 INFO ### on (Site:hive-env, Tag:version1545035376545)
2018-12-20 16:21:33,843 INFO ### PUTting json into: doSet_version1545322893843244.json
2018-12-20 16:21:34,054 INFO ### NEW Site:hive-env, Tag:version1545322893843244

Go to the Ambari UI and restart the services as requested. Now you can save any configs and there won't be any Consistency Check Failed error message.
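For intuition, the failure mode and the fix can be reduced to a few lines. This is a hedged illustration, not Ambari code: the stack traceback shows the HDP 3.1 Hive service advisor indexing hive-env directly with `hive_env["beeline_jdbc_url_default"]`, so a hive-env carried over from Ambari 2.7.1 without that key raises KeyError. The dicts here are toy data standing in for the real cluster config.

```python
def validate_hive_env(hive_env):
    # Direct indexing, as validateHiveConfigurationsEnvFromHDP30 does in the
    # traceback above; a missing key raises KeyError instead of a validation
    # message.
    return hive_env["beeline_jdbc_url_default"]

# A hive-env from an upgraded cluster, missing the new key:
old_hive_env = {"hive_user": "hive"}
try:
    validate_hive_env(old_hive_env)
    failed = False
except KeyError:
    failed = True

# The configs.py call above effectively adds the property that a fresh
# Ambari 2.7.3 upgrade would have created:
old_hive_env["beeline_jdbc_url_default"] = "container"
value = validate_hive_env(old_hive_env)
print(failed, value)
```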
12-20-2018
06:01 AM
2 Kudos
Disclaimer: This article is based on my personal experience and knowledge. Don't take it as standard guidelines; understand the concept and adapt it to your environment's best practices and use case. Always contact Hortonworks Support if it is a production cluster.

Problem Statement: I have installed HDF-3.1 and ambari-2.6.2.2, and I am upgrading my Ambari to 2.7.0.0 in order to upgrade my HDF to 3.2+ versions. It fails with the below exception:
INFO: about to run command: /usr/java/jdk1.8.0_162/bin/java -cp '/etc/ambari-server/conf:/usr/lib/ambari-server/*:/usr/java/latest/mysql-connector-java.jar:/usr/share/java/mysql-connector-java.jar' org.apache.ambari.server.upgrade.SchemaUpgradeHelper > /var/log/ambari-server/ambari-server.out 2>&1
INFO:
process_pid=16599
Traceback (most recent call last):
File "/usr/sbin/ambari-server.py", line 1060, in <module>
mainBody()
File "/usr/sbin/ambari-server.py", line 1030, in mainBody
main(options, args, parser)
File "/usr/sbin/ambari-server.py", line 980, in main
action_obj.execute()
File "/usr/sbin/ambari-server.py", line 79, in execute
self.fn(*self.args, **self.kwargs)
File "/usr/lib/ambari-server/lib/ambari_server/serverUpgrade.py", line 262, in upgrade
retcode = run_schema_upgrade(args)
File "/usr/lib/ambari-server/lib/ambari_server/serverUpgrade.py", line 162, in run_schema_upgrade
upgrade_response = json.loads(stdout)
File "/usr/lib/ambari-server/lib/ambari_simplejson/__init__.py", line 307, in loads
return _default_decoder.decode(s)
File "/usr/lib/ambari-server/lib/ambari_simplejson/decoder.py", line 335, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/ambari-server/lib/ambari_simplejson/decoder.py", line 353, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
On inspecting the ambari-server log I found this exception:
2018-09-04 06:34:55,967 ERROR [main] SchemaUpgradeHelper:238 - Upgrade failed. java.lang.RuntimeException: Trying to create a ServiceComponent not recognized in stack info, clusterName=c174, serviceName=AMBARI_INFRA, componentName=INFRA_SOLR_CLIENT, stackInfo=HDF-3.1 at org.apache.ambari.server.state.ServiceComponentImpl.updateComponentInfo
PS: here c174 is my cluster name.

Root cause: Starting from ambari-2.7.0 the order of the Ambari upgrade has changed: first we need to run the upgrade-mpack command, and then perform the ambari-server upgrade. The order of execution will be:
ambari-server upgrade-mpack \
--mpack=http://public-repo-1.hortonworks.com/HDF/centos7/3.x/updates/<version>/tars/hdf_ambari_mp/hdf-ambari-mpack-<version>-<build-number>.tar.gz \
--verbose
ambari-server upgrade
Please refer to the documentation before the upgrade:

HDF-3.3: https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.3.0/ambari-managed-hdf-upgrade/content/hdf-upgrade_ambari_and_the_hdf_management_pack.html
HDF-3.2: https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.2.0/ambari-managed-hdf-upgrade/content/hdf-upgrade_ambari_and_the_hdf_management_pack.html
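It is worth noting why the surface error is a JSON failure rather than the real one. The traceback shows serverUpgrade.py's run_schema_upgrade() parsing the schema-upgrade stdout with json.loads(); when SchemaUpgradeHelper fails (here, because the mpack was not upgraded first), stdout is an error log rather than JSON, so json.loads raises "No JSON object could be decoded" and masks the underlying failure. A minimal reproduction, where the log line is illustrative rather than captured from a real run:

```python
import json

# Non-JSON stdout, such as SchemaUpgradeHelper would emit on failure:
stdout = "ERROR [main] SchemaUpgradeHelper:238 - Upgrade failed."
try:
    json.loads(stdout)
    decoded = True
except ValueError:
    # ambari_simplejson on Python 2 raises ValueError; Python 3's
    # json.JSONDecodeError is a ValueError subclass, so this catch works
    # in both worlds.
    decoded = False
print(decoded)
```

So whenever `ambari-server upgrade` ends with "No JSON object could be decoded", the real diagnosis lives in /var/log/ambari-server/ambari-server.log, as in this case.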