Member since: 07-30-2019
453 Posts
112 Kudos Received
80 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2950 | 04-12-2023 08:58 PM |
| | 5588 | 04-04-2023 11:48 PM |
| | 1934 | 04-02-2023 10:24 PM |
| | 4134 | 07-05-2019 08:38 AM |
| | 3855 | 05-13-2019 06:21 AM |
12-24-2018
02:04 PM
1 Kudo
Disclaimer: This article is based on my personal experience and knowledge. Don't take it as a standard guideline; understand the concept and adapt it to your environment's best practices and use case. Always contact Hortonworks Support if it is a production cluster.

Problem Description: I am currently using Ambari 2.6.2 with Python 2.7.9. When I try to update some configs via configs.py, it fails with the error below:

[root@asnaik1 certs]# /var/lib/ambari-server/resources/scripts/configs.py --port=8443 --action=set --host=asnaik1.openstacklocal --cluster=asnaik --config-type=kafka-env --user=admin --password=admin --key=kafka_log_dir --value=/tmp --protocol=https
2018-12-05 10:24:57,615 INFO ### Performing "set":
2018-12-05 10:24:57,615 INFO ### new property - "kafka_log_dir":"/tmp"
Traceback (most recent call last):
File "/var/lib/ambari-server/resources/scripts/configs.py", line 364, in <module>
sys.exit(main())
File "/var/lib/ambari-server/resources/scripts/configs.py", line 343, in main
return set_properties(cluster, config_type, action_args, accessor)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 237, in set_properties
update_config(cluster, config_type, updater, accessor)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 131, in update_config
properties, attributes = config_updater(cluster, config_type, accessor)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 136, in update
properties, attributes = get_current_config(cluster, config_type, accessor)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 123, in get_current_config
config_tag = get_config_tag(cluster, config_type, accessor)
File "/var/lib/ambari-server/resources/scripts/configs.py", line 94, in get_config_tag
response = accessor(DESIRED_CONFIGS_URL.format(cluster))
File "/var/lib/ambari-server/resources/scripts/configs.py", line 89, in do_request
raise Exception('Problem with accessing api. Reason: {0}'.format(exc))
Exception: Problem with accessing api. Reason: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)>

Root cause: Starting with Python 2.7.9, Python validates SSL certificates by default, and if no valid SSL certificate is configured the request fails. In Python versions below 2.7.9 this validation is disabled by default. More details can be found in this JIRA: https://issues.apache.org/jira/browse/AMBARI-23893. The JIRA is fixed in Ambari 2.7, so all Ambari versions below 2.7 running Python 2.7.9 or later hit this issue.

Workaround: The fix for AMBARI-23893 is here: https://github.com/apache/ambari/pull/1314/files. We can back up /var/lib/ambari-server/resources/scripts/configs.py and use the fixed configs.py instead.

Steps:
1) Navigate to /var/lib/ambari-server/resources/scripts/
cd /var/lib/ambari-server/resources/scripts/
2) Take a backup of configs.py:
mv configs.py configs.py_Backup
3) Download the raw GitHub content with the fix:
wget https://raw.githubusercontent.com/dlysnichenko/ambari/75e0c4a6e5f2c30483bf2f783c1af0c38f3b2623/ambari-server/src/main/resources/scripts/configs.py
4) Give the necessary permissions:
chmod 750 configs.py
5) Retry the operation with the --unsafe option:
[root@asnaik1 certs]# /var/lib/ambari-server/resources/scripts/configs.py --port=8443 --action=set --host=asnaik1.openstacklocal --cluster=asnaik --config-type=kafka-env --user=admin --password=admin --key=kafka_log_dir --value=/tmp --protocol=https --unsafe

NOTE: Remember to add the --unsafe option, as that is the change introduced by the Apache JIRA.
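For context, what the patched configs.py's --unsafe flag does, conceptually, is open the HTTPS connection with an unverified SSL context, restoring the pre-2.7.9 behaviour. A minimal sketch of that idea in Python 3 (the function name here is illustrative, not the actual configs.py code):

```python
import ssl
import urllib.request


def open_url(url, unsafe=False):
    """Open a URL; with unsafe=True, skip SSL certificate validation,
    which is the behaviour an --unsafe style flag enables."""
    if unsafe:
        # An unverified context disables certificate and hostname checks,
        # matching what Python versions before 2.7.9 did by default.
        ctx = ssl._create_unverified_context()
        return urllib.request.urlopen(url, context=ctx)
    # Default: certificates are validated (Python 2.7.9+ behaviour).
    return urllib.request.urlopen(url)
```

Skipping verification is acceptable for a one-off admin script against a cluster you control, but it does remove protection against man-in-the-middle attacks, which is why the flag is opt-in.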
12-20-2018
06:10 PM
The above article doesn't work in Ambari 2.7.3 due to a bug:

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 119, in <module>
RemovePreviousStacks().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
method(env)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 49, in actionexecute
self.remove_stack_version(structured_output, low_version)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 54, in remove_stack_version
packages_to_remove = self.get_packages_to_remove(version)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 77, in get_packages_to_remove
all_installed_packages = self.pkg_provider.all_installed_packages()
AttributeError: 'YumManager' object has no attribute 'all_installed_packages'

Please refer to this article if you face the same bug: https://community.hortonworks.com/articles/230893/remove-old-stack-versions-script-doesnt-work-in-am.html
12-20-2018
06:09 PM
6 Kudos
Disclaimer: This article is based on my personal experience and knowledge. Don't take it as a standard guideline; understand the concept and adapt it to your environment's best practices and use case.

Ambari unofficially supports a curl command that helps delete old stacks and packages from each host. The script is described in https://issues.apache.org/jira/browse/AMBARI-18435 and in this article: https://community.hortonworks.com/articles/202904/how-to-remove-all-previous-version-hdp-directories.html. However, the script does not work in Ambari 2.7.3.

Root cause: The script fails with the exception below:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 119, in <module>
RemovePreviousStacks().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
method(env)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 49, in actionexecute
self.remove_stack_version(structured_output, low_version)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 54, in remove_stack_version
packages_to_remove = self.get_packages_to_remove(version)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py", line 77, in get_packages_to_remove
all_installed_packages = self.pkg_provider.all_installed_packages()
AttributeError: 'YumManager' object has no attribute 'all_installed_packages'
This is caused by the fix for https://issues.apache.org/jira/browse/AMBARI-21738 (https://github.com/apache/ambari/commit/e7c4ed761072256dabd881242a0eea40d94cf8af); the change was not handled properly in remove_previous_stacks.py.

Workaround:
1) On each ambari-agent node, edit the file remove_previous_stacks.py:
[root@asn1 current]#vi /var/lib/ambari-agent/cache/custom_actions/scripts/remove_previous_stacks.py
2) Go to line 77 and change the line from:
all_installed_packages = self.pkg_provider.all_installed_packages()
to
all_installed_packages = self.pkg_provider.installed_packages()
3) Retry the operation via curl again, for example:
curl 'http://asn1.openstacklocal:8080/api/v1/clusters/asnaik/requests' -u admin:admin -H "X-Requested-By: ambari" -X POST -d'{"RequestInfo":{"context":"remove_previous_stacks", "action" : "remove_previous_stacks", "parameters" : {"version":"3.0.1.1-84"}}, "Requests/resource_filters": [{"hosts":"asn1.openstacklocal"}]}'
The operation should now succeed. If you face any issue, please comment in this thread and tag me. Please upvote this article if you found it helpful.
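If you would rather not patch a specific line number, the same fix can be expressed as a version-tolerant lookup: call all_installed_packages() when the provider has it, and fall back to installed_packages() otherwise. A hedged sketch (the helper and the classes in the usage below are stand-ins, not the real YumManager):

```python
def list_installed(pkg_provider):
    """Return installed packages from whichever method this
    package-manager provider version exposes."""
    # AMBARI-21738 renamed the provider method; try the new name first
    # and fall back to the older installed_packages() when it is absent.
    method = getattr(pkg_provider, "all_installed_packages", None)
    if method is None:
        method = pkg_provider.installed_packages
    return method()
```

This way the custom action works against both agent versions instead of breaking when the method name changes.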
12-20-2018
04:25 PM
3 Kudos
Disclaimer: This article is based on my personal experience and knowledge. Don't take it as a standard guideline; understand the concept and adapt it to your environment's best practices and use case. It is always suggested to contact Hortonworks Support if you have trouble in your production cluster.

Problem Statement: My cluster was upgraded from Ambari 2.7.1 to Ambari 2.7.3 and from HDP 3.0.1 to HDP 3.1. Whenever I try to change some configs and save them, the error below is shown:
Error message: Stack Advisor reported an error. Exit Code: 2. Error: KeyError: 'beeline_jdbc_url_default'
StdOut file: /var/run/ambari-server/stack-recommendations/12/stackadvisor.out
StdErr file: /var/run/ambari-server/stack-recommendations/12/stackadvisor.err
and in ambari-server.log I can see:
Caused by: org.apache.ambari.server.api.services.stackadvisor.StackAdvisorException: Stack Advisor reported an error. Exit Code: 2. Error: KeyError: 'beeline_jdbc_url_default'
StdOut file: /var/run/ambari-server/stack-recommendations/12/stackadvisor.out
StdErr file: /var/run/ambari-server/stack-recommendations/12/stackadvisor.err
at org.apache.ambari.server.api.services.stackadvisor.StackAdvisorRunner.processLogs(StackAdvisorRunner.java:149)
at org.apache.ambari.server.api.services.stackadvisor.StackAdvisorRunner.runScript(StackAdvisorRunner.java:89)
at org.apache.ambari.server.api.services.stackadvisor.commands.StackAdvisorCommand.invoke(StackAdvisorCommand.java:314)
at org.apache.ambari.server.api.services.stackadvisor.StackAdvisorHelper.validate(StackAdvisorHelper.java:94)
at org.apache.ambari.server.controller.internal.ValidationResourceProvider.createResources(ValidationResourceProvider.java:127)
... 105 more
And in stackadvisor.err I can see the error below:
[root@asn1 ~]# cat /var/run/ambari-server/stack-recommendations/12/stackadvisor.err
Traceback (most recent call last):
File "/var/lib/ambari-server/resources/scripts/stack_advisor.py", line 190, in <module>
main(sys.argv)
File "/var/lib/ambari-server/resources/scripts/stack_advisor.py", line 142, in main
result = stackAdvisor.validateConfigurations(services, hosts)
File "/var/lib/ambari-server/resources/scripts/../stacks/stack_advisor.py", line 1079, in validateConfigurations
validationItems = self.getConfigurationsValidationItems(services, hosts)
File "/var/lib/ambari-server/resources/scripts/../stacks/stack_advisor.py", line 1468, in getConfigurationsValidationItems
items.extend(self.getConfigurationsValidationItemsForService(configurations, recommendedDefaults, service, services, hosts))
File "/var/lib/ambari-server/resources/scripts/../stacks/stack_advisor.py", line 1521, in getConfigurationsValidationItemsForService
items.extend(serviceAdvisor.getServiceConfigurationsValidationItems(configurations, recommendedDefaults, services, hosts))
File "/var/lib/ambari-server/resources/stacks/HDP/3.1/services/HIVE/service_advisor.py", line 143, in getServiceConfigurationsValidationItems
return validator.validateListOfConfigUsingMethod(configurations, recommendedDefaults, services, hosts, validator.validators)
File "/var/lib/ambari-server/resources/scripts/../stacks/stack_advisor.py", line 1491, in validateListOfConfigUsingMethod
validationItems = method(siteProperties, siteRecommendations, configurations, services, hosts)
File "/var/lib/ambari-server/resources/stacks/HDP/3.1/services/HIVE/service_advisor.py", line 785, in validateHiveConfigurationsEnvFromHDP30
beeline_jdbc_url_default = hive_env["beeline_jdbc_url_default"]
KeyError: 'beeline_jdbc_url_default'
Root Cause: hive-env is missing a parameter called "beeline_jdbc_url_default", which is normally added during the Ambari upgrade. This issue occurs only if you upgraded from Ambari 2.7.1 to Ambari 2.7.3.

Solution: Go to the Ambari server and execute the command below to add the missing config via configs.py (please note it cannot be added via the UI):

[root@asn1 ~]# /var/lib/ambari-server/resources/scripts/configs.py -l <AMBARI_IP> -t 8080 -u <ADMIN_USERNAME> -p <ADMIN_PASSWORD> -a set -n <CLUSTER_NAME> -c hive-env -k beeline_jdbc_url_default -v container

For example:

[root@asn1 ~]# /var/lib/ambari-server/resources/scripts/configs.py -l asn1.openstacklocal -t 8080 -u admin -p admin -a set -n asnaik -c hive-env -k beeline_jdbc_url_default -v container
2018-12-20 16:21:33,820 INFO ### Performing "set":
2018-12-20 16:21:33,820 INFO ### new property - "beeline_jdbc_url_default":"container"
2018-12-20 16:21:33,835 INFO ### on (Site:hive-env, Tag:version1545035376545)
2018-12-20 16:21:33,843 INFO ### PUTting json into: doSet_version1545322893843244.json
2018-12-20 16:21:34,054 INFO ### NEW Site:hive-env, Tag:version1545322893843244

Then go to the Ambari UI and restart the services as requested. Now you can save any configs, and there won't be a "Consistency Check Failed" error message.
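The KeyError in service_advisor.py comes from indexing the hive-env dict directly (hive_env["beeline_jdbc_url_default"]). The configs.py command above fixes the data; the same crash could also be avoided defensively by reading the property with a default. A sketch of that defensive lookup (a hypothetical helper, not the shipped advisor code; "container" is the value the fix inserts):

```python
def get_beeline_jdbc_url_default(hive_env):
    """Read beeline_jdbc_url_default from a hive-env properties dict,
    falling back to 'container' when the key is missing, as it can be
    after an Ambari 2.7.1 -> 2.7.3 upgrade."""
    # dict.get never raises KeyError, unlike hive_env["..."] indexing.
    return hive_env.get("beeline_jdbc_url_default", "container")
```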
12-20-2018
06:01 AM
2 Kudos
Disclaimer: This article is based on my personal experience and knowledge. Don't take it as a standard guideline; understand the concept and adapt it to your environment's best practices and use case. Always contact Hortonworks Support if it is a production cluster.

Problem Statement: I have installed HDF 3.1 with Ambari 2.6.2.2. I am upgrading Ambari to 2.7.0.0 in order to upgrade HDF to a 3.2+ version, and the upgrade fails with the exception below:
INFO: about to run command: /usr/java/jdk1.8.0_162/bin/java -cp '/etc/ambari-server/conf:/usr/lib/ambari-server/*:/usr/java/latest/mysql-connector-java.jar:/usr/share/java/mysql-connector-java.jar' org.apache.ambari.server.upgrade.SchemaUpgradeHelper > /var/log/ambari-server/ambari-server.out 2>&1
INFO:
process_pid=16599
Traceback (most recent call last):
File "/usr/sbin/ambari-server.py", line 1060, in <module>
mainBody()
File "/usr/sbin/ambari-server.py", line 1030, in mainBody
main(options, args, parser)
File "/usr/sbin/ambari-server.py", line 980, in main
action_obj.execute()
File "/usr/sbin/ambari-server.py", line 79, in execute
self.fn(*self.args, **self.kwargs)
File "/usr/lib/ambari-server/lib/ambari_server/serverUpgrade.py", line 262, in upgrade
retcode = run_schema_upgrade(args)
File "/usr/lib/ambari-server/lib/ambari_server/serverUpgrade.py", line 162, in run_schema_upgrade
upgrade_response = json.loads(stdout)
File "/usr/lib/ambari-server/lib/ambari_simplejson/__init__.py", line 307, in loads
return _default_decoder.decode(s)
File "/usr/lib/ambari-server/lib/ambari_simplejson/decoder.py", line 335, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/ambari-server/lib/ambari_simplejson/decoder.py", line 353, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
Inspecting the ambari-server log, I found this exception:
2018-09-04 06:34:55,967 ERROR [main] SchemaUpgradeHelper:238 - Upgrade failed. java.lang.RuntimeException: Trying to create a ServiceComponent not recognized in stack info, clusterName=c174, serviceName=AMBARI_INFRA, componentName=INFRA_SOLR_CLIENT, stackInfo=HDF-3.1 at org.apache.ambari.server.state.ServiceComponentImpl.updateComponentInfo
PS: here c174 is my cluster name.

Root cause: Starting from Ambari 2.7.0, the order of the Ambari upgrade steps has changed. We first need to run the upgrade-mpack command and only then perform the ambari-server upgrade. The order of execution is:
ambari-server upgrade-mpack \
--mpack=http://public-repo-1.hortonworks.com/HDF/centos7/3.x/updates/<version>/tars/hdf_ambari_mp/hdf-ambari-mpack-<version>-<build-number>.tar.gz \
--verbose
ambari-server upgrade
Please refer to the documentation before upgrading:
HDF-3.3: https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.3.0/ambari-managed-hdf-upgrade/content/hdf-upgrade_ambari_and_the_hdf_management_pack.html
HDF-3.2: https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.2.0/ambari-managed-hdf-upgrade/content/hdf-upgrade_ambari_and_the_hdf_management_pack.html
12-19-2018
06:08 PM
Consider a cluster with SSO configured whose SSO certificate contains more than 2048 characters, upgrading to Ambari 2.7.0 with the command: ambari-server upgrade
There is a high chance your upgrade will get stuck at the schema upgrade phase, and in the ambari-server log you will find the exception below:
Internal Exception: java.sql.BatchUpdateException: Batch entry 2 INSERT INTO ambari_configuration (property_name, category_name, property_value) VALUES ('ambari.sso.provider.certificate','sso-configuration','<LONG CERTIFICATE VALUE>') was aborted: ERROR: value too long for type character varying(2048) Call getNextException to see other errors in the batch.
Root Cause: While upgrading to Ambari 2.7.x and later, Ambari moves sensitive data from ambari.properties into the ambari_configuration database table. If any of this sensitive data exceeds 2048 characters, the insert fails. Certificates are usually not longer than 2048 characters, but when one is, you hit the above error.
This issue is fixed in Ambari-2.8 as part of : https://issues.apache.org/jira/browse/AMBARI-24992
The workaround here is:
1. Edit the ambari.properties file and remove the entry for authentication.jwt.publicKey
2. Perform the Ambari upgrade
3. Manually alter the database:
ALTER TABLE ambari_configuration ALTER COLUMN property_value TYPE VARCHAR(4000);
NOTE: This syntax is specific to Postgres
4. Manually insert the relevant PEM file's contents into the database:
INSERT INTO ambari_configuration(category_name, property_name, property_value)
VALUES ('sso-configuration', 'ambari.sso.provider.certificate', '<CONTENT OF THE PEM FILE>');
5. Restart the Ambari database
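Before upgrading, you can check in advance whether your certificate would overflow the original VARCHAR(2048) column. A small sketch of that check (the helper name is illustrative; the 2048 limit comes from the column definition above):

```python
def exceeds_varchar_limit(pem_text, limit=2048):
    """Return True when a property value (e.g. a PEM certificate)
    would overflow the pre-AMBARI-24992 VARCHAR(2048)
    ambari_configuration.property_value column."""
    # The database limit is in characters, so a plain length check
    # on the full PEM string (headers and newlines included) suffices.
    return len(pem_text) > limit
```

Running this against the value of authentication.jwt.publicKey in ambari.properties before the upgrade tells you whether you need the workaround at all.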
12-19-2018
05:52 PM
2 Kudos
Disclaimer: This article is based on my personal experience and knowledge. Don't take it as a standard guideline; understand the concept and adapt it to your environment's best practices and use case. Always contact Hortonworks Support if it is a production cluster.
Issue Description: Ambari shows the wrong NiFi version after upgrading to HDF 3.3. As per the release notes, the NiFi version in HDF-3.3 is 1.8, but Ambari shows it as 1.7 on the Stacks and Versions page.

Root cause: This is a known bug in hdf-ambari-mpack-3.3.0.0-165 (the mpack used by Ambari to manage HDF-3.3). The issue is fixed in the HDF-3.3.1 mpack.

Solution: The version shown in Stacks and Versions can be ignored, as only Ambari's display is wrong. The actual NiFi version can be verified in the NiFi UI or with the hdf-select command on a host where NiFi is installed.
Workaround :
Navigate to : /var/lib/ambari-server/resources/mpacks/hdf-ambari-mpack-3.3.0.0-165/hdp-addon-services/HDF/3.3/NIFI/1.7.0
Edit the File : metainfo.xml
Change the line :
<version>1.7.0</version>
to
<version>1.8.0</version>
Restart the Ambari server.
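If you prefer not to edit metainfo.xml by hand, the one-line change can be scripted with a standard XML parser. A hedged sketch, assuming the metainfo.xml path above and that the stale <version> element contains exactly 1.7.0 (the function name is illustrative):

```python
import xml.etree.ElementTree as ET


def bump_nifi_version(metainfo_path, old="1.7.0", new="1.8.0"):
    """Rewrite any <version> element equal to `old` in metainfo.xml
    to `new`; return True if the file was changed."""
    tree = ET.parse(metainfo_path)
    changed = False
    for elem in tree.iter("version"):
        if elem.text and elem.text.strip() == old:
            elem.text = new
            changed = True
    if changed:
        # Write back in place; keep a copy of the original first
        # if you want a manual rollback path.
        tree.write(metainfo_path)
    return changed
```

Note that ElementTree rewrites the file without preserving comments, so for a heavily annotated metainfo.xml the manual edit may still be preferable.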
12-19-2018
03:02 PM
@Tim Verhoeven, it would be helpful if you could kindly create a new thread for the issue you are facing.
09-21-2018
09:04 AM
1 Kudo
There can be a situation where you are adding a host in Ambari and the Add Host Wizard gets stuck in the UI with the message: "Please wait while the hosts are being checked for potential problems". It is hard to proceed because the Next button stays disabled, and you might need to wait indefinitely for the host to respond. To analyze what happened, we would need to look at the ambari-agent hosts and the ambari-server logs. Since this host check usually takes long, we can skip it using Ambari's experimental wizard settings.

Disabling the host check on the Add Host Wizard:
1) In another tab, navigate to the URL: http://<AMBARI-SERVER>:8080/#/experimental
2) Tick the 'disableHostCheckOnAddHostWizard' checkbox and save it.
3) Close the Add Host Wizard using the Close button in the UI and retry the operation.

Note: This article is not applicable to Ambari 2.7.0 and higher versions.
09-07-2018
04:58 PM
@Fraser Campbell, Glad that it helped you :). Please vote for this article if you liked it.