Created 12-13-2016 12:02 AM
Hello all,
I'm doing an install of HDP 2.5.3. using Ambari 2.4.2.0-136, and the final install continues to fail because of this:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/ATLAS/0.1.0.2.3/package/scripts/atlas_client.py", line 57, in <module>
    AtlasClient().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/ATLAS/0.1.0.2.3/package/scripts/atlas_client.py", line 45, in install
    self.install_packages(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 567, in install_packages
    retry_count=agent_stack_retry_count)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
    self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 51, in install_package
    self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 86, in checked_call_with_retries
    return self._call_with_retries(cmd, is_checked=True, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 98, in _call_with_retries
    code, out = func(cmd, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 293, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install atlas-metadata_2_5_0_0_1245' returned 1. Error: Nothing to do
The error "Nothing to do" is being thrown because "atlas-metadata_2_5_0_0_1245" doesn't exist in the HDP 2.5.3 repo, manually trying to install this shows this. When I do "yum search atlas" it shows this version: atlas-metadata_2_5_3_0_37 but yet Ambari is not set to install this version for some reason even though in the install wizard I specified 2.5.3.
Here is what my HDP.repo looks like on every node:
[HDP-2.5]
name=HDP-2.5
baseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0
path=/
enabled=1
gpgcheck=0
I ran through this exact same install two weeks ago (on 2.5, possibly 2.5.0; I had it grab the latest 2.5) and had zero problems. However, I should mention that these machines were all used in that previous install and I'm doing a re-install now. I've made sure everything was wiped off the machines based on the information found here: https://community.hortonworks.com/questions/1110/how-to-completely-remove-uninstall-ambari-and-hdp.h... and here: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_command-line-installation/content/ch_unin...
Ambari's pre-install checks all passed with no errors. Can anyone help me figure this out? Reinstalling all of the machines from scratch is nearly impossible right now. I don't think anything is wrong with the repo setup on the machines; Ambari installed that correctly, it simply isn't looking for the right packages.
Note: the DataNode / YARN services installed properly; this seems to affect just Atlas and the App Timeline Server.
Thanks
Created 12-13-2016 12:09 AM
Seems like you have old repos under /etc/yum.repos.d/.
Please try this:
rm -f /etc/yum.repos.d/HDP*
yum clean all
Then retry the install from Ambari; Ambari will create the repo files it needs.
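To double-check the cleanup before retrying (just a quick sketch):
ls /etc/yum.repos.d/    # no HDP* files should be left
yum repolist enabled    # after the retry, only the repos Ambari recreated should appear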
Created 12-13-2016 04:51 AM
Hi @rgangappa, thanks for the reply. Unfortunately I've already tried this and it didn't fix the issue.
I don't believe the HDP.repo file is the problem; the correct repo is being used. It's just that Ambari is instructing yum to install the wrong package, a package that does not exist in the correct repo that yum has loaded.
Created 12-13-2016 09:36 PM
This is resolved. There were a couple of directories left over under /usr/hdp from the old version. It seems Ambari uses that path to determine which version it needs to install; I'm not sure how to articulate it further, but some version metadata was still being picked up because those directories still existed.
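For anyone hitting the same thing, this is roughly what the check and cleanup looked like on my nodes (a sketch; 2.5.0.0-1245 is the old version from my previous install, so adjust it to whatever stale version you see, and hdp-select may already be gone if the uninstall removed it):
ls -l /usr/hdp                  # should only show "current" plus the version being installed
hdp-select versions             # the versions the agents can see, if hdp-select is still present
rm -rf /usr/hdp/2.5.0.0-1245    # example: remove the stale version directory left over from the old install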
Also doing "python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --silent" on every machine helped clean up things that were missed. I forgot to run this step.
Created 05-04-2017 01:34 PM
I had this issue with 2.6 too, thanks! Just so other people can find it, this was my error:
Execution of '/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm hadoop_2_5_0_2_3-yarn' returned 104. No provider of 'hadoop_2_5_0_2_3-yarn' found.
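The same idea applies on SLES, just with zypper instead of yum (a quick sketch; the package name is the one from my error):
zypper search hadoop | grep yarn    # see which hadoop YARN package versions the repo actually provides
ls -l /usr/hdp                      # check for leftover version directories from a previous install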