Created 10-17-2017 11:29 AM
Hi,
I am trying to install HCP 1.3 (package) with HDP 2.5 and I am experiencing a repository error. Any help will be appreciated.
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 174, in <module>
    DataNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 285, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 49, in install
    self.install_packages(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 576, in install_packages
    retry_count=agent_stack_retry_count)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
    self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 51, in install_package
    self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 86, in checked_call_with_retries
    return self._call_with_retries(cmd, is_checked=True, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 98, in _call_with_retries
    code, out = func(cmd, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 293, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install hadoop_2_5_3_0_37' returned 1.

One of the configured repositories failed (ES-Curator-4.x), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this:

 1. Contact the upstream for the repository and get them to fix the problem.
 2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work).
 3. Run the command with the repository temporarily disabled:
        yum --disablerepo=ES-Curator-4.x ...
 4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage:
        yum-config-manager --disable ES-Curator-4.x
        or
        subscription-manager repos --disable=ES-Curator-4.x
 5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo. when it runs most commands, so will have to try and fail each time (and thus yum will be much slower). If it is a very temporary problem though, this is often a nice compromise:
        yum-config-manager --save --setopt=ES-Curator-4.x.skip_if_unavailable=true

failure: repodata/repomd.xml from ES-Curator-4.x: [Errno 256] No more mirrors to try.
http://packages.elastic.co/curator/4/centos/7/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: packages.elastic.co; Unknown error"

stdout: /var/lib/ambari-agent/data/output-239.txt
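The numbered workarounds in the yum message above reduce to a couple of commands. A minimal sketch, using the ES-Curator-4.x repo id from the error (only do this if you deliberately want to bypass that repo rather than fix connectivity):

```shell
# Option 3: skip the failing repo for this one install only
yum --disablerepo=ES-Curator-4.x -y install hadoop_2_5_3_0_37

# Option 5: mark the repo skippable so yum tolerates it being unreachable
yum-config-manager --save --setopt=ES-Curator-4.x.skip_if_unavailable=true
```

Note that skipping the Curator repo only masks the symptom; if the host cannot reach the other repos either, the install will still fail.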
2017-10-17 16:46:51,684 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2017-10-17 16:46:51,685 - Group['metron'] {} 2017-10-17 16:46:51,686 - Group['livy'] {} 2017-10-17 16:46:51,686 - Group['elasticsearch'] {} 2017-10-17 16:46:51,687 - Group['spark'] {} 2017-10-17 16:46:51,687 - Group['zeppelin'] {} 2017-10-17 16:46:51,687 - Group['hadoop'] {} 2017-10-17 16:46:51,687 - Group['kibana'] {} 2017-10-17 16:46:51,687 - Group['users'] {} 2017-10-17 16:46:51,688 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-10-17 16:46:51,689 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-10-17 16:46:51,689 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-10-17 16:46:51,690 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-10-17 16:46:51,691 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']} 2017-10-17 16:46:51,691 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-10-17 16:46:51,692 - User['metron'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-10-17 16:46:51,693 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-10-17 16:46:51,693 - User['elasticsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-10-17 16:46:51,694 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-10-17 16:46:51,695 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']} 2017-10-17 16:46:51,696 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-10-17 16:46:51,696 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-10-17 16:46:51,697 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': 
True, 'groups': [u'hadoop']} 2017-10-17 16:46:51,698 - User['kibana'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-10-17 16:46:51,698 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-10-17 16:46:51,699 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-10-17 16:46:51,700 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-10-17 16:46:51,701 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-10-17 16:46:51,702 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'} 2017-10-17 16:46:51,710 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if 2017-10-17 16:46:51,710 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'} 2017-10-17 16:46:51,712 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-10-17 16:46:51,713 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'} 2017-10-17 16:46:51,721 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if 2017-10-17 16:46:51,721 - Group['hdfs'] {} 2017-10-17 16:46:51,722 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']} 2017-10-17 16:46:51,722 - FS Type: 2017-10-17 16:46:51,722 - Directory['/etc/hadoop'] {'mode': 0755} 2017-10-17 16:46:51,723 - 
Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777} 2017-10-17 16:46:51,743 - Initializing 6 repositories 2017-10-17 16:46:51,744 - Repository['HDP-2.5'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.0.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None} 2017-10-17 16:46:51,751 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.0.0\n\npath=/\nenabled=1\ngpgcheck=0'} 2017-10-17 16:46:51,752 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None} 2017-10-17 16:46:51,755 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'} 2017-10-17 16:46:51,755 - Repository['HCP-1.3.0.0-51'] {'base_url': 'http://s3.amazonaws.com/dev.hortonworks.com/HCP/centos6/1.x/BUILDS/1.3.0.0-51', 'action': ['create'], 'components': [u'METRON', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'METRON', 'mirror_list': None} 2017-10-17 16:46:51,758 - File['/etc/yum.repos.d/METRON.repo'] {'content': 
'[HCP-1.3.0.0-51]\nname=HCP-1.3.0.0-51\nbaseurl=http://s3.amazonaws.com/dev.hortonworks.com/HCP/centos6/1.x/BUILDS/1.3.0.0-51\n\npath=/\nenabled=1\ngpgcheck=0'} 2017-10-17 16:46:51,759 - Repository['ES-Curator-4.x'] {'base_url': 'http://packages.elastic.co/curator/4/centos/7', 'action': ['create'], 'components': [u'CURATOR', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'CURATOR', 'mirror_list': None} 2017-10-17 16:46:51,761 - File['/etc/yum.repos.d/CURATOR.repo'] {'content': '[ES-Curator-4.x]\nname=ES-Curator-4.x\nbaseurl=http://packages.elastic.co/curator/4/centos/7\n\npath=/\nenabled=1\ngpgcheck=0'} 2017-10-17 16:46:51,762 - Repository['kibana-4.x'] {'base_url': 'http://packages.elastic.co/kibana/4.5/centos', 'action': ['create'], 'components': [u'KIBANA', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'KIBANA', 'mirror_list': None} 2017-10-17 16:46:51,765 - File['/etc/yum.repos.d/KIBANA.repo'] {'content': '[kibana-4.x]\nname=kibana-4.x\nbaseurl=http://packages.elastic.co/kibana/4.5/centos\n\npath=/\nenabled=1\ngpgcheck=0'} 2017-10-17 16:46:51,765 - Repository['elasticsearch-2.x'] {'base_url': 'https://packages.elastic.co/elasticsearch/2.x/centos', 'action': ['create'], 'components': [u'ELASTICSEARCH', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ELASTICSEARCH', 'mirror_list': None} 2017-10-17 16:46:51,768 - File['/etc/yum.repos.d/ELASTICSEARCH.repo'] {'content': '[elasticsearch-2.x]\nname=elasticsearch-2.x\nbaseurl=https://packages.elastic.co/elasticsearch/2.x/centos\n\npath=/\nenabled=1\ngpgcheck=0'} 
2017-10-17 16:46:51,769 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-10-17 16:46:51,860 - Skipping installation of existing package unzip 2017-10-17 16:46:51,861 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-10-17 16:46:51,875 - Skipping installation of existing package curl 2017-10-17 16:46:51,876 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-10-17 16:46:51,890 - Skipping installation of existing package hdp-select 2017-10-17 16:46:52,208 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2017-10-17 16:46:52,210 - Stack Feature Version Info: stack_version=2.5, version=None, current_cluster_version=None -> 2.5 2017-10-17 16:46:52,225 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2017-10-17 16:46:52,237 - checked_call['rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g''] {'stderr': -1} 2017-10-17 16:46:52,276 - checked_call returned (0, '2.5.3.0-37', '') 2017-10-17 16:46:52,280 - Package['hadoop_2_5_3_0_37'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-10-17 16:46:52,373 - Installing package hadoop_2_5_3_0_37 ('/usr/bin/yum -d 0 -e 0 -y install hadoop_2_5_3_0_37') 2017-10-17 16:48:16,670 - Execution of '/usr/bin/yum -d 0 -e 0 -y install hadoop_2_5_3_0_37' returned 1. One of the configured repositories failed (HDP-UTILS-1.1.0.21), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this: 1. Contact the upstream for the repository and get them to fix the problem. 2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work). 3. 
Run the command with the repository temporarily disabled yum --disablerepo=HDP-UTILS-1.1.0.21 ... 4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage: yum-config-manager --disable HDP-UTILS-1.1.0.21 or subscription-manager repos --disable=HDP-UTILS-1.1.0.21 5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo. when it runs most commands, so will have to try and fail each time (and thus. yum will be be much slower). If it is a very temporary problem though, this is often a nice compromise: yum-config-manager --save --setopt=HDP-UTILS-1.1.0.21.skip_if_unavailable=true failure: repodata/repomd.xml from HDP-UTILS-1.1.0.21: [Errno 256] No more mirrors to try. http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7/repodata/repomd.xml: [Errno 12] Timeout on http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7/repodata/repomd.xml: (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds') http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to public-repo-1.hortonworks.com:80; Operation now in progress" http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: public-repo-1.hortonworks.com; Unknown error" 2017-10-17 16:48:16,670 - Failed to install package hadoop_2_5_3_0_37. Executing '/usr/bin/yum clean metadata' 2017-10-17 16:48:16,883 - Retrying to install package hadoop_2_5_3_0_37 after 30 seconds Command failed after 1 tries
Created 10-17-2017 11:36 AM
The error says:
http://packages.elastic.co/curator/4/centos/7/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: packages.elastic.co; Unknown error"
http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to public-repo-1.hortonworks.com:80; Operation now in progress"
http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7/repodata/repomd.xml: [Errno 14] curl#6 - "Could not resolve host: public-repo-1.hortonworks.com; Unknown error"
It looks like the host where the yum install command is running does not have internet access to reach public-repo-1.hortonworks.com or packages.elastic.co. You can verify with:

# wget http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7/repodata/repomd.xml
# wget http://packages.elastic.co/curator/4/centos/7/repodata/repomd.xml
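Before anything else it is worth confirming that the repo host names resolve at all, since the log shows "Could not resolve host" errors. A minimal check (getent is part of glibc and available on CentOS 7 by default):

```shell
# A DNS failure here would explain the "Could not resolve host" errors in the log
getent hosts public-repo-1.hortonworks.com || echo "DNS lookup failed for public-repo-1.hortonworks.com"
getent hosts packages.elastic.co || echo "DNS lookup failed for packages.elastic.co"
```

If these fail, check /etc/resolv.conf on the host before touching any repo configuration.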
Please check your Internet connectivity. Also, please double-check whether you have enabled any yum proxy setting by mistake, or, if your host does need a proxy, add that setting to "/etc/yum.conf":

# cat /etc/yum.conf

Example:

# grep 'proxy' ~/.bash_profile
# grep 'proxy' /etc/profile
# grep 'proxy' /etc/yum.conf
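The greps above just look for a proxy entry. As a self-contained illustration of what a hit would look like (the /tmp sample file and proxy.example.com host below are made up for the demo, not your real config):

```shell
# Write a throwaway sample config with a hypothetical proxy entry
printf 'proxy=http://proxy.example.com:3128\n' > /tmp/yum.conf.sample

# The same grep the answer suggests, run against the sample
grep -i 'proxy' /tmp/yum.conf.sample
# → proxy=http://proxy.example.com:3128
```

A line like that in your real /etc/yum.conf means every yum request goes through the proxy, so a dead or misconfigured proxy produces exactly these "Could not resolve host" / timeout failures.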
Created 10-18-2017 07:02 AM
Unfortunately, it is not pinging. Does that mean the HCP package is broken?
Pinging dualstack.download-colb-770446651.us-east-1.elb.amazonaws.com [54.225.188.6] with 32 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Created 10-18-2017 07:40 AM
No, the HCP packages are not broken. It looks like you have either a network (connectivity) issue or a proxy issue on the host where you are trying to install the packages.

For example, I can see that wget is working fine without any issue:
[root@sandbox ~]# wget http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7/repodata/repomd.xml
--2017-10-18 07:38:58--  http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7/repodata/repomd.xml
Resolving public-repo-1.hortonworks.com... 52.222.178.239, 52.222.178.149, 52.222.178.202, ...
Connecting to public-repo-1.hortonworks.com|52.222.178.239|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2996 (2.9K) [application/xml]
Saving to: “repomd.xml”

100%[======================================>] 2,996       --.-K/s   in 0s

2017-10-18 07:38:58 (334 MB/s) - “repomd.xml” saved [2996/2996]
And
[root@sandbox ~]# wget http://packages.elastic.co/curator/4/centos/7/repodata/repomd.xml
--2017-10-18 07:39:25--  http://packages.elastic.co/curator/4/centos/7/repodata/repomd.xml
Resolving packages.elastic.co... 184.72.218.26, 184.73.156.41, 54.225.188.6, ...
Connecting to packages.elastic.co|184.72.218.26|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1312 (1.3K) [application/xml]
Saving to: “repomd.xml.1”

100%[======================================>] 1,312       --.-K/s   in 0s

2017-10-18 07:39:26 (165 MB/s) - “repomd.xml.1” saved [1312/1312]
For testing, you can run the "wget" command against some other common website to verify whether your host has any Internet issue.
Created 11-19-2017 01:40 PM
I am experiencing this error now:
stderr: /var/lib/ambari-agent/data/errors-118.txt

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana.py", line 69, in <module>
    AmsGrafana().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 285, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana.py", line 31, in install
    self.install_packages(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 576, in install_packages
    retry_count=agent_stack_retry_count)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
    self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 51, in install_package
    self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 86, in checked_call_with_retries
    return self._call_with_retries(cmd, is_checked=True, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 98, in _call_with_retries
    code, out = func(cmd, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 293, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install ambari-metrics-monitor' returned 1. Error: Nothing to do

stdout: /var/lib/ambari-agent/data/output-118.txt
2017-11-19 17:20:37,959 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2017-11-19 17:20:37,960 - Group['metron'] {} 2017-11-19 17:20:37,961 - Group['livy'] {} 2017-11-19 17:20:37,961 - Group['elasticsearch'] {} 2017-11-19 17:20:37,961 - Group['spark'] {} 2017-11-19 17:20:37,962 - Group['zeppelin'] {} 2017-11-19 17:20:37,962 - Group['hadoop'] {} 2017-11-19 17:20:37,962 - Group['kibana'] {} 2017-11-19 17:20:37,962 - Group['users'] {} 2017-11-19 17:20:37,963 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-19 17:20:37,963 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-19 17:20:37,964 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-19 17:20:37,965 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-19 17:20:37,966 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']} 2017-11-19 17:20:37,966 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-19 17:20:37,967 - User['metron'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-19 17:20:37,968 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-19 17:20:37,969 - User['elasticsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-19 17:20:37,969 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-19 17:20:37,970 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']} 2017-11-19 17:20:37,971 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-19 17:20:37,971 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-19 17:20:37,972 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': 
True, 'groups': [u'hadoop']} 2017-11-19 17:20:37,973 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-19 17:20:37,974 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-19 17:20:37,974 - User['kibana'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-19 17:20:37,975 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-19 17:20:37,976 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-19 17:20:37,976 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']} 2017-11-19 17:20:37,977 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-19 17:20:37,979 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'} 2017-11-19 17:20:37,987 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if 2017-11-19 17:20:37,987 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'} 2017-11-19 17:20:37,988 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-19 17:20:37,989 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'} 2017-11-19 17:20:37,997 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if 2017-11-19 17:20:37,997 - Group['hdfs'] {} 2017-11-19 
17:20:37,997 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']} 2017-11-19 17:20:37,998 - FS Type: 2017-11-19 17:20:37,998 - Directory['/etc/hadoop'] {'mode': 0755} 2017-11-19 17:20:38,012 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2017-11-19 17:20:38,013 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777} 2017-11-19 17:20:38,028 - Initializing 6 repositories 2017-11-19 17:20:38,028 - Repository['HDP-2.5'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None} 2017-11-19 17:20:38,035 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0\n\npath=/\nenabled=1\ngpgcheck=0'} 2017-11-19 17:20:38,036 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None} 2017-11-19 17:20:38,039 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'} 2017-11-19 17:20:38,039 - Repository['HCP-1.3.0.0-51'] {'base_url': 'http://s3.amazonaws.com/dev.hortonworks.com/HCP/centos6/1.x/BUILDS/1.3.0.0-51', 'action': ['create'], 
'components': [u'METRON', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'METRON', 'mirror_list': None} 2017-11-19 17:20:38,042 - File['/etc/yum.repos.d/METRON.repo'] {'content': '[HCP-1.3.0.0-51]\nname=HCP-1.3.0.0-51\nbaseurl=http://s3.amazonaws.com/dev.hortonworks.com/HCP/centos6/1.x/BUILDS/1.3.0.0-51\n\npath=/\nenabled=1\ngpgcheck=0'} 2017-11-19 17:20:38,043 - Repository['ES-Curator-4.x'] {'base_url': 'http://packages.elastic.co/curator/4/centos/7', 'action': ['create'], 'components': [u'CURATOR', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'CURATOR', 'mirror_list': None} 2017-11-19 17:20:38,046 - File['/etc/yum.repos.d/CURATOR.repo'] {'content': '[ES-Curator-4.x]\nname=ES-Curator-4.x\nbaseurl=http://packages.elastic.co/curator/4/centos/7\n\npath=/\nenabled=1\ngpgcheck=0'} 2017-11-19 17:20:38,046 - Repository['kibana-4.x'] {'base_url': 'http://packages.elastic.co/kibana/4.5/centos', 'action': ['create'], 'components': [u'KIBANA', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'KIBANA', 'mirror_list': None} 2017-11-19 17:20:38,049 - File['/etc/yum.repos.d/KIBANA.repo'] {'content': '[kibana-4.x]\nname=kibana-4.x\nbaseurl=http://packages.elastic.co/kibana/4.5/centos\n\npath=/\nenabled=1\ngpgcheck=0'} 2017-11-19 17:20:38,049 - Repository['elasticsearch-2.x'] {'base_url': 'https://packages.elastic.co/elasticsearch/2.x/centos', 'action': ['create'], 'components': [u'ELASTICSEARCH', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else 
%}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ELASTICSEARCH', 'mirror_list': None} 2017-11-19 17:20:38,052 - File['/etc/yum.repos.d/ELASTICSEARCH.repo'] {'content': '[elasticsearch-2.x]\nname=elasticsearch-2.x\nbaseurl=https://packages.elastic.co/elasticsearch/2.x/centos\n\npath=/\nenabled=1\ngpgcheck=0'} 2017-11-19 17:20:38,053 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-11-19 17:20:38,144 - Skipping installation of existing package unzip 2017-11-19 17:20:38,144 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-11-19 17:20:38,161 - Skipping installation of existing package curl 2017-11-19 17:20:38,162 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-11-19 17:20:38,179 - Skipping installation of existing package hdp-select 2017-11-19 17:20:38,387 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2017-11-19 17:20:38,390 - checked_call['hostid'] {} 2017-11-19 17:20:38,396 - checked_call returned (0, 'a8c0570a') 2017-11-19 17:20:38,400 - Package['ambari-metrics-monitor'] {'retry_on_repo_unavailability': False, 'retry_count': 5} 2017-11-19 17:20:38,492 - Installing package ambari-metrics-monitor ('/usr/bin/yum -d 0 -e 0 -y install ambari-metrics-monitor') 2017-11-19 17:22:18,506 - Execution of '/usr/bin/yum -d 0 -e 0 -y install ambari-metrics-monitor' returned 1. Error: Nothing to do 2017-11-19 17:22:18,506 - Failed to install package ambari-metrics-monitor. Executing '/usr/bin/yum clean metadata' 2017-11-19 17:22:18,727 - Retrying to install package ambari-metrics-monitor after 30 seconds Command failed after 1 tries
The network is 8 Mb/s and is working fine. Can you please help, @Jay Kumar SenSharma?
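Worth noting: this second failure ("Error: Nothing to do") is different from the earlier repo timeout. Here yum could reach the repositories but found no installable candidate, which usually means the package is already installed or no enabled repo provides it. A diagnostic sketch using standard rpm/yum commands (run on the failing host):

```shell
# Is the package already on the box? (rpm -q exits non-zero if not installed)
rpm -q ambari-metrics-monitor || echo "ambari-metrics-monitor is not installed"

# Which repos are enabled, and does any of them offer the package?
yum repolist enabled
yum list available 'ambari-metrics-*' || echo "no candidate found in any enabled repo"
```

If rpm reports the package as already installed, the Ambari retry logic is simply tripping over an earlier partial install; if no repo offers it, the Ambari repo file on this host is missing or misconfigured.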
Created 11-19-2017 06:20 PM
It looks like a similar discussion is going on in another thread, so can you please close one of them? That way we can continue the discussion in a single thread.
https://community.hortonworks.com/questions/148091/fail-to-install-hcp-13-on-grafana-step.html