Support Questions
Find answers, ask questions, and share your expertise

Error while Configuring Ambari Cluster: App Timeline Server and Atlas Metadata Server install

[Screenshot: 3981-2.jpeg]

[Screenshot: 3950-1.jpeg]

The screenshots above show the errors that appear during the Install, Start and Test step of the Ambari Cluster Install Wizard.

Please help me, folks!

5 REPLIES

Re: Error while Configuring Ambari Cluster: App Timeline Server and Atlas Metadata Server install

Accumulo Client Install

stderr:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/accumulo_client.py", line 65, in <module>
    AccumuloClient().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/accumulo_client.py", line 36, in install
    self.install_packages(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 404, in install_packages
    Package(name)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 49, in action_install
    self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
    shell.checked_call(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install 'accumulo_2_4_*'' returned 1.
No Presto metadata available for base
Error downloading packages:
  1:perl-ExtUtils-ParseXS-3.18-2.el7.noarch: [Errno 256] No more mirrors to try.

stdout:
2016-05-04 12:58:34,301 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-05-04 12:58:34,304 - Group['spark'] {}
2016-05-04 12:58:34,305 - Group['hadoop'] {}
2016-05-04 12:58:34,306 - Group['users'] {}
2016-05-04 12:58:34,306 - Group['knox'] {}
2016-05-04 12:58:34,306 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-04 12:58:34,308 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-04 12:58:34,309 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-04 12:58:34,310 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-05-04 12:58:34,311 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-04 12:58:34,312 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-04 12:58:34,313 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-05-04 12:58:34,314 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-05-04 12:58:34,315 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-04 12:58:34,316 - Adding user User['accumulo']
2016-05-04 12:58:34,500 - User['mahout'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-04 12:58:34,502 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-04 12:58:34,503 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-05-04 12:58:34,505 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-04 12:58:34,506 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-04 12:58:34,507 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-04 12:58:34,509 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-04 12:58:34,510 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-04 12:58:34,512 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-04 12:58:34,513 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-04 12:58:34,514 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-04 12:58:34,516 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-04 12:58:34,517 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-05-04 12:58:34,520 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-05-04 12:58:34,527 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-05-04 12:58:34,528 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-05-04 12:58:34,529 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-05-04 12:58:34,530 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-05-04 12:58:34,537 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-05-04 12:58:34,538 - Group['hdfs'] {}
2016-05-04 12:58:34,538 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2016-05-04 12:58:34,540 - Directory['/etc/hadoop'] {'mode': 0755}
2016-05-04 12:58:34,541 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-05-04 12:58:34,559 - Repository['HDP-2.4'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.4.0.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2016-05-04 12:58:34,573 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.4]\nname=HDP-2.4\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.4.0.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2016-05-04 12:58:34,574 - Repository['HDP-UTILS-1.1.0.20'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2016-05-04 12:58:34,580 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.20]\nname=HDP-UTILS-1.1.0.20\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2016-05-04 12:58:34,581 - Package['unzip'] {}
2016-05-04 12:58:34,823 - Skipping installation of existing package unzip
2016-05-04 12:58:34,823 - Package['curl'] {}
2016-05-04 12:58:34,951 - Skipping installation of existing package curl
2016-05-04 12:58:34,951 - Package['hdp-select'] {}
2016-05-04 12:58:35,076 - Skipping installation of existing package hdp-select
2016-05-04 12:58:35,339 - Package['accumulo_2_4_*'] {}
2016-05-04 12:58:35,557 - Installing package accumulo_2_4_* ('/usr/bin/yum -d 0 -e 0 -y install 'accumulo_2_4_*'')
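One way to narrow this down (just a diagnostic sketch, assuming the failing host is supposed to have direct internet access and uses the public Hortonworks repos written out in the log above): the Ambari agent is ultimately just calling yum, so running the same install by hand and checking that the repositories are reachable shows whether this is a repo/connectivity problem rather than an Ambari problem.

# Re-run the exact install the Ambari agent attempted, this time with full yum output
sudo yum -y install 'accumulo_2_4_*'

# List the repos yum has enabled; HDP-2.4 and HDP-UTILS-1.1.0.20 from the log should appear
yum repolist enabled

# Check that the repo base_url from the log actually responds
# (repodata/repomd.xml is the standard yum repository metadata path)
curl -I http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.4.0.0/repodata/repomd.xml

If the curl check fails or hangs, the "[Errno 256] No more mirrors to try" error is most likely a network, proxy, or DNS issue on that host rather than anything specific to Accumulo.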

Re: Error while Configuring Ambari Cluster: App Timeline Server and Atlas Metadata Server install

I think this bug affects all of the services in the Ambari Cluster Install Wizard, for example the Atlas Metadata Server, the Accumulo service, etc.

If one service gets fixed, I hope the rest can be fixed the same way.

Re: Error while Configuring Ambari Cluster: App Timeline Server and Atlas Metadata Server install

Thank you for your support, @Emil.

I tried to install it manually, but I get this error when running:

$ yum install perl-ExtUtils-Embed

No Presto metadata available for base

Error downloading packages: 1:perl-ExtUtils-ParseXS-3.18-2.el7.noarch:

How do I fix this issue?

If I try to install the downloaded package perl-ExtUtils-ParseXS-3.18-2.el7.noarch directly, I get the same error:

No Presto metadata available for base

Error downloading package

Please help me fix this issue.
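A couple of things that may be worth trying (only a sketch, assuming the problem is stale or corrupt yum metadata rather than a blocked network path; the "No Presto metadata available for base" line usually just refers to missing delta-RPM metadata and is normally not the root cause, the real failure is the download error that follows it):

# Clear all cached repository metadata, rebuild the cache, then retry the install
sudo yum clean all
sudo yum makecache
sudo yum install perl-ExtUtils-Embed

# If the perl-ExtUtils-ParseXS rpm has already been downloaded by hand, it can be
# installed from the local file instead of going back to the mirror
# (the exact file name below is an assumption based on the version in the error message)
sudo yum localinstall perl-ExtUtils-ParseXS-3.18-2.el7.noarch.rpm

If the same download error still appears after cleaning the cache, it points back to connectivity or proxy settings on the host rather than the package itself.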

Re: Error while Configuring Ambari Cluster: App Timeline Server and Atlas Metadata Server install

I am also facing this issue.

Re: Error while Configuring Ambari Cluster: App Timeline Server and Atlas Metadata Server install


Did anybody get a solution to this issue?