
Unable to install Hadoop on ambari-server node and lost another node's metrics

New Contributor

(Screenshot attached: 4257-ambari-hadoop.png)

The error is below:

stderr:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/accumulo_client.py", line 65, in <module>
    AccumuloClient().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/accumulo_client.py", line 36, in install
    self.install_packages(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 404, in install_packages
    Package(name)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 49, in action_install
    self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
    shell.checked_call(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install 'accumulo_2_3_*'' returned 1.
Error: Package: netcat-openbsd-1.89-98.8.x86_64 (HDP-UTILS-1.1.0.20)
       Requires: update-alternatives
Error: Package: zookeeper_2_3_0_0_2557-3.4.6.2.3.0.0-2557.noarch (HDP-2.3.0.0)
       Requires: update-alternatives
Error: Package: hadoop_2_3_0_0_2557-2.7.1.2.3.0.0-2557.x86_64 (HDP-2.3.0.0)
       Requires: insserv
Error: Package: accumulo_2_3_0_0_2557-1.7.0.2.3.0.0-2557.x86_64 (HDP-2.3.0.0)
       Requires: insserv
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

stdout:
2016-05-16 14:37:24,628 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-05-16 14:37:24,629 - Group['spark'] {}
2016-05-16 14:37:24,630 - Group['hadoop'] {}
2016-05-16 14:37:24,631 - Group['users'] {}
2016-05-16 14:37:24,631 - Group['knox'] {}
2016-05-16 14:37:24,631 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-16 14:37:24,632 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-16 14:37:24,632 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-16 14:37:24,633 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-05-16 14:37:24,633 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-16 14:37:24,634 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-16 14:37:24,635 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-05-16 14:37:24,635 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-05-16 14:37:24,636 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-16 14:37:24,636 - User['mahout'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-16 14:37:24,637 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-16 14:37:24,638 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-05-16 14:37:24,638 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-16 14:37:24,639 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-16 14:37:24,639 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-16 14:37:24,640 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-16 14:37:24,640 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-16 14:37:24,641 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-16 14:37:24,642 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-16 14:37:24,642 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-16 14:37:24,643 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-16 14:37:24,643 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-05-16 14:37:24,645 - Writing File['/var/lib/ambari-agent/tmp/changeUid.sh'] because it doesn't exist
2016-05-16 14:37:24,645 - Changing permission for /var/lib/ambari-agent/tmp/changeUid.sh from 644 to 555
2016-05-16 14:37:24,645 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-05-16 14:37:24,650 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-05-16 14:37:24,651 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-05-16 14:37:24,651 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-05-16 14:37:24,652 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-05-16 14:37:24,657 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-05-16 14:37:24,657 - Group['hdfs'] {}
2016-05-16 14:37:24,657 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2016-05-16 14:37:24,658 - Directory['/etc/hadoop'] {'mode': 0755}
2016-05-16 14:37:24,658 - Creating directory Directory['/etc/hadoop'] since it doesn't exist.
2016-05-16 14:37:24,659 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-05-16 14:37:24,659 - Creating directory Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] since it doesn't exist.
2016-05-16 14:37:24,659 - Changing owner for /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir from 0 to hdfs
2016-05-16 14:37:24,659 - Changing group for /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir from 0 to hadoop
2016-05-16 14:37:24,659 - Changing permission for /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir from 755 to 777
2016-05-16 14:37:24,671 - Repository['HDP-2.3'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.3.4.7', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2016-05-16 14:37:24,678 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.3]\nname=HDP-2.3\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.3.4.7\n\npath=/\nenabled=1\ngpgcheck=...'}
2016-05-16 14:37:24,678 - Repository['HDP-UTILS-1.1.0.20'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2016-05-16 14:37:24,681 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.20]\nname=HDP-UTILS-1.1.0.20\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos7\n\npath=/\nenabled=1\ngpgcheck...'}
2016-05-16 14:37:24,682 - Package['unzip'] {}
2016-05-16 14:37:24,765 - Skipping installation of existing package unzip
2016-05-16 14:37:24,766 - Package['curl'] {}
2016-05-16 14:37:24,775 - Skipping installation of existing package curl
2016-05-16 14:37:24,775 - Package['hdp-select'] {}
2016-05-16 14:37:24,785 - Skipping installation of existing package hdp-select
2016-05-16 14:37:24,960 - Package['accumulo_2_3_*'] {}
2016-05-16 14:37:25,038 - Installing package accumulo_2_3_* ('/usr/bin/yum -d 0 -e 0 -y install 'accumulo_2_3_*'')
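Note: the real failure is the dependency errors near the top. update-alternatives and insserv are package names from SUSE/Debian-style distributions (on CentOS/RHEL, the update-alternatives tool is shipped inside the chkconfig package), so yum failing to resolve them usually points to a mismatch between the OS the node is running and the repositories/packages it is pulling from. A minimal diagnostic sketch, assuming root access on node1 and the default repo file names written by Ambari:

# Check which OS this node actually runs
cat /etc/os-release

# Compare against the OS baked into the HDP repo baseurl (the log above shows centos7)
cat /etc/yum.repos.d/HDP.repo /etc/yum.repos.d/HDP-UTILS.repo

# Re-run the failing command by hand to see yum's full dependency report
/usr/bin/yum -y install 'accumulo_2_3_*'

If the OS and the repo baseurl disagree, fixing the repository URLs (or the stack's OS selection in Ambari) is the place to start rather than --skip-broken.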
1 REPLY

Re: Unable to install Hadoop on ambari-server node and lost another node's metrics

New Contributor

The first time, I installed only ambari-server on node1; on node2 I manually installed ambari-agent and the Hadoop components (NN, DN, SN, Spark, etc.), along the lines of the sketch below.
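For reference, a manual agent install like the one done on node2 typically looks like this (a sketch only; the server hostname is a placeholder for your environment):

# On node2: install the agent and point it at the Ambari server
yum install -y ambari-agent
# In /etc/ambari-agent/conf/ambari-agent.ini the [server] section must name the
# Ambari server host; ambari-server.example.com below is a placeholder
sed -i 's/^hostname=.*/hostname=ambari-server.example.com/' /etc/ambari-agent/conf/ambari-agent.ini
ambari-agent start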

The second time, I tried to install ambari-agent and the Hadoop components on node1 and got the error above. Now I have lost everything: I can no longer monitor node2 through ambari-server, nor can I install the Hadoop components on node1.
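If node2's metrics disappeared after the failed install on node1, the likely cause is that the agent on node2 is stopped or has lost its registration with the server, not that anything was deleted; a restart usually makes it re-register. A first-pass check, using standard Ambari commands and the default log paths:

# On node2: is the agent alive, and what does its recent log say?
ambari-agent status
tail -n 50 /var/log/ambari-agent/ambari-agent.log

# Restart so the agent re-registers with the server
ambari-agent restart

# On node1: watch the server side of the registration handshake
tail -f /var/log/ambari-server/ambari-server.log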