
Accumulo install failing

Super Collaborator

Ambari is trying to install an older version of Accumulo, and the install fails.

stderr:   /var/lib/ambari-agent/data/errors-113.txt
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/accumulo_client.py", line 66, in <module>
    AccumuloClient().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/accumulo_client.py", line 37, in install
    self.install_packages(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 693, in install_packages
    retry_count=agent_stack_retry_count)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
    self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 51, in install_package
    self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 86, in checked_call_with_retries
    return self._call_with_retries(cmd, is_checked=True, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 98, in _call_with_retries
    code, out = func(cmd, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install accumulo_2_4_3_0_227' returned 1. Error: Nothing to do
stdout:   /var/lib/ambari-agent/data/output-113.txt
2017-06-20 14:41:24,319 - Stack Feature Version Info: stack_version=2.5, version=None, current_cluster_version=None -> 2.5
2017-06-20 14:41:24,322 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-06-20 14:41:24,323 - Group['livy'] {}
2017-06-20 14:41:24,324 - Group['spark'] {}
2017-06-20 14:41:24,324 - Group['hadoop'] {}
2017-06-20 14:41:24,325 - Group['users'] {}
2017-06-20 14:41:24,325 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,325 - Adding user User['hive']
2017-06-20 14:41:24,348 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,349 - Adding user User['storm']
2017-06-20 14:41:24,367 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,367 - Adding user User['zookeeper']
2017-06-20 14:41:24,392 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-06-20 14:41:24,393 - Adding user User['oozie']
2017-06-20 14:41:24,411 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,412 - Adding user User['ams']
2017-06-20 14:41:24,431 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-06-20 14:41:24,431 - Adding user User['falcon']
2017-06-20 14:41:24,449 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-06-20 14:41:24,449 - Adding user User['tez']
2017-06-20 14:41:24,466 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,466 - Adding user User['accumulo']
2017-06-20 14:41:24,484 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,485 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,485 - Adding user User['spark']
2017-06-20 14:41:24,511 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-06-20 14:41:24,512 - Adding user User['ambari-qa']
2017-06-20 14:41:24,537 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,537 - Adding user User['flume']
2017-06-20 14:41:24,563 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,563 - Adding user User['kafka']
2017-06-20 14:41:24,591 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,591 - Adding user User['hdfs']
2017-06-20 14:41:24,617 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,617 - Adding user User['sqoop']
2017-06-20 14:41:24,646 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,647 - Adding user User['yarn']
2017-06-20 14:41:24,673 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,673 - Adding user User['mapred']
2017-06-20 14:41:24,701 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,702 - Adding user User['hbase']
2017-06-20 14:41:24,729 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-20 14:41:24,730 - Adding user User['hcat']
2017-06-20 14:41:24,757 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-06-20 14:41:24,759 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-06-20 14:41:24,762 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-06-20 14:41:24,763 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-06-20 14:41:24,763 - Changing owner for /tmp/hbase-hbase from 1024 to hbase
2017-06-20 14:41:24,764 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-06-20 14:41:24,765 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-06-20 14:41:24,768 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-06-20 14:41:24,768 - Group['hdfs'] {}
2017-06-20 14:41:24,769 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2017-06-20 14:41:24,769 - Modifying user hdfs
2017-06-20 14:41:24,792 - FS Type: 
2017-06-20 14:41:24,792 - Directory['/etc/hadoop'] {'mode': 0755}
2017-06-20 14:41:24,792 - Creating directory Directory['/etc/hadoop'] since it doesn't exist.
2017-06-20 14:41:24,793 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-06-20 14:41:24,793 - Changing owner for /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir from 0 to hdfs
2017-06-20 14:41:24,794 - Directory['/var/lib/ambari-agent/tmp/AMBARI-artifacts/'] {'create_parents': True}
2017-06-20 14:41:24,794 - Creating directory Directory['/var/lib/ambari-agent/tmp/AMBARI-artifacts/'] since it doesn't exist.
2017-06-20 14:41:24,794 - File['/var/lib/ambari-agent/tmp/jdk-8u112-linux-x64.tar.gz'] {'content': DownloadSource('http://hadoop1.tolls.dot.state.fl.us:8080/resources//jdk-8u112-linux-x64.tar.gz'), 'not_if': 'test -f /var/lib/ambari-agent/tmp/jdk-8u112-linux-x64.tar.gz'}
2017-06-20 14:41:24,797 - Downloading the file from http://hadoop1.tolls.dot.state.fl.us:8080/resources//jdk-8u112-linux-x64.tar.gz
2017-06-20 14:41:26,206 - File['/var/lib/ambari-agent/tmp/jdk-8u112-linux-x64.tar.gz'] {'mode': 0755}
2017-06-20 14:41:26,207 - Changing permission for /var/lib/ambari-agent/tmp/jdk-8u112-linux-x64.tar.gz from 644 to 755
2017-06-20 14:41:26,207 - Directory['/usr/jdk64'] {}
2017-06-20 14:41:26,207 - Execute[('chmod', 'a+x', '/usr/jdk64')] {'sudo': True}
2017-06-20 14:41:26,212 - Execute['cd /var/lib/ambari-agent/tmp/jdk_tmp_JalOZR && tar -xf /var/lib/ambari-agent/tmp/jdk-8u112-linux-x64.tar.gz && ambari-sudo.sh cp -rp /var/lib/ambari-agent/tmp/jdk_tmp_JalOZR/* /usr/jdk64'] {}
2017-06-20 14:41:30,155 - Directory['/var/lib/ambari-agent/tmp/jdk_tmp_JalOZR'] {'action': ['delete']}
2017-06-20 14:41:30,155 - Removing directory Directory['/var/lib/ambari-agent/tmp/jdk_tmp_JalOZR'] and all its content
2017-06-20 14:41:30,244 - File['/usr/jdk64/jdk1.8.0_112/bin/java'] {'mode': 0755, 'cd_access': 'a'}
2017-06-20 14:41:30,244 - Execute[('chmod', '-R', '755', '/usr/jdk64/jdk1.8.0_112')] {'sudo': True}
2017-06-20 14:41:30,270 - Initializing 2 repositories
2017-06-20 14:41:30,270 - Repository['HDP-2.5'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-06-20 14:41:30,278 - File['/etc/yum.repos.d/HDP.repo'] {'content': InlineTemplate(...)}
2017-06-20 14:41:30,278 - Writing File['/etc/yum.repos.d/HDP.repo'] because it doesn't exist
2017-06-20 14:41:30,279 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-06-20 14:41:30,283 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': InlineTemplate(...)}
2017-06-20 14:41:30,284 - Writing File['/etc/yum.repos.d/HDP-UTILS.repo'] because it doesn't exist
2017-06-20 14:41:30,284 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-20 14:41:30,361 - Skipping installation of existing package unzip
2017-06-20 14:41:30,362 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-20 14:41:30,373 - Skipping installation of existing package curl
2017-06-20 14:41:30,373 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-20 14:41:30,384 - Installing package hdp-select ('/usr/bin/yum -d 0 -e 0 -y install hdp-select')
2017-06-20 14:41:35,089 - Package['accumulo_2_4_3_0_227'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-20 14:41:35,167 - Installing package accumulo_2_4_3_0_227 ('/usr/bin/yum -d 0 -e 0 -y install accumulo_2_4_3_0_227')
2017-06-20 14:41:35,598 - Execution of '/usr/bin/yum -d 0 -e 0 -y install accumulo_2_4_3_0_227' returned 1. Error: Nothing to do
2017-06-20 14:41:35,599 - Failed to install package accumulo_2_4_3_0_227. Executing '/usr/bin/yum clean metadata'
2017-06-20 14:41:35,775 - Retrying to install package accumulo_2_4_3_0_227 after 30 seconds

Command failed after 1 tries
[root@hadoop5 ~]# yum list | grep accumulo
accumulo.noarch                             1.7.0.2.5.3.0-37.el6         HDP-2.5
accumulo-conf-standalone.noarch             1.7.0.2.5.3.0-37.el6         HDP-2.5
accumulo-source.noarch                      1.7.0.2.5.3.0-37.el6         HDP-2.5
accumulo-test.noarch                        1.7.0.2.5.3.0-37.el6         HDP-2.5
accumulo_2_5_3_0_37.x86_64                  1.7.0.2.5.3.0-37.el6         HDP-2.5
accumulo_2_5_3_0_37-conf-standalone.x86_64  1.7.0.2.5.3.0-37.el6         HDP-2.5
accumulo_2_5_3_0_37-source.x86_64           1.7.0.2.5.3.0-37.el6         HDP-2.5
accumulo_2_5_3_0_37-test.x86_64             1.7.0.2.5.3.0-37.el6         HDP-2.5
[root@hadoop5 ~]# yum repolist
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
 * base: ftp.osuosl.org
 * epel: archive.linux.duke.edu
 * extras: centos.sonn.com
 * updates: bay.uchicago.edu
repo id                                                repo name                                                                           status
HDP-2.5                                                HDP-2.5                                                                                200
HDP-UTILS-1.1.0.21                                     HDP-UTILS-1.1.0.21                                                                      56
ambari-2.5.1.0                                         ambari Version - ambari-2.5.1.0                                                         12
base                                                   CentOS-6 - Base                                                                      6,706
epel                                                   Extra Packages for Enterprise Linux 6 - x86_64                                      12,344
extras                                                 CentOS-6 - Extras                                                                       45
updates                                                CentOS-6 - Updates                                                                     379
repolist: 19,742
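The mismatch is visible by comparing names: the agent asks yum for `accumulo_2_4_3_0_227`, but the HDP-2.5 repo only ships `accumulo_2_5_3_0_37` builds, so yum reports "Nothing to do". As a sketch (the `to_version` helper is hypothetical, just to make the embedded HDP build numbers readable), the package suffix decodes to an HDP build directory name:

```shell
#!/usr/bin/env bash
# Decode the HDP build version embedded in an HDP package name,
# e.g. accumulo_2_4_3_0_227 -> 2.4.3.0-227 (the /usr/hdp directory name).
to_version() {
  local v="${1#accumulo_}"   # drop the package prefix -> 2_4_3_0_227
  v="${v//_/.}"              # underscores to dots     -> 2.4.3.0.227
  echo "${v%.*}-${v##*.}"    # last dot becomes a dash -> 2.4.3.0-227
}

echo "requested build:  $(to_version accumulo_2_4_3_0_227)"  # 2.4.3.0-227
echo "available build:  $(to_version accumulo_2_5_3_0_37)"   # 2.5.3.0-37
```

The requested build, 2.4.3.0-227, matches an HDP 2.4.x stack, not the HDP-2.5 repo that is configured, which is why the leftover `/usr/hdp/2.4.3.0-227` directory in the accepted answer is the likely culprit.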
1 ACCEPTED SOLUTION

Super Collaborator

Ambari was picking up a leftover directory from 2.4.x in the /usr/hdp folder. After I deleted it and reinstalled ambari-server, the installation is moving forward now.

cd /usr/hdp

rm -rf 2.4.3.0-227
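Before running the `rm -rf`, it's worth confirming which version directories actually exist under /usr/hdp so only the genuinely stale one is removed. A minimal sketch of that check, simulated in a temp directory (the paths below are stand-ins, not the real cluster host):

```shell
#!/usr/bin/env bash
# Simulate /usr/hdp with a stale 2.4.x directory next to the current 2.5.x one.
hdp=$(mktemp -d)
mkdir -p "$hdp/2.4.3.0-227" "$hdp/2.5.3.0-37" "$hdp/current"

# Find leftover 2.4.x version directories (the stack being installed is 2.5).
stale=$(ls -d "$hdp"/2.4.* 2>/dev/null)

# Remove only the stale directory, leaving the 2.5 stack untouched.
[ -n "$stale" ] && rm -rf "$stale"

ls "$hdp"   # 2.5.3.0-37  current
```

On the real host the equivalent would be `ls -d /usr/hdp/[0-9]*` (and `hdp-select versions`, if available) before deleting anything.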


2 REPLIES

Super Collaborator

I also checked that the internet connection is fine and I can download the HDP repo file, but Accumulo still fails to install.

[root@hadoop5 yum.repos.d]# wget http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0/hdp.repo
--2017-06-20 15:03:35--  http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0/hdp.repo
Resolving dotatofwproxy.tolls.dot.state.fl.us... 10.100.30.27
Connecting to dotatofwproxy.tolls.dot.state.fl.us|10.100.30.27|:8080... connected.
Proxy request sent, awaiting response... 200 OK
Length: 574 [binary/octet-stream]
Saving to: “hdp.repo”
100%[=======================================================================================================>] 574         --.-K/s   in 0s
2017-06-20 15:03:35 (138 MB/s) - “hdp.repo” saved [574/574]
[root@hadoop5 yum.repos.d]# pwd
/etc/yum.repos.d
[root@hadoop5 yum.repos.d]# ls
ambari.repo       CentOS-Debuginfo.repo  CentOS-Media.repo  epel.repo          hdp.repo  HDP-UTILS.repo
CentOS-Base.repo  CentOS-fasttrack.repo  CentOS-Vault.repo  epel-testing.repo  HDP.repo
[root@hadoop5 yum.repos.d]# more hdp.repo
#VERSION_NUMBER=2.5.3.0-37
[HDP-2.5.3.0]
name=HDP Version - HDP-2.5.3.0
baseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

[HDP-UTILS-1.1.0.21]
name=HDP-UTILS Version - HDP-UTILS-1.1.0.21
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
[root@hadoop5 yum.repos.d]#

[root@hadoop5 yum.repos.d]# more HDP.repo
[HDP-2.5]
name=HDP-2.5
baseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0
path=/
enabled=1
gpgcheck=0
[root@hadoop5 yum.repos.d]#
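Note that two repo files now point at the same baseurl: the hand-downloaded `hdp.repo` (repo id `HDP-2.5.3.0`, gpgcheck=1) and the Ambari-written `HDP.repo` (repo id `HDP-2.5`, gpgcheck=0). A quick way to spot such overlaps is to grep the baseurls across all repo files; sketched here against a copy of the `HDP.repo` contents shown above (the `/tmp` path is just for illustration):

```shell
#!/usr/bin/env bash
# Reproduce the Ambari-generated HDP.repo shown above in a scratch location.
cat > /tmp/HDP.repo <<'EOF'
[HDP-2.5]
name=HDP-2.5
baseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0
path=/
enabled=1
gpgcheck=0
EOF

# List each file's baseurl; duplicate URLs mean two repo ids serve the
# same packages and may shadow each other in yum's view.
grep -H '^baseurl=' /tmp/HDP.repo
# On the real host: grep -H '^baseurl=' /etc/yum.repos.d/HDP*.repo
```

If the repo files were edited by hand, running `yum clean all` afterwards ensures yum is not resolving against stale cached metadata.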


