
Elasticsearch issue while starting Apache Metron service in Ambari

New Contributor
stderr: /var/lib/ambari-agent/data/errors-4966.txt
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/ELASTICSEARCH/2.3.3/package/scripts/elastic_slave.py", line 71, in <module>
    Elasticsearch().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/ELASTICSEARCH/2.3.3/package/scripts/elastic_slave.py", line 32, in install
    self.install_packages(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 567, in install_packages
    retry_count=agent_stack_retry_count)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
    self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
    self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 83, in checked_call_with_retries
    return self._call_with_retries(cmd, is_checked=True, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 91, in _call_with_retries
    code, out = func(cmd, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install elasticsearch-2.3.3' returned 1. Error: Nothing to do
stdout: /var/lib/ambari-agent/data/output-4966.txt
2017-06-06 14:47:25,987 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.3.0-37
2017-06-06 14:47:25,987 - Checking if need to create versioned conf dir /etc/hadoop/2.5.3.0-37/0
2017-06-06 14:47:25,988 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-06-06 14:47:26,029 - call returned (1, '/etc/hadoop/2.5.3.0-37/0 exist already', '')
2017-06-06 14:47:26,029 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-06-06 14:47:26,068 - checked_call returned (0, '')
2017-06-06 14:47:26,069 - Ensuring that hadoop has the correct symlink structure
2017-06-06 14:47:26,069 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-06-06 14:47:26,072 - Group['livy'] {}
2017-06-06 14:47:26,074 - Group['elasticsearch'] {}
2017-06-06 14:47:26,074 - Group['spark'] {}
2017-06-06 14:47:26,074 - Group['solr'] {}
2017-06-06 14:47:26,075 - Group['zeppelin'] {}
2017-06-06 14:47:26,075 - Group['hadoop'] {}
2017-06-06 14:47:26,075 - Group['users'] {}
2017-06-06 14:47:26,076 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-06 14:47:26,077 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-06 14:47:26,078 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-06 14:47:26,079 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-06-06 14:47:26,080 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-06 14:47:26,081 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-06-06 14:47:26,082 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-06 14:47:26,083 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-06 14:47:26,084 - User['elasticsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-06 14:47:26,085 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-06 14:47:26,086 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-06-06 14:47:26,087 - User['solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-06 14:47:26,088 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-06 14:47:26,089 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-06 14:47:26,090 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-06 14:47:26,091 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-06 14:47:26,092 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-06 14:47:26,093 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-06 14:47:26,094 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-06 14:47:26,095 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-06-06 14:47:26,098 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-06-06 14:47:26,105 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-06-06 14:47:26,106 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-06-06 14:47:26,107 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-06-06 14:47:26,109 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-06-06 14:47:26,116 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-06-06 14:47:26,117 - Group['hdfs'] {}
2017-06-06 14:47:26,118 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2017-06-06 14:47:26,118 - FS Type: 
2017-06-06 14:47:26,119 - Directory['/etc/hadoop'] {'mode': 0755}
2017-06-06 14:47:26,144 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-06-06 14:47:26,145 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-06-06 14:47:26,167 - Initializing 2 repositories
2017-06-06 14:47:26,168 - Repository['HDP-2.5'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-06-06 14:47:26,182 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-06-06 14:47:26,183 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-06-06 14:47:26,189 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-06-06 14:47:26,190 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:47:26,336 - Skipping installation of existing package unzip
2017-06-06 14:47:26,337 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:47:26,354 - Skipping installation of existing package curl
2017-06-06 14:47:26,354 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:47:26,371 - Skipping installation of existing package hdp-select
2017-06-06 14:47:26,609 - Install ES Data Node
2017-06-06 14:47:26,611 - Version 2.5.3.0-37 was provided as effective cluster version.  Using package version 2_5_3_0_37
2017-06-06 14:47:26,613 - Package['elasticsearch-2.3.3'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:47:26,743 - Installing package elasticsearch-2.3.3 ('/usr/bin/yum -d 0 -e 0 -y install elasticsearch-2.3.3')

Command failed after 1 tries

Re: Elasticsearch issue while starting Apache Metron service in Ambari

Rising Star

Please make sure of two things:

  1. Elasticsearch 2.3.3 is available in your configured yum repos
  2. Elasticsearch is not already installed on the machine; verify by running sudo rpm -qa | grep elasticsearch (see the example commands after this list)
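For reference, here is a minimal sketch of how to check both conditions from the shell. It assumes a CentOS 6 host managed by yum, matching the log output above; adjust package and repo names if your environment differs.

  # 1. Refresh repo metadata and confirm elasticsearch-2.3.3 is visible
  #    in an enabled repo. No output means the ES repo is missing or disabled.
  sudo yum clean metadata
  sudo yum list available elasticsearch-2.3.3

  #    Optionally list the enabled repos and check that the Elasticsearch
  #    repo appears alongside HDP-2.5 and HDP-UTILS.
  sudo yum repolist enabled

  # 2. Confirm Elasticsearch is not already installed via rpm. Any output
  #    here means a prior install exists and may need to be removed
  #    (e.g. sudo yum remove elasticsearch) before Ambari can install 2.3.3.
  sudo rpm -qa | grep elasticsearch

The "Error: Nothing to do" from yum in the stack trace is consistent with the first case: yum could not find an installable elasticsearch-2.3.3 package in any enabled repository.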