Ambari "Start all services" fails


We are running the cluster on AWS EC2 instances. Starting it with "Start All Services" in Ambari fails with the errors below:

2018-04-19 13:29:57,881 - Could not determine stack version for component accumulo-client by calling '/usr/bin/hdp-select status accumulo-client > /tmp/tmpmWsqI_'. Return Code: 1, Output: .
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/hook.py", line 37, in <module>
    AfterInstallHook().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/hook.py", line 31, in hook
    setup_stack_symlinks()
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/shared_initialization.py", line 52, in setup_stack_symlinks
    stack_select.select_all(version)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/stack_select.py", line 135, in select_all
    Execute(command, only_if = only_if_command)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh /usr/bin/hdp-select set all `ambari-python-wrap /usr/bin/hdp-select versions | grep ^2.6.3.0-235 | tail -1`' returned 1.   File "/usr/bin/hdp-select", line 242
    print "ERROR: Invalid package - " + name
                                    ^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print("ERROR: Invalid package - " + name)?


2018-04-19 13:29:57,075 - Stack Feature Version Info: stack_version=2.6, version=2.6.3.0-235, current_cluster_version=2.6.3.0-235 -> 2.6.3.0-235
2018-04-19 13:29:57,085 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2018-04-19 13:29:57,086 - Group['livy'] {}
2018-04-19 13:29:57,088 - Group['spark'] {}
2018-04-19 13:29:57,088 - Group['ranger'] {}
2018-04-19 13:29:57,088 - Group['zeppelin'] {}
2018-04-19 13:29:57,089 - Group['hadoop'] {}
2018-04-19 13:29:57,089 - Group['users'] {}
2018-04-19 13:29:57,089 - Group['knox'] {}
2018-04-19 13:29:57,089 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2018-04-19 13:29:57,090 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2018-04-19 13:29:57,090 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2018-04-19 13:29:57,090 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2018-04-19 13:29:57,091 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2018-04-19 13:29:57,091 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'ranger']}
2018-04-19 13:29:57,092 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2018-04-19 13:29:57,092 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2018-04-19 13:29:57,093 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2018-04-19 13:29:57,093 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2018-04-19 13:29:57,093 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2018-04-19 13:29:57,094 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2018-04-19 13:29:57,094 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2018-04-19 13:29:57,095 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2018-04-19 13:29:57,095 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2018-04-19 13:29:57,095 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop']}
2018-04-19 13:29:57,096 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2018-04-19 13:29:57,096 - User['mahout'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2018-04-19 13:29:57,097 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2018-04-19 13:29:57,097 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2018-04-19 13:29:57,097 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2018-04-19 13:29:57,098 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2018-04-19 13:29:57,098 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2018-04-19 13:29:57,099 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2018-04-19 13:29:57,099 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2018-04-19 13:29:57,100 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-04-19 13:29:57,101 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-04-19 13:29:57,124 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2018-04-19 13:29:57,124 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2018-04-19 13:29:57,125 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-04-19 13:29:57,126 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2018-04-19 13:29:57,148 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2018-04-19 13:29:57,148 - Group['hdfs'] {}
2018-04-19 13:29:57,149 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2018-04-19 13:29:57,149 - FS Type: 
2018-04-19 13:29:57,149 - Directory['/etc/hadoop'] {'mode': 0755}
2018-04-19 13:29:57,165 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-04-19 13:29:57,165 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-04-19 13:29:57,191 - Initializing 2 repositories
2018-04-19 13:29:57,191 - Repository['HDP-2.6'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.3.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2018-04-19 13:29:57,199 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.6]\nname=HDP-2.6\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.3.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-04-19 13:29:57,201 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2018-04-19 13:29:57,204 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-04-19 13:29:57,205 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-04-19 13:29:57,311 - Skipping installation of existing package unzip
2018-04-19 13:29:57,311 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-04-19 13:29:57,325 - Skipping installation of existing package curl
2018-04-19 13:29:57,325 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-04-19 13:29:57,339 - Skipping installation of existing package hdp-select
2018-04-19 13:29:57,662 - Version 2.6.3.0-235 was provided as effective cluster version.  Using package version 2_6_3_0_235
2018-04-19 13:29:57,663 - Package['accumulo_2_6_3_0_235'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-04-19 13:29:57,740 - Skipping installation of existing package accumulo_2_6_3_0_235
2018-04-19 13:29:57,758 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2018-04-19 13:29:57,759 - Directory['/usr/hdp/current/accumulo-client/conf'] {'owner': 'accumulo', 'group': 'hadoop', 'create_parents': True, 'mode': 0755}
2018-04-19 13:29:57,760 - XmlConfig['accumulo-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/accumulo-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'accumulo', 'configurations': ...}
2018-04-19 13:29:57,774 - Generating config: /usr/hdp/current/accumulo-client/conf/accumulo-site.xml
2018-04-19 13:29:57,774 - File['/usr/hdp/current/accumulo-client/conf/accumulo-site.xml'] {'owner': 'accumulo', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2018-04-19 13:29:57,792 - File['/usr/hdp/current/accumulo-client/conf/accumulo-env.sh'] {'content': InlineTemplate(...), 'owner': 'accumulo', 'group': 'hadoop', 'mode': 0644}
2018-04-19 13:29:57,792 - PropertiesFile['/usr/hdp/current/accumulo-client/conf/client.conf'] {'owner': 'accumulo', 'group': 'hadoop', 'properties': {'instance.zookeeper.host': u'cdpm02.dev:2181,cdpm01.dev:2181', 'instance.name': u'hdp-accumulo-instance', 'instance.zookeeper.timeout': u'30s'}}
2018-04-19 13:29:57,797 - Generating properties file: /usr/hdp/current/accumulo-client/conf/client.conf
2018-04-19 13:29:57,798 - File['/usr/hdp/current/accumulo-client/conf/client.conf'] {'owner': 'accumulo', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None}
2018-04-19 13:29:57,800 - Writing File['/usr/hdp/current/accumulo-client/conf/client.conf'] because contents don't match
2018-04-19 13:29:57,801 - File['/usr/hdp/current/accumulo-client/conf/log4j.properties'] {'content': ..., 'owner': 'accumulo', 'group': 'hadoop', 'mode': 0644}
2018-04-19 13:29:57,801 - TemplateConfig['/usr/hdp/current/accumulo-client/conf/auditLog.xml'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2018-04-19 13:29:57,805 - File['/usr/hdp/current/accumulo-client/conf/auditLog.xml'] {'content': Template('auditLog.xml.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2018-04-19 13:29:57,805 - TemplateConfig['/usr/hdp/current/accumulo-client/conf/generic_logger.xml'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2018-04-19 13:29:57,808 - File['/usr/hdp/current/accumulo-client/conf/generic_logger.xml'] {'content': Template('generic_logger.xml.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2018-04-19 13:29:57,809 - TemplateConfig['/usr/hdp/current/accumulo-client/conf/monitor_logger.xml'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2018-04-19 13:29:57,810 - File['/usr/hdp/current/accumulo-client/conf/monitor_logger.xml'] {'content': Template('monitor_logger.xml.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2018-04-19 13:29:57,811 - File['/usr/hdp/current/accumulo-client/conf/accumulo-metrics.xml'] {'content': StaticFile('accumulo-metrics.xml'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': 0644}
2018-04-19 13:29:57,812 - TemplateConfig['/usr/hdp/current/accumulo-client/conf/tracers'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2018-04-19 13:29:57,813 - File['/usr/hdp/current/accumulo-client/conf/tracers'] {'content': Template('tracers.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2018-04-19 13:29:57,813 - TemplateConfig['/usr/hdp/current/accumulo-client/conf/gc'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2018-04-19 13:29:57,815 - File['/usr/hdp/current/accumulo-client/conf/gc'] {'content': Template('gc.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2018-04-19 13:29:57,815 - TemplateConfig['/usr/hdp/current/accumulo-client/conf/monitor'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2018-04-19 13:29:57,816 - File['/usr/hdp/current/accumulo-client/conf/monitor'] {'content': Template('monitor.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2018-04-19 13:29:57,817 - TemplateConfig['/usr/hdp/current/accumulo-client/conf/slaves'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2018-04-19 13:29:57,818 - File['/usr/hdp/current/accumulo-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2018-04-19 13:29:57,818 - TemplateConfig['/usr/hdp/current/accumulo-client/conf/masters'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2018-04-19 13:29:57,820 - File['/usr/hdp/current/accumulo-client/conf/masters'] {'content': Template('masters.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2018-04-19 13:29:57,821 - TemplateConfig['/usr/hdp/current/accumulo-client/conf/hadoop-metrics2-accumulo.properties'] {'owner': 'accumulo', 'template_tag': None, 'group': 'hadoop'}
2018-04-19 13:29:57,827 - File['/usr/hdp/current/accumulo-client/conf/hadoop-metrics2-accumulo.properties'] {'content': Template('hadoop-metrics2-accumulo.properties.j2'), 'owner': 'accumulo', 'group': 'hadoop', 'mode': None}
2018-04-19 13:29:57,881 - Could not determine stack version for component accumulo-client by calling '/usr/bin/hdp-select status accumulo-client > /tmp/tmpmWsqI_'. Return Code: 1, Output: .
2018-04-19 13:29:58,229 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2018-04-19 13:29:58,230 - Executing hdp-select set all on 2.6.3.0-235
2018-04-19 13:29:58,230 - Execute['ambari-sudo.sh /usr/bin/hdp-select set all `ambari-python-wrap /usr/bin/hdp-select versions | grep ^2.6.3.0-235 | tail -1`'] {'only_if': 'ls -d /usr/hdp/2.6.3.0-235*'}

Command failed after 1 tries	
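For what it's worth, the SyntaxError in the traceback suggests that /usr/bin/hdp-select (a Python 2 script, per the paths in the log) is being executed by a Python 3 interpreter on these hosts. A minimal sketch reproducing the same failure class, using the offending line from the log:

```python
# hdp-select uses Python 2 "print" statements; compiling one under
# Python 3 raises the same SyntaxError seen in the Ambari log.
py2_line = 'print "ERROR: Invalid package - " + name'
try:
    compile(py2_line, "hdp-select", "exec")
    outcome = "compiled"
except SyntaxError as exc:
    outcome = exc.msg  # e.g. "Missing parentheses in call to 'print'. ..."
print(outcome)
```

Assuming that is the cause, checking what `head -n1 /usr/bin/hdp-select` points at, and whether `python` on the affected hosts resolves to Python 3, would be a reasonable next step.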