Created 11-30-2017 09:40 AM
Hi, I am trying to start the YARN service in Ambari, but it is failing with an error. I'm using a multi-node cluster. Please find the details of stderr and stdout below. Thanks.
stderr: /var/lib/ambari-agent/data/errors-3528.txt
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 94, in <module>
    ApplicationTimelineServer().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 44, in start
    self.configure(env) # FOR SECURITY
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 119, in locking_configure
    original_configure(obj, *args, **kw)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 55, in configure
    yarn(name='apptimelineserver')
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py", line 356, in yarn
    mode=0755
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 604, in action_create_on_execute
    self.action_delayed("create")
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 601, in action_delayed
    self.get_hdfs_resource_executor().action_delayed(action_name, self)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 328, in action_delayed
    self._assert_valid()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 287, in _assert_valid
    self.target_status = self._get_file_status(target)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 430, in _get_file_status
    list_status = self.util.run_command(target, 'GETFILESTATUS', method='GET', ignore_status_codes=['404'], assertable_result=False)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 177, in run_command
    return self._run_command(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 235, in _run_command
    _, out, err = get_user_call_output(cmd, user=self.run_user, logoutput=self.logoutput, quiet=False)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py", line 61, in get_user_call_output
    raise ExecutionFailed(err_msg, code, files_output[0], files_output[1])
resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -sS -L -w '%{http_code}' -X GET 'http://myhostname:50070/webhdfs/v1/ats/done?op=GETFILESTATUS&user.name=hdfs' 1>/tmp/tmpdOQron 2>/tmp/tmprXPUdn' returned 7. curl: (7) Failed connect to myhostname:50070; No route to host 000
stdout: /var/lib/ambari-agent/data/output-3528.txt
2017-11-30 01:59:09,238 - Stack Feature Version Info: Cluster Stack=2.5, Cluster Current Version=None, Command Stack=None, Command Version=2.5.3.0-37 -> 2.5.3.0-37 2017-11-30 01:59:09,260 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2017-11-30 01:59:09,464 - Stack Feature Version Info: Cluster Stack=2.5, Cluster Current Version=None, Command Stack=None, Command Version=2.5.3.0-37 -> 2.5.3.0-37 2017-11-30 01:59:09,473 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf User Group mapping (user_group) is missing in the hostLevelParams 2017-11-30 01:59:09,474 - Group['metron'] {} 2017-11-30 01:59:09,475 - Group['livy'] {} 2017-11-30 01:59:09,475 - Group['elasticsearch'] {} 2017-11-30 01:59:09,475 - Group['spark'] {} 2017-11-30 01:59:09,476 - Group['zeppelin'] {} 2017-11-30 01:59:09,476 - Group['hadoop'] {} 2017-11-30 01:59:09,476 - Group['kibana'] {} 2017-11-30 01:59:09,476 - Group['users'] {} 2017-11-30 01:59:09,477 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,478 - call['/var/lib/ambari-agent/tmp/changeUid.sh hive'] {} 2017-11-30 01:59:09,489 - call returned (0, '1001') 2017-11-30 01:59:09,489 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1001} 2017-11-30 01:59:09,492 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,494 - call['/var/lib/ambari-agent/tmp/changeUid.sh storm'] {} 2017-11-30 01:59:09,505 - call returned (0, '1002') 2017-11-30 01:59:09,506 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1002} 2017-11-30 01:59:09,508 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,509 - call['/var/lib/ambari-agent/tmp/changeUid.sh zookeeper'] {} 2017-11-30 01:59:09,521 - call returned (0, '1003') 2017-11-30 01:59:09,521 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1003} 2017-11-30 01:59:09,523 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,525 - call['/var/lib/ambari-agent/tmp/changeUid.sh ams'] {} 2017-11-30 01:59:09,536 - call returned (0, '1004') 2017-11-30 01:59:09,536 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1004} 2017-11-30 01:59:09,538 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,540 - call['/var/lib/ambari-agent/tmp/changeUid.sh tez'] {} 2017-11-30 01:59:09,551 - call returned (0, '1005') 2017-11-30 01:59:09,551 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': 1005} 2017-11-30 01:59:09,553 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,555 - call['/var/lib/ambari-agent/tmp/changeUid.sh zeppelin'] {} 2017-11-30 01:59:09,565 - call returned (0, '1007') 2017-11-30 01:59:09,566 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop'], 'uid': 1007} 2017-11-30 01:59:09,567 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,568 - call['/var/lib/ambari-agent/tmp/changeUid.sh metron'] {} 2017-11-30 01:59:09,579 - 
call returned (0, '1008') 2017-11-30 01:59:09,580 - User['metron'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1008} 2017-11-30 01:59:09,582 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,583 - call['/var/lib/ambari-agent/tmp/changeUid.sh livy'] {} 2017-11-30 01:59:09,594 - call returned (0, '1009') 2017-11-30 01:59:09,594 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1009} 2017-11-30 01:59:09,596 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,597 - call['/var/lib/ambari-agent/tmp/changeUid.sh elasticsearch'] {} 2017-11-30 01:59:09,608 - call returned (0, '1010') 2017-11-30 01:59:09,608 - User['elasticsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1010} 2017-11-30 01:59:09,610 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,612 - call['/var/lib/ambari-agent/tmp/changeUid.sh spark'] {} 2017-11-30 01:59:09,624 - call returned (0, '1019') 2017-11-30 01:59:09,624 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1019} 2017-11-30 01:59:09,626 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None} 2017-11-30 01:59:09,628 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,630 - call['/var/lib/ambari-agent/tmp/changeUid.sh flume'] {} 2017-11-30 01:59:09,641 - call returned (0, '1011') 2017-11-30 01:59:09,642 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1011} 2017-11-30 01:59:09,644 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,645 - call['/var/lib/ambari-agent/tmp/changeUid.sh kafka'] {} 2017-11-30 01:59:09,655 - call returned (0, '1012') 2017-11-30 01:59:09,655 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1012} 2017-11-30 01:59:09,657 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,658 - call['/var/lib/ambari-agent/tmp/changeUid.sh hdfs'] {} 2017-11-30 01:59:09,668 - call returned (0, '1013') 2017-11-30 01:59:09,669 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1013} 2017-11-30 01:59:09,671 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,673 - call['/var/lib/ambari-agent/tmp/changeUid.sh yarn'] {} 2017-11-30 01:59:09,683 - call returned (0, '1014') 2017-11-30 01:59:09,683 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1014} 2017-11-30 01:59:09,685 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,687 - call['/var/lib/ambari-agent/tmp/changeUid.sh kibana'] {} 2017-11-30 01:59:09,697 - call returned (0, '1016') 2017-11-30 01:59:09,697 - User['kibana'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1016} 2017-11-30 01:59:09,699 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': 
StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,701 - call['/var/lib/ambari-agent/tmp/changeUid.sh mapred'] {} 2017-11-30 01:59:09,710 - call returned (0, '1015') 2017-11-30 01:59:09,711 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1015} 2017-11-30 01:59:09,712 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,714 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {} 2017-11-30 01:59:09,723 - call returned (0, '1017') 2017-11-30 01:59:09,724 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1017} 2017-11-30 01:59:09,726 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,727 - call['/var/lib/ambari-agent/tmp/changeUid.sh hcat'] {} 2017-11-30 01:59:09,737 - call returned (0, '1018') 2017-11-30 01:59:09,738 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1018} 2017-11-30 01:59:09,739 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,740 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'} 2017-11-30 01:59:09,747 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if 2017-11-30 01:59:09,747 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'} 2017-11-30 01:59:09,749 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,750 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2017-11-30 01:59:09,751 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {} 2017-11-30 01:59:09,761 - call returned (0, '1017') 2017-11-30 01:59:09,762 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'} 2017-11-30 01:59:09,769 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] due to not_if 2017-11-30 01:59:09,769 - Group['hdfs'] {} 2017-11-30 01:59:09,770 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']} 2017-11-30 01:59:09,770 - FS Type: 2017-11-30 01:59:09,771 - Directory['/etc/hadoop'] {'mode': 0755} 2017-11-30 01:59:09,793 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2017-11-30 01:59:09,795 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777} 2017-11-30 01:59:09,814 - Execute[('setenforce', '0')] {'not_if': '(! 
which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'} 2017-11-30 01:59:09,828 - Skipping Execute[('setenforce', '0')] due to only_if 2017-11-30 01:59:09,828 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'} 2017-11-30 01:59:09,831 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'} 2017-11-30 01:59:09,832 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'} 2017-11-30 01:59:09,838 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'} 2017-11-30 01:59:09,841 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'} 2017-11-30 01:59:09,850 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644} 2017-11-30 01:59:09,864 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'} 2017-11-30 01:59:09,865 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755} 2017-11-30 01:59:09,866 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'} 2017-11-30 01:59:09,871 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644} 2017-11-30 01:59:09,876 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755} 2017-11-30 01:59:10,143 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2017-11-30 01:59:10,144 - Stack Feature Version Info: Cluster Stack=2.5, Cluster Current Version=None, Command Stack=None, Command Version=2.5.3.0-37 -> 2.5.3.0-37 2017-11-30 01:59:10,145 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-yarn-resourcemanager'] {'timeout': 20} 2017-11-30 01:59:10,183 - call returned (0, 'hadoop-yarn-resourcemanager - 2.5.3.0-37') 2017-11-30 01:59:10,234 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2017-11-30 01:59:10,255 - Directory['/var/log/hadoop-yarn/nodemanager/recovery-state'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'} 2017-11-30 01:59:10,257 - Directory['/var/run/hadoop-yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2017-11-30 01:59:10,258 - Directory['/var/run/hadoop-yarn/yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2017-11-30 01:59:10,258 - Directory['/var/log/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'} 2017-11-30 01:59:10,259 - Directory['/var/run/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2017-11-30 01:59:10,259 - Directory['/var/run/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2017-11-30 01:59:10,260 - Directory['/var/log/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'} 2017-11-30 
01:59:10,260 - Directory['/var/log/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'} 2017-11-30 01:59:10,261 - Directory['/var/log/hadoop-yarn'] {'owner': 'yarn', 'group': 'hadoop', 'ignore_failures': True, 'create_parents': True, 'cd_access': 'a'} 2017-11-30 01:59:10,262 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...} 2017-11-30 01:59:10,272 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml 2017-11-30 01:59:10,272 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2017-11-30 01:59:10,293 - XmlConfig['hdfs-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'owner': 'hdfs', 'configurations': ...} 2017-11-30 01:59:10,301 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml 2017-11-30 01:59:10,301 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2017-11-30 01:59:10,338 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...} 2017-11-30 01:59:10,344 - Generating config: /usr/hdp/current/hadoop-client/conf/mapred-site.xml 2017-11-30 01:59:10,345 - File['/usr/hdp/current/hadoop-client/conf/mapred-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2017-11-30 01:59:10,371 - Changing owner for /usr/hdp/current/hadoop-client/conf/mapred-site.xml from 1015 to yarn 2017-11-30 01:59:10,371 - XmlConfig['yarn-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...} 2017-11-30 01:59:10,376 - Generating config: /usr/hdp/current/hadoop-client/conf/yarn-site.xml 2017-11-30 01:59:10,377 - File['/usr/hdp/current/hadoop-client/conf/yarn-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2017-11-30 01:59:10,439 - XmlConfig['capacity-scheduler.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...} 2017-11-30 01:59:10,444 - Generating config: /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml 2017-11-30 01:59:10,444 - File['/usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2017-11-30 01:59:10,453 - Changing owner for /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml from 1013 to yarn 2017-11-30 01:59:10,453 - Directory['/hadoop/yarn/timeline'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'} 2017-11-30 01:59:10,453 - Directory['/hadoop/yarn/timeline'] {'owner': 'yarn', 'group': 'hadoop', 
'create_parents': True, 'cd_access': 'a'}
2017-11-30 01:59:10,454 - HdfsResource['/ats/done'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://slot2:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'change_permissions_for_parents': True, 'owner': 'yarn', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0755}
2017-11-30 01:59:10,456 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://slot2:50070/webhdfs/v1/ats/done?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpdOQron 2>/tmp/tmprXPUdn''] {'logoutput': None, 'quiet': False}
2017-11-30 01:59:10,748 - call returned (7, '')
Command failed after 1 tries
Created 11-30-2017 09:47 AM
We see the error as:
Failed connect to myhostname:50070; No route to host
So please check, on the host where the YARN service startup is failing, whether that host is able to resolve "myhostname" (is this your NameNode hostname / FQDN?).
Are you using the correct hostname for your NameNode?
Your log shows two different hostnames for your NameNode; which one is correct? Please correct your YARN configs to point to the correct NameNode hostname.
slot2:50070 ---> this one: http://slot2:50070/webhdfs/v1/ats/done?op=GETFILESTATUS&user.name=hdfs
myhostname:50070 ---> or this one: Failed connect to myhostname:50070; No route to host
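As a quick way to confirm which NameNode address the client configs on the failing host actually use (a sketch based only on the conf dir and property names that appear in your log), you can grep the site files:
# grep -A1 -E 'fs.defaultFS|dfs.namenode.http-address' /usr/hdp/current/hadoop-client/conf/core-site.xml /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
The <value> lines printed should contain the NameNode hostname that actually resolves from that host.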
.
Please check the following on the YARN host:
# ping myhostname
# cat /etc/hosts
# nc -v myhostname 50070
(OR)
# telnet myhostname 50070
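You can also rerun the exact WebHDFS call that Ambari issued (taken verbatim from your stderr, so no new endpoint is assumed), from the host where the App Timeline Server start failed:
# curl -sS -L -w '%{http_code}' -X GET 'http://myhostname:50070/webhdfs/v1/ats/done?op=GETFILESTATUS&user.name=hdfs'
If the network path is fine you should get an HTTP status code back instead of "curl: (7) ... No route to host".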
.
On the NameNode host, please check whether the NameNode is running fine. Are there any errors/exceptions in the NameNode logs?
# less /var/log/hadoop/hdfs/hadoop-.....namenode...log
Has it opened port 50070 properly or not?
# netstat -tnlpa | grep 50070
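To double-check which address/port the NameNode HTTP server is configured to listen on (assuming the hdfs client command is available on that host), you can query the config directly:
# hdfs getconf -confKey dfs.namenode.http-address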
Also, is the "iptables" firewall disabled on your NameNode host?
# hostname -f
# service iptables stop
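If these hosts run RHEL/CentOS 7, the firewall may be managed by firewalld rather than the iptables service (an assumption, since the OS version is not shown in this thread); in that case check and stop it with:
# systemctl status firewalld
# systemctl stop firewalld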
.
Created 11-30-2017 10:13 AM
Hi Jay, thanks for replying. myhostname is not really my hostname. I kept it confidential.
1) I ran the connection test ('nc -v myhostname 50070'). This is the result.
[root@myhost ~]# nc -v myhostname 50070
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to myhostip:50070.
.
HTTP/1.1 400 Bad Request
Connection: close
Server: Jetty(6.1.26.hwx)
2) When I grep for 50070:
[root@myhost hdfs]# netstat -tnlpa | grep 50070
tcp        0      0 myhostip:50070      0.0.0.0:*           LISTEN      17042/java
tcp        0      0 myhostip:50070      myhostip:53422      TIME_WAIT   -
tcp        0      0 myhostip:53080      myhostip:50070      CLOSE_WAIT  194862/nc
tcp        0      0 myhostip:50070      myhostip:53420      TIME_WAIT   -
tcp        0      0 myhostip:50070      myhostip:53424      TIME_WAIT   -
tcp        0      0 myhostip:50070      myhostip:53426      TIME_WAIT   -
tcp        0      0 myhostip:50070      myhostip:53440      TIME_WAIT   -
tcp        0      0 myhostip:50070      myhostip:53418      TIME_WAIT   -