<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: WebHDFS only reachable from local server in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/WebHDFS-only-reachable-from-local-server/m-p/152086#M28605</link>
    <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/10173/aminefellah.html" nodeid="10173"&gt;@Mohamed Amine FELLAH&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Please check whether a firewall is running on the source host, the destination host, or between them. Either stop it or configure the proper rules.&lt;/P&gt;&lt;P&gt;Check that master.done.local is reachable from the history server and that you can connect to port 50070. Use telnet for that:&lt;/P&gt;&lt;PRE&gt;telnet master.done.local 50070&lt;/PRE&gt;&lt;P&gt;I assume that you have already verified that running&lt;/P&gt;&lt;PRE&gt;wget &lt;A href="http://master.done.local:50070/webhdfs/v1/app-logs?op=GETFILESTATUS&amp;amp;user.name=hdfs" target="_blank"&gt;http://master.done.local:50070/webhdfs/v1/app-logs?op=GETFILESTATUS&amp;amp;user.name=hdfs&lt;/A&gt;&lt;/PRE&gt;&lt;P&gt;works from master.done.local.&lt;/P&gt;</description>
    <pubDate>Tue, 17 May 2016 16:02:52 GMT</pubDate>
    <dc:creator>rpathak</dc:creator>
    <dc:date>2016-05-17T16:02:52Z</dc:date>
    <item>
      <title>WebHDFS only reachable from local server</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/WebHDFS-only-reachable-from-local-server/m-p/152085#M28604</link>
      <description>&lt;PRE&gt;I have installed a new cluster with 4 servers, and it seems that WebHDFS is only reachable from localhost.
When I try to start the History Server, it fails and generates this log:
stderr:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 182, in &amp;lt;module&amp;gt;
    HistoryServer().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 92, in start
    self.configure(env) # FOR SECURITY
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 55, in configure
    yarn(name="historyserver")
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py", line 72, in yarn
    recursive_chmod=True
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 427, in action_create_on_execute
    self.action_delayed("create")
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 424, in action_delayed
    self.get_hdfs_resource_executor().action_delayed(action_name, self)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 265, in action_delayed
    self._assert_valid()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 249, in _assert_valid
    self.target_status = self._get_file_status(target)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 305, in _get_file_status
    list_status = self.util.run_command(target, 'GETFILESTATUS', method='GET', ignore_status_codes=['404'], assertable_result=False)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 197, in run_command
    _, out, err = get_user_call_output(cmd, user=self.run_user, logoutput=self.logoutput, quiet=False)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py", line 61, in get_user_call_output
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X GET 'http://master.done.local:50070/webhdfs/v1/app-logs?op=GETFILESTATUS&amp;amp;user.name=hdfs' 1&amp;gt;/tmp/tmp84wzpU 2&amp;gt;/tmp/tmpP5bna8' returned 7. curl: (7) Failed connect to master.done.local:50070; Connection refused
000
 stdout:
2016-05-17 09:44:53,857 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.2.0-258
2016-05-17 09:44:53,857 - Checking if need to create versioned conf dir /etc/hadoop/2.4.2.0-258/0
2016-05-17 09:44:53,857 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-05-17 09:44:53,880 - call returned (1, '/etc/hadoop/2.4.2.0-258/0 exist already', '')
2016-05-17 09:44:53,880 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-05-17 09:44:53,904 - checked_call returned (0, '')
2016-05-17 09:44:53,904 - Ensuring that hadoop has the correct symlink structure
2016-05-17 09:44:53,904 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-05-17 09:44:54,043 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.2.0-258
2016-05-17 09:44:54,043 - Checking if need to create versioned conf dir /etc/hadoop/2.4.2.0-258/0
2016-05-17 09:44:54,043 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-05-17 09:44:54,063 - call returned (1, '/etc/hadoop/2.4.2.0-258/0 exist already', '')
2016-05-17 09:44:54,064 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-05-17 09:44:54,086 - checked_call returned (0, '')
2016-05-17 09:44:54,087 - Ensuring that hadoop has the correct symlink structure
2016-05-17 09:44:54,087 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-05-17 09:44:54,088 - Group['spark'] {}
2016-05-17 09:44:54,089 - Group['hadoop'] {}
2016-05-17 09:44:54,089 - Group['users'] {}
2016-05-17 09:44:54,089 - Group['knox'] {}
2016-05-17 09:44:54,090 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 09:44:54,090 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 09:44:54,091 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 09:44:54,092 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-05-17 09:44:54,092 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 09:44:54,093 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-05-17 09:44:54,093 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-05-17 09:44:54,094 - User['mahout'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 09:44:54,094 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 09:44:54,095 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-05-17 09:44:54,095 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 09:44:54,096 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 09:44:54,097 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 09:44:54,097 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 09:44:54,098 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 09:44:54,098 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 09:44:54,099 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 09:44:54,099 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-05-17 09:44:54,101 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-05-17 09:44:54,105 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-05-17 09:44:54,105 - Group['hdfs'] {}
2016-05-17 09:44:54,105 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2016-05-17 09:44:54,106 - Directory['/etc/hadoop'] {'mode': 0755}
2016-05-17 09:44:54,118 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-05-17 09:44:54,119 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-05-17 09:44:54,130 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce &amp;amp;&amp;amp; getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-05-17 09:44:54,150 - Skipping Execute[('setenforce', '0')] due to not_if
2016-05-17 09:44:54,151 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-05-17 09:44:54,152 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-05-17 09:44:54,153 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-05-17 09:44:54,157 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-05-17 09:44:54,158 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-05-17 09:44:54,159 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-05-17 09:44:54,167 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2016-05-17 09:44:54,167 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-05-17 09:44:54,172 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-05-17 09:44:54,177 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-05-17 09:44:54,341 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.2.0-258
2016-05-17 09:44:54,341 - Checking if need to create versioned conf dir /etc/hadoop/2.4.2.0-258/0
2016-05-17 09:44:54,341 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-05-17 09:44:54,362 - call returned (1, '/etc/hadoop/2.4.2.0-258/0 exist already', '')
2016-05-17 09:44:54,362 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-05-17 09:44:54,386 - checked_call returned (0, '')
2016-05-17 09:44:54,386 - Ensuring that hadoop has the correct symlink structure
2016-05-17 09:44:54,387 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-05-17 09:44:54,410 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.2.0-258
2016-05-17 09:44:54,410 - Checking if need to create versioned conf dir /etc/hadoop/2.4.2.0-258/0
2016-05-17 09:44:54,411 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-05-17 09:44:54,434 - call returned (1, '/etc/hadoop/2.4.2.0-258/0 exist already', '')
2016-05-17 09:44:54,435 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.2.0-258 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-05-17 09:44:54,472 - checked_call returned (0, '')
2016-05-17 09:44:54,472 - Ensuring that hadoop has the correct symlink structure
2016-05-17 09:44:54,472 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-05-17 09:44:54,478 - HdfsResource['/app-logs'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://master.done.local:8020', 'user': 'hdfs', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'recursive_chmod': True, 'owner': 'yarn', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'mode': 0777}
2016-05-17 09:44:54,481 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://master.done.local:50070/webhdfs/v1/app-logs?op=GETFILESTATUS&amp;amp;user.name=hdfs'"'"' 1&amp;gt;/tmp/tmp84wzpU 2&amp;gt;/tmp/tmpP5bna8''] {'logoutput': None, 'quiet': False}
2016-05-17 09:44:54,519 - call returned (7, '')
&lt;/PRE&gt;</description>
      <pubDate>Tue, 17 May 2016 14:58:51 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/WebHDFS-only-reachable-from-local-server/m-p/152085#M28604</guid>
      <dc:creator>amine_fellah</dc:creator>
      <dc:date>2016-05-17T14:58:51Z</dc:date>
    </item>
    <item>
      <title>Re: WebHDFS only reachable from local server</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/WebHDFS-only-reachable-from-local-server/m-p/152086#M28605</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/10173/aminefellah.html" nodeid="10173"&gt;@Mohamed Amine FELLAH&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Please check whether a firewall is running on the source host, the destination host, or between them. Either stop it or configure the proper rules.&lt;/P&gt;&lt;P&gt;Check that master.done.local is reachable from the history server and that you can connect to port 50070. Use telnet for that:&lt;/P&gt;&lt;PRE&gt;telnet master.done.local 50070&lt;/PRE&gt;&lt;P&gt;I assume that you have already verified that running&lt;/P&gt;&lt;PRE&gt;wget &lt;A href="http://master.done.local:50070/webhdfs/v1/app-logs?op=GETFILESTATUS&amp;amp;user.name=hdfs" target="_blank"&gt;http://master.done.local:50070/webhdfs/v1/app-logs?op=GETFILESTATUS&amp;amp;user.name=hdfs&lt;/A&gt;&lt;/PRE&gt;&lt;P&gt;works from master.done.local.&lt;/P&gt;</description>
      <pubDate>Tue, 17 May 2016 16:02:52 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/WebHDFS-only-reachable-from-local-server/m-p/152086#M28605</guid>
      <dc:creator>rpathak</dc:creator>
      <dc:date>2016-05-17T16:02:52Z</dc:date>
    </item>
    <item>
      <title>Re: WebHDFS only reachable from local server</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/WebHDFS-only-reachable-from-local-server/m-p/152087#M28606</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/872/rahulpathak109.html" nodeid="872"&gt;@Rahul Pathak&lt;/A&gt; Thanks for your response.&lt;/P&gt;&lt;P&gt;master.done.local is reachable from the remote servers (ping test).&lt;/P&gt;&lt;P&gt;I'm running CentOS 7 on all servers, and I have disabled SELinux and the firewall on all machines.&lt;/P&gt;&lt;P&gt;Connection to port 50070 from the remote servers is refused.&lt;/P&gt;&lt;P&gt;Connection to port 50070 is refused from localhost if we go through the Ethernet interface.&lt;/P&gt;&lt;P&gt;Connection to port 50070 is accepted from localhost if we go through 127.0.0.1.&lt;/P&gt;</description>
      <pubDate>Tue, 17 May 2016 16:36:51 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/WebHDFS-only-reachable-from-local-server/m-p/152087#M28606</guid>
      <dc:creator>amine_fellah</dc:creator>
      <dc:date>2016-05-17T16:36:51Z</dc:date>
    </item>
    <item>
      <title>Re: WebHDFS only reachable from local server</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/WebHDFS-only-reachable-from-local-server/m-p/152088#M28607</link>
      <description>&lt;P&gt;To be clear, WebHDFS is running.&lt;/P&gt;</description>
      <pubDate>Tue, 17 May 2016 16:42:14 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/WebHDFS-only-reachable-from-local-server/m-p/152088#M28607</guid>
      <dc:creator>amine_fellah</dc:creator>
      <dc:date>2016-05-17T16:42:14Z</dc:date>
    </item>
    <item>
      <title>Re: WebHDFS only reachable from local server</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/WebHDFS-only-reachable-from-local-server/m-p/152089#M28608</link>
      <description>&lt;P&gt;Resolved: master.done.local resolved to 127.0.0.1 on the local machine, because /etc/hosts contained:&lt;/P&gt;&lt;P&gt;127.0.0.1 master.done.local&lt;/P&gt;&lt;P&gt;I changed it to:&lt;/P&gt;&lt;P&gt;&amp;lt;External IP Address&amp;gt; master.done.local&lt;/P&gt;&lt;P&gt;Then I restarted the NameNode, and when I restarted the History Server it worked.&lt;/P&gt;</description>
      <pubDate>Tue, 17 May 2016 17:27:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/WebHDFS-only-reachable-from-local-server/m-p/152089#M28608</guid>
      <dc:creator>amine_fellah</dc:creator>
      <dc:date>2016-05-17T17:27:12Z</dc:date>
    </item>
  </channel>
</rss>

