
HDFS port 8020 not accessible from outside.


I followed all the instructions to set up a Hadoop 3.0 cluster with Ambari 2.7, but the HDFS port (8020) is not accessible from outside the server.

1. The output of "netstat -tulapn | grep 8020" from inside the server

tcp 0 0 127.0.1.1:8020 0.0.0.0:* LISTEN 11892/java

tcp 0 0 127.0.1.1:8020 127.0.0.1:52128 ESTABLISHED 11892/java

tcp 0 0 127.0.0.1:55998 127.0.1.1:8020 TIME_WAIT -

tcp 0 0 127.0.0.1:52128 127.0.1.1:8020 ESTABLISHED 11608/java

tcp 0 0 127.0.0.1:56118 127.0.1.1:8020 ESTABLISHED 13891/java

tcp 0 0 127.0.1.1:8020 127.0.0.1:56118 ESTABLISHED 11892/java

2. The output of "nc -zv mighadoop01.mydomain 8020" from outside.

nc: connectx to mighadoop01.mydomain port 8020 (tcp) failed: Connection refused

3. The output of "nc -zv mighadoop01.mydomain 8020" from inside the server.

Connection to mighadoop01.mydomain 8020 port [tcp/*] succeeded!

4. Server's /etc/hosts file

127.0.0.1       localhost localhost.localdomain

::1             ip6-localhost ip6-loopback

172.31.16.140   mighadoop01 mighadoop01.mydomain
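
A quick way to double-check what the NameNode hostname actually resolves to on the server (a minimal check assuming standard Linux tools; per the hosts file above it should come back as 172.31.16.140, not a loopback address like the 127.0.1.1 seen in the netstat output):

getent hosts mighadoop01.mydomain    # name resolution as the OS sees it
hostname -i                          # IP the local hostname resolves to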

Any pointers would be much appreciated.

1 ACCEPTED SOLUTION

Expert Contributor

Please check the value of the following property in hdfs-site.xml:

<property>
    <name>dfs.namenode.rpc-address</name>
    <value></value>
</property>

If it is set as "server hostname:8020", then ensure that the server hostname resolves to the correct IP address.
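
For reference, with the hostname from the question the property would normally look something like this (a sketch, not the poster's actual configuration), and hdfs getconf can confirm the value the NameNode actually picks up:

<property>
    <name>dfs.namenode.rpc-address</name>
    <value>mighadoop01.mydomain:8020</value>
</property>

hdfs getconf -confKey dfs.namenode.rpc-address    # prints the effective RPC address

If the NameNode should listen on all interfaces regardless of what the hostname resolves to, dfs.namenode.rpc-bind-host can additionally be set to 0.0.0.0.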


11 REPLIES



Yes. From outside the server, I can connect to the other ports.

nc -zv mighadoop01.mydomain 8080

Output:

found 0 associations
found 1 connections:
     1: flags=82<CONNECTED,PREFERRED>
        outif utun1
        src 172.141.0.6 port 61294
        dst 172.31.16.140 port 8080
        rank info not available
        TCP aux info available
Connection to mighadoop01.mydomain port 8080 [tcp/http-alt] succeeded!

Expert Contributor

The bind address is the main issue: port 8020 is bound to 127.0.1.1, while your other working ports are correctly bound to 172.31.16.140 (e.g. :9000). Please check the dfs.namenode.rpc-address property in hdfs-site.xml.
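
A quick way to compare the bindings side by side on the NameNode host (a sketch assuming ss from iproute2 is installed; the netstat command from the question works just as well):

ss -tlnp | grep -E '8020|8080|9000'
# 127.0.1.1:8020 or 127.0.0.1:8020    -> reachable only over loopback
# 172.31.16.140:... or 0.0.0.0:...    -> reachable from other hosts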


My dfs.namenode.rpc-address is below:

<property>
      <name>dfs.namenode.rpc-address</name>
      <value>mighadoop01.mydomain:8020</value>
</property>


This issue is now fixed. The problem was that when I changed the rpc-address, I was only restarting the Ambari server and agent, not all of the HDFS components. Thanks for all your suggestions.
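
For anyone hitting the same thing: the restart of all HDFS components can be done from the Ambari UI (HDFS > Service Actions > Restart All) or via Ambari's REST API. A rough sketch of the API calls, with placeholder credentials and cluster name and Ambari on its default port 8080:

# stop HDFS
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop HDFS"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://mighadoop01.mydomain:8080/api/v1/clusters/CLUSTER_NAME/services/HDFS

# start HDFS again once the stop request has finished
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start HDFS"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://mighadoop01.mydomain:8080/api/v1/clusters/CLUSTER_NAME/services/HDFS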

Super Guru

Please confirm SELinux is disabled and the firewall is off/disabled.
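
Quick ways to confirm both (a sketch assuming a RHEL/CentOS-style host; on Ubuntu the firewall check would be ufw status instead):

getenforce                    # should print Disabled or Permissive
systemctl status firewalld    # should show inactive/disabled
iptables -L -n                # should contain no rules blocking port 8020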


Yes. SELinux and the firewall are both off/disabled.

Master Guru

Are you running on AWS or another cloud provider? They often block ports.

Is your domain set up properly?

Is it available in DNS?

Are the machines able to communicate over other ports?

Is your /etc/hosts setup correct?

Is there an external firewall between them?

Can you SSH between them?
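
A few quick command-line checks for the questions above (a sketch; the security group ID is a placeholder and the last command assumes the AWS CLI is configured):

nslookup mighadoop01.mydomain                               # DNS resolution
nc -zv mighadoop01.mydomain 22                              # SSH port reachability
aws ec2 describe-security-groups --group-ids sg-xxxxxxxx    # inbound rules on the instance's security group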


Yes to all your questions. And yes, we are on AWS. Machines can communicate over other ports. Please see my response to the previous answer.