
HDFS NameNode Web UI connection failed to http://ip......internal:50070 (urlopen error [Error 111] connection refused)

Expert Contributor

I keep getting this error and have no idea why. Please help.

4 REPLIES

Rising Star

You might need to provide a little further information.

Do you have NameNode HA?

Is there a port listening on 50070 on that node (netstat -plant | grep 50070)? Is there any information in the NameNode logs (/var/log/hadoop/hdfs/)?
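For example, a quick check on the NameNode host could look like this (the log file name below follows the usual HDP naming convention and may differ on your system):

netstat -plant | grep 50070
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-$(hostname).log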

Expert Contributor

Thank you, Cdraper, for getting back to me.

1. I don't have HA set up, but I think HDP requires a Secondary NameNode. I haven't done any configuration for it yet, but the SNameNode is on the list to start.

2. Do you think an HBase space issue could cause this problem? My assumption was that HBase stores all the metadata for the NameNode and the cluster, so if HBase has a problem, the NameNode can't start, and neither can the SNameNode, then HDFS, then the entire cluster?

3. If I use an m4.xlarge instance on AWS, I end up with warnings on hbase_master_heapsize, hbase_collector_heapsize, etc. If I change to the recommended values, a new set of warnings comes up; one time I changed them 7 times and never got rid of the warnings. If I ignore the warnings, my NameNode never, never starts.

4. If I use m4.large, I don't get these heap-size warnings; instead I get warnings about packages not being installed. I tried to install them manually on the host, but the install just doesn't execute, or yum doesn't have the package.

I have kind of given up on installing HDP 2.4 on RHEL 6/7 on AWS/EC2, so what do you suggest? Should I try Ubuntu or CentOS, and which version? The stable HDP version is 2.4.1.0, so Ubuntu 12?

After all these experiments, I just feel HDP and AWS are not compatible, though I could be wrong. Either the AWS/EC2 root volume is too small (you can resize it during instance creation, but you don't get the size you define), or, if the instance has 8 GB of RAM or more, HDP gets lost configuring it, which is why there are all these heap-size issues....

Sorry for the long story, and thank you for your help.

Robin

Rising Star

1) The Secondary NameNode stores a second copy of the fsimage and should be up, ideally hosted on a second node. (This is not HA.)
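For example, once the SNameNode is running you can check that it is listening on its default web UI port (50090 on HDP 2.x; the host name below is a placeholder):

curl -sI http://<snamenode-host>:50090/ | head -n 1
hdfs getconf -confKey dfs.namenode.secondary.http-address   # shows the configured address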

2) No, HBase does not store any information for the NameNode or any other service, as far as I am aware. We do have an embedded HBase server for the Metrics system, but that is outside the scope of this conversation.

3) Here is something important you need to know: whatever you allocate as heap size for a Java program will be claimed at run time. Simple example: if you have 5 apps each assigned a 1 GB heap on a 4 GB system, 4 will start but the 5th will fail because it cannot allocate the RAM.
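As a rough sketch of what that heap setting translates to in hadoop-env.sh (the exact template line is managed by Ambari and varies by HDP version; -Xms/-Xmx are the standard JVM flags):

# NameNode heap ends up as JVM flags, e.g. a 1 GB heap:
export HADOOP_NAMENODE_OPTS="-Xms1024m -Xmx1024m ${HADOOP_NAMENODE_OPTS}"
# Budget check on a 4 GB node: 5 services x 1 GB heap = 5 GB requested, more than the RAM available.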

4) Check that you have HDP.repo and Ambari.repo in /etc/yum.repos.d/. RedHat/CentOS 6 is not a problem at all; in my personal opinion I would stick with it, as there is much more OS-specific detail on HDP for that platform. Other OSes are also fine, but for beginners I would stick with CentOS 6/7.
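A quick way to verify, assuming the stock repo file names (yours may be named slightly differently):

ls /etc/yum.repos.d/ | grep -iE 'hdp|ambari'
yum repolist enabled | grep -iE 'hdp|ambari'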

----

How you should approach this:

Stop everything from Ambari. Start ZooKeeper on 1 or 3 nodes, depending on your setup, but not 2.

The NameNode usually likes ZooKeeper to be up before it starts. Now start the NameNode and SNameNode, and attach any failure logs here as an attachment. HDFS is the first system that needs to be up. I assume you have not installed Ranger at this point? If you have, remove it; it will complicate things at this stage. If I were learning all over again, I would start with just HDFS/ZooKeeper/YARN/MapReduce, get those working on a single node, and do some tutorials. Everything else builds on top of this and can be added one service at a time.
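A minimal command-line sanity check for that order (ports and log path are the HDP defaults; adjust to your layout):

echo ruok | nc <zookeeper-host> 2181      # ZooKeeper should reply "imok"
tail -f /var/log/hadoop/hdfs/hadoop-hdfs-namenode-$(hostname).log   # watch for the failure to attach here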

Expert Contributor

Thank you, Cdraper. I finally used this combination: RHEL 7.3 (m4.large), Ambari 2.4.10, and HDP 2.5, installed on 7 nodes on AWS. The only issue I had to deal with was changing the NameNode Java heap size (to 4 GB in my case); then everything worked.
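With the NameNode up, the original symptom can be re-checked directly (the host name below stands in for the internal address from the question title):

curl -sI http://<namenode-host>:50070/ | head -n 1    # expect an HTTP 200 response once the UI is up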

Your info is still very helpful. I am going to set up a 3-node Kafka cluster and MongoDB from here. I will keep you posted.

Thanks again.

Robin