
All Services Running, Views say Cluster is Not Connected


So I just very recently installed HDP 2.3.4.0 using the Ambari Install wizard and a local repository. I installed on 5 nodes according to the default/suggested options in the wizard. This is my first experience with Ambari and the HDP environment, so I am a bit lost.

After a bit of bug fixing, all of the services are running (dashboard-aug31.png) on all 5 nodes, with no alerts. Yet the widgets all say n/a or load indefinitely, and all of the views are empty with messages like "cluster not connected", "NullPointerException", etc. Obviously there is a large flaw in my setup, and I don't know how to figure out what it is or how to fix it. I can't find anyone else posting about the same problem. Does anyone have any ideas?

I haven't started actually using it yet, so there is no data anywhere. Here are screenshots of the views:

yarn-queue-manager.png smartsense-view.png hive-view.png tez-view.png

1 ACCEPTED SOLUTION


It was a proxy issue. After configuring the proxy properly and restarting everything, the widgets worked again.
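
For anyone who hits the same thing, here is a minimal sketch of the kind of proxy configuration involved, assuming a proxy at proxy.example.com:3128. The file path, hostnames, and the no_proxy list are placeholders, so check the Ambari documentation for your version:

```bash
# Sketch only: point the Ambari Server JVM at the HTTP proxy by adding flags to
# AMBARI_JVM_ARGS in /var/lib/ambari-server/ambari-env.sh (path may vary by version):
#   -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=3128
#   -Dhttp.nonProxyHosts="*.cluster.local|localhost"   # keep cluster hosts off the proxy

# Shell/agent side: export proxy variables, but exclude the cluster's own hosts,
# otherwise intra-cluster calls (widgets, views) get routed through the proxy and fail.
export http_proxy=http://proxy.example.com:3128
export no_proxy=localhost,127.0.0.1,.cluster.local

# Restart Ambari Server, then restart the affected services from the Ambari UI.
ambari-server restart
```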


5 REPLIES

Super Guru

@Savanna Endicott

You can try the steps below:

1. Log in to Ambari.

2. Click the "admin" dropdown -> Manage Ambari -> Views, then either add a new view or check the configuration of an existing view (a command-line way to check the same thing is sketched after the screenshots).

Please find the screenshots below:

7143-screen-shot-2016-08-31-at-41749-pm.png

7144-screen-shot-2016-08-31-at-41818-pm.png
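
If it helps, the registered views and their instances can also be listed through the Ambari REST API. This is just a sketch; the host, credentials, view name, and version below are placeholders:

```bash
# List all views registered on this Ambari server (placeholder host and credentials)
curl -u admin:admin http://ambari-host.example.com:8080/api/v1/views

# Drill into the instances of one view, e.g. a Hive view (name/version are placeholders)
curl -u admin:admin \
  http://ambari-host.example.com:8080/api/v1/views/HIVE/versions/1.0.0/instances
```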


@Sagar Shimpi

I looked at the options for configuring the existing views, but I don't think this will fix the issue; based on the errors, I think it is an HDFS connection/installation problem and not just a UI issue.

Unless you have an idea for a configuration change that could fix the cluster connection and the HDFS recognition.

Master Guru

It seems something is wrong with HDFS. Can you try "hdfs dfs -ls /"? If it doesn't work, go to HDFS -> Configs and check "NameNode directories" and "DataNode directories". Remove any "unwelcome" members from there, like "/tmp" (Ambari will suggest volumes on all mount points). Then make sure the remaining directories are owned by user "hdfs". [NN dirs must exist on the NN and Secondary NN nodes; DN dirs must exist on all DataNodes.] Finally, restart HDFS and check the NN and DN logs in /var/log/hadoop/hdfs.
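
A rough sequence of those checks from a shell might look like the following. The directory paths are only examples of a default HDP layout, and the hdfs:hadoop ownership is an assumption, so substitute whatever your NameNode/DataNode directory settings actually list:

```bash
# 1. Basic HDFS sanity check, run as the hdfs user
sudo -u hdfs hdfs dfs -ls /

# 2. Inspect the local NameNode/DataNode directories (example default HDP paths)
ls -ld /hadoop/hdfs/namenode /hadoop/hdfs/data

# Fix ownership if they are not owned by the hdfs user
chown -R hdfs:hadoop /hadoop/hdfs/namenode /hadoop/hdfs/data

# 3. After restarting HDFS from Ambari, look for errors in the NN/DN logs
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log
```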


It seems that there is only one directory in each, they are not "unwelcome", and they are owned by hdfs. But thank you for the idea!

[EDIT]: "hdfs dfs -ls /" did show me 8 items, with no errors.
