Hue, Hive, Impala, HBase suddenly not working: connection failed for all of them (Cloudera Quickstart)


I have been developing and testing on the Cloudera Quickstart VM for about a month. On Friday I was going to try some put methods on HBase, but browsing from Hue I got these errors:

#Errors accessing Hue
hadoop.hdfs_clusters.default.webhdfs_url (current value: http://localhost:50070/webhdfs/v1): Failed to access filesystem root
Hive Editor: Failed to access Hive warehouse: /user/hive/warehouse
Impala Editor: No available Impalad to send queries to.
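
All three Hue errors point at HDFS being unreachable rather than at Hue itself. A quick sanity check (a rough sketch; host and port are taken from the webhdfs_url above and may differ on your VM) is to query the WebHDFS endpoint directly:

#If the NameNode is down, this fails with "Connection refused" instead of returning a JSON listing
curl -i "http://localhost:50070/webhdfs/v1/?op=LISTSTATUS"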

#Error accessing Hbase
Api Error: java.net.SocketTimeoutException: callTimeout=0, callDuration=203: at
[stack trace, I am omitting it]
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null at

#From Hbase Log:
2016-06-19 12:06:39,306 INFO [quickstart:60000.activeMasterManager] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
2016-06-19 12:06:39,621 INFO [master/quickstart.cloudera/127.0.0.1:60000] regionserver.HRegionServer: ClusterId : 36bbf1ec-2337-4143-b2fb-fd933333cd8f
2016-06-19 12:06:40,510 FATAL [quickstart:60000.activeMasterManager] master.HMaster: Failed to become active master
java.net.ConnectException: Call From quickstart.cloudera/127.0.0.1 to quickstart.cloudera:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
[Stack trace again]
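
The "Connection refused" on quickstart.cloudera:8020 in the master log is the key line: the HBase master cannot reach the HDFS NameNode, never finishes startup, and therefore never publishes its address in ZooKeeper, which is why clients see "znode data == null". A rough check (service and port names below are stock CDH defaults, so treat them as an assumption) is:

#Is anything listening on the NameNode RPC/HTTP ports?
sudo netstat -tlnp | grep -E ':8020|:50070'
#Is the NameNode service itself running?
sudo service hadoop-hdfs-namenode status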

 

1 ACCEPTED SOLUTION


The problem was with the most recent FSImage file, which was corrupted and prevented the HDFS NameNode service from starting.

To check whether you have the same issue (a command sketch follows the list):

  1. List the services and examine which ones are failing. The HDFS NameNode will probably be there with a FAILED status.
  2. Check the open ports: 50070 (or whatever port you set in the configuration files under /etc/hdfs/conf...) won't be open, so every service that connects to the NameNode gets a "Connection Refused" error, and HBase reports the "znode data == null" error.
  3. Look at the NameNode logs under /var/logs/hdfs/*namenode*.out to find where the FSImage files are stored.
  4. Go to that directory. If there is only one FSImage file, as far as I know you're out of luck. If there is more than one:
    1. delete the most recent one,
    2. restart the service and let HDFS rebuild the correct FSImage from the edits files,
    3. then either reboot the machine or restart all of the services in the order given in the Cloudera docs.
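
Roughly, on the Quickstart VM the checks and the recovery look like the sketch below. The service names, ports, log directory, and config paths are stock CDH defaults and an assumption on my part (check dfs.namenode.name.dir in your hdfs-site.xml for the real FSImage location), and it is safer to move the suspect FSImage aside than to delete it.

#1. List the Hadoop services and see which ones report FAILED / not running
for svc in /etc/init.d/hadoop-*; do sudo "$svc" status; done

#2. Confirm the NameNode ports are not open (they won't be in this situation)
sudo netstat -tlnp | grep -E ':8020|:50070'

#3. Read the NameNode log and locate the FSImage directory
sudo less /var/log/hadoop-hdfs/*namenode*.log
grep -A1 'dfs.namenode.name.dir' /etc/hadoop/conf/hdfs-site.xml

#4. Move the newest FSImage aside, then restart and let HDFS replay the edits files
sudo ls -lt /your/dfs/name/current | head                       # hypothetical path from dfs.namenode.name.dir
sudo mv /your/dfs/name/current/fsimage_<newest_txid> /tmp/fsimage.bak   # move, don't delete
sudo service hadoop-hdfs-namenode restart
#If that is not enough, read up on 'hdfs namenode -recover' before touching anything else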

Hope this will save you time.
