Not able to access HDFS, getting Connection exception.

New Contributor

I started the Cloudera VM normally, but when I try to list the files in HDFS, I get a connection exception, as follows:

 

[cloudera@quickstart ~]$ hadoop fs -ls /user/
ls: Call From quickstart.cloudera/127.0.0.1 to quickstart.cloudera:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

 

I guess this is because the Hadoop services are not running, but can somebody please explain why I am getting this error when I start the Cloudera VM as required?

 

12 Replies

Master Collaborator

All the basic Hadoop services should be running when you start the VM. Port 8020 is for the hadoop-hdfs-namenode service, so my guess is that service has failed and just needs to be restarted.

 

You can check the status of a service with

service <service-name> status

and you can restart a service with

service <service-name> restart

So 'service hadoop-hdfs-namenode restart' may be all you need. Also check the hadoop-hdfs-datanode service as it may also need to be restarted.

 

The services should have been running, so if they're not it means something went wrong. If you're curious or if you continue to have a problem, have a look at the NameNode logs in /var/log/hadoop-hdfs for anything that looks like a fatal error and post it back here.
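If it helps, here is one way to pull the relevant lines out of those logs (a rough sketch only; it assumes the default log location mentioned above, and the exact file name will vary with your hostname):

# List the NameNode log files, newest first
ls -lt /var/log/hadoop-hdfs/ | head

# Show recent FATAL/ERROR lines from the NameNode log
grep -E 'FATAL|ERROR' /var/log/hadoop-hdfs/hadoop-hdfs-namenode-*.log | tail -n 20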

 

New Contributor

My datanode service is running fine, but the namenode service is not running.

I tried to restart it, but the restart fails:

 

[root@quickstart cloudera]# service hadoop-hdfs-datanode status
Hadoop datanode is running                                 [  OK  ]
[root@quickstart cloudera]# service hadoop-hdfs-namenode restart
no namenode to stop
Stopped Hadoop namenode:                                   [  OK  ]
starting namenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-namenode-quickstart.cloudera.out
Failed to start Hadoop namenode. Return value: 1           [FAILED]

 

Please advise.

New Contributor

I am facing the same issue. Please help me if you have found a solution to this.

Explorer

Did you resolve the issue? I am facing the same problem when trying to execute a command, even after starting the service and seeing the status report OK.

New Contributor

First, please check the status of the service using this command:

sudo service hadoop-hdfs-<service_name> status;

e.g., sudo service hadoop-hdfs-namenode status

If the status is stopped, please try to start it using the command below:

sudo service hadoop-hdfs-<service_name> start;

If it's running, first stop it and then restart it:

sudo service hadoop-hdfs-<service_name> stop;

sudo service hadoop-hdfs-<service_name> restart;
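For the namenode, which is the service failing in this thread, the full sequence would look roughly like this (a sketch only; it assumes the same packaged init-script names used above):

# Check whether the init script thinks the NameNode is up
sudo service hadoop-hdfs-namenode status

# Start it if stopped, or restart it if it is running but unhealthy
sudo service hadoop-hdfs-namenode start
sudo service hadoop-hdfs-namenode restart

# Confirm HDFS answers again
hadoop fs -ls /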

 

Hope it will work for you.

Explorer
Thanks for the response; unfortunately this did not work for me. This is what I tried.
First I checked the status and it was not running, so I started the service with
sudo service hadoop-hdfs-datanode start
Then I tried hadoop fs -ls /
This gave me the same error as before. Do I also need to start a namenode or something? I'm thinking I shouldn't, because I am not in control of the namenodes, and on other coworkers' computers it just works. Any suggestions are appreciated.

Explorer
Hi, thanks for the docs. I actually needed to start it from Cloudera Manager; I could not do it from the command line, which must have something to do with my setup.

Champion
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: 0.0.0.0/0.0.0.0:8022

From the above error, it is clear that the external datanode is having trouble connecting to the NameNode.

You can do one thing. 

 

Check the status of the NameNode you are connecting to with:

sudo service hadoop-hdfs-namenode status
sudo service hadoop-hdfs-secondarynamenode status

If it has not started, you can start it by replacing 'status' with 'start'. If you don't have authorization, you should contact your Hadoop admin. Also, please check the same for the SecondaryNameNode.
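As an extra sanity check, you can also confirm that the Hadoop daemons are actually up and listening on the RPC ports seen in this thread (a rough sketch; it assumes the default 8020/8022 ports and that you are on the NameNode host):

# List running Hadoop Java daemons; a healthy HDFS shows NameNode and DataNode
sudo jps

# Confirm something is listening on the NameNode RPC ports
sudo netstat -tlnp | grep -E ':8020|:8022'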

 

Thanks

Explorer

I have the same problem; the tail command shows this output:

 

2015-08-06 07:47:26,459 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: 0.0.0.0/0.0.0.0:8022
2015-08-06 07:47:32,462 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8022. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-08-06 07:47:33,463 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8022. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-08-06 07:47:34,464 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8022. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-08-06 07:47:35,465 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8022. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-08-06 07:47:36,466 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8022. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-08-06 07:47:37,467 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8022. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-08-06 07:47:38,468 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8022. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-08-06 07:47:39,469 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8022. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-08-06 07:47:40,471 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8022. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-08-06 07:47:41,472 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8022. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-08-06 07:47:41,473 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: 0.0.0.0/0.0.0.0:8022

Before I noticed this problem I updated the CentOS image with sudo yum update, about 3 GB of new data...

 

Is there any way to see what's going on with a graphical user interface?

 

 

New Contributor

You can check the status of a service with

sudo service <service-name> status

and you can restart a service with

sudo service <service-name> restart

If you run the above commands without sudo, you might get an error message like "Error: Root User required".
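If it helps, a quick way to check all of the HDFS services mentioned in this thread at once (a sketch, assuming the QuickStart VM's packaged init-script names):

# Print the status of each HDFS service in turn
for svc in hadoop-hdfs-namenode hadoop-hdfs-secondarynamenode hadoop-hdfs-datanode; do
    sudo service "$svc" status
done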

New Contributor
Try hdfs dfs -ls /