11-10-2016 04:47 AM - last edited on 11-10-2016 05:16 AM by cjervis
I have installed Cloudera 5.9 on a single-node cluster (OS: RHEL 7.2). The cluster was functional and working fine.
Suddenly, while running a Spark job, it went into bad health.
Now any hadoop command gives the error below.
Ex: sudo -u hdfs hadoop fs -ls /tmp
:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
I tried changing the port from 8020 to 9000, but it still returns the same error.
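"Connection refused" means no process is listening on that port, so switching the client to 9000 won't help unless the NameNode itself is up. A minimal diagnostic sketch, assuming the default CDH client config path (adjust it if your install differs):

```shell
# Sketch, assuming the default CDH client config location; adjust if yours differs.
CONF=/etc/hadoop/conf/core-site.xml
if [ -f "$CONF" ]; then
  # fs.defaultFS tells clients which host:port the NameNode should be on
  grep -A1 'fs.defaultFS' "$CONF"
else
  echo "core-site.xml not found at $CONF"
fi

# "Connection refused" means nothing is listening, so check the port directly:
(ss -ltn 2>/dev/null || netstat -ltn 2>/dev/null) | grep -E ':(8020|9000)[[:space:]]' \
  || echo "nothing listening on 8020/9000 - the NameNode process is likely down"
```

If nothing is listening, the fix is to get the NameNode process started (or find out from its log why it died), not to change the port.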
The jps command shows:
The Cloudera Manager UI is running, but all the components show Unknown health.
The following messages are seen:
Request to the Service Monitor failed. This may cause slow page responses. View the status of the Service Monitor.
Request to the Host Monitor failed. This may cause slow page responses. View the status of the Host Monitor.
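Those two warnings usually mean the Cloudera Management Service roles (Service Monitor, Host Monitor) or the CM agent are down. A quick check, assuming the standard Cloudera Manager 5.x service names and log paths on RHEL 7 (adjust if your layout differs):

```shell
# Sketch, assuming CM 5.x unit names on RHEL 7; adjust if your install differs.
for unit in cloudera-scm-server cloudera-scm-agent; do
  state=$(systemctl is-active "$unit" 2>/dev/null) || state="unknown"
  echo "$unit: $state"
done

# The Service Monitor and Host Monitor are Cloudera Management Service roles;
# their logs live under this directory by default:
ls /var/log/cloudera-scm-firehose/ 2>/dev/null || echo "no Service/Host Monitor logs found"
```

If the agent or the monitor roles are stopped, restarting them from the CM UI (Cloudera Management Service > Restart) or via the service commands above typically clears the Unknown health status.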
05-31-2018 11:51 PM
Hello guys, I hit the same issue while running Apache Hadoop on my machine. When I tried to access HDFS, I got the following error:
ls: Call From chinni/172.17.0.1 to chinni:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Can anybody suggest a solution?
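In this trace, "chinni" resolves to 172.17.0.1, which is typically a Docker bridge address rather than the machine's real interface; that, or a NameNode that never started, are the usual suspects. A sketch of the two checks:

```shell
# Sketch: the trace shows "chinni" resolving to 172.17.0.1 (typically a Docker
# bridge address) - first verify how the hostname actually resolves.
getent hosts "$(hostname)" || echo "$(hostname) does not resolve - check /etc/hosts"

# "Connection refused" on 9000 usually means the NameNode never started.
# jps (ships with the JDK) lists running Hadoop daemons:
jps 2>/dev/null | grep -i namenode || echo "NameNode not running - check its log, then try start-dfs.sh"
```

If the NameNode is missing from jps, its log (under $HADOOP_HOME/logs on a tarball install) will say why it failed to start.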
04-18-2019 10:39 AM
A common cause of this error is that the Hadoop services aren't running.
To start the Hadoop services in the Cloudera QuickStart VM, you can use the commands below:
sudo service hadoop-hdfs-datanode start
sudo service hadoop-hdfs-journalnode start
sudo service hadoop-hdfs-namenode start
sudo service hadoop-hdfs-secondarynamenode start
sudo service hadoop-httpfs start
sudo service hadoop-yarn-nodemanager start
sudo service hadoop-yarn-resourcemanager start
sudo service hadoop-mapreduce-historyserver start
and then retry your hadoop commands.
Hope it works.
For more details, visit https://wiki.apache.org/hadoop/ConnectionRefused
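After starting the services, it's worth confirming the key daemons actually came up before retrying. A sketch using the same QuickStart VM service names as above:

```shell
# Sketch: verify the key daemons came up (QuickStart VM service names).
for svc in hadoop-hdfs-namenode hadoop-hdfs-datanode hadoop-yarn-resourcemanager; do
  sudo service "$svc" status 2>/dev/null || echo "$svc is not running"
done

# Then retry the failing command:
sudo -u hdfs hadoop fs -ls /tmp 2>/dev/null || echo "HDFS still unreachable"
```

If a daemon refuses to stay up, its log under /var/log/hadoop-hdfs/ (or /var/log/hadoop-yarn/) will show the startup failure.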