Support Questions
Find answers, ask questions, and share your expertise

cloudera manager could not connect to host

I just installed Cloudera Manager on my Ubuntu 14 machine using Path A, the automated installation by Cloudera Manager. I got to the step where I specify hosts. I used 'ifconfig' to determine my IP address, entered it in the list, and clicked Search. It found my machine but returned a warning: "Could not connect to host." At the top it says, "1 hosts scanned, 0 running SSH."


As a quick guess, I tried running 'ssh-agent' from my terminal, but unfortunately Cloudera Manager still could not connect.


I ran 'sudo netstat -tulpn' to confirm that nothing is listening on port 22. My question is: what process is supposed to be listening on that port? The cloudera-scm-agent?


If that is the case, why did the instructions not tell me to start the agent first? 


I checked my list of services; 'cloudera-scm-agent' was not installed by the automatic installer, so I am not sure what step I missed here.


Can somebody please tell me what I am doing wrong here?


Re: cloudera manager could not connect to host

New Contributor

Install openssh-server and it will work:

sudo apt-get install openssh-server 
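To see why this fixes it: the host scan only succeeds when something is accepting TCP connections on the SSH port, which is exactly what openssh-server provides. As a minimal sketch (a hypothetical helper, not Cloudera Manager's actual code), you can reproduce that reachability check with Python's standard library:

```python
import socket

def is_port_open(host, port=22, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds.

    Mimics the reachability test behind the "N running SSH" count
    (hypothetical helper, not Cloudera Manager's own code).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After installing openssh-server, is_port_open("127.0.0.1") should
# report True on your own machine; before installing it, False.
```

If this returns False for the address you entered in the host list, the scan will keep reporting "0 running SSH" no matter what else you configure.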

Re: cloudera manager could not connect to host

New Contributor

Same issue for me. I ran 'sudo apt-get install openssh-server' on my host and it installed successfully, but the same issue keeps happening on the "Specify hosts for your CDH cluster installation" page.

It still shows:

1 hosts scanned, 0 running SSH

with the result:

Could not connect to host.


Re: cloudera manager could not connect to host

New Contributor
Thanks, that helped.

Re: cloudera manager could not connect to host

Expert Contributor

Hi @michaelscottknapp, @Yuvi,


1) Have you tried connecting from the master to any datanode via SSH?


2) If you cannot connect, try this solution:

   2.1) In the /etc/hosts file you need to specify the IP first and the canonical name after, for example (on the master node; replace the <...> placeholders with your nodes' real IP addresses):

        127.0.0.1      localhost localhost.localdomain localhost4 localhost4.localdomain4
        ::1            localhost localhost.localdomain localhost6 localhost6.localdomain6
        <master-ip>    master
        <worker1-ip>   worker1
        <worker2-ip>   worker2


   2.2) Generate keys on each node:

        ssh-keygen -t rsa -P ""


   2.3) Copy keys (ssh-keygen -t rsa writes the public key to ~/.ssh/id_rsa.pub by default):

       Only on the master node:

          cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
          chmod 700 ~/.ssh
          chmod 600 ~/.ssh/authorized_keys
          ssh-copy-id -i $HOME/.ssh/id_rsa.pub root@master
          ssh-copy-id -i $HOME/.ssh/id_rsa.pub root@worker1
          ssh-copy-id -i $HOME/.ssh/id_rsa.pub root@worker2

       Only on the datanodes:

          ssh-copy-id -i $HOME/.ssh/id_rsa.pub root@master
          ssh-copy-id -i $HOME/.ssh/id_rsa.pub root@workerX   (where X is 1 or 2)



Now you can connect from the master to the datanodes, from the datanodes to the master, and between the worker nodes.
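Once the keys are in place, you can verify passwordless SSH without typing anything. A hedged sketch (the hostnames worker1/worker2 are just the example names from the steps above): BatchMode makes ssh fail instead of prompting for a password, so a zero exit status proves key-based login works:

```python
import shutil
import subprocess

def can_ssh(host, user="root", timeout=5):
    """Return True if passwordless SSH to user@host succeeds.

    BatchMode=yes disables password prompts, so this only succeeds
    when key-based authentication is set up correctly.
    """
    if shutil.which("ssh") is None:
        return False  # no OpenSSH client installed
    result = subprocess.run(
        ["ssh",
         "-o", "BatchMode=yes",
         # auto-accept unknown host keys (OpenSSH 7.6+) so a first
         # connection does not fail on the host-key confirmation
         "-o", "StrictHostKeyChecking=accept-new",
         "-o", f"ConnectTimeout={timeout}",
         f"{user}@{host}", "true"],
        capture_output=True,
    )
    return result.returncode == 0

# Example (hostnames from the steps above): after step 2.3,
# can_ssh("worker1") and can_ssh("worker2") should both be True.
```

If this returns False for a node, rerun the ssh-copy-id step for that node before retrying the Cloudera Manager host scan.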




