I created 3 VMs called nn1, nn2, and jt1 and assigned static IP addresses to each. I ensured that all VMs are able to ping one another.
I also edited the hosts file on each VM so that nn1, nn2 and jt1 have entries against their IP addresses.
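For reference, the entries look something like this (jt1's address is taken from the log output further down; the nn1 and nn2 addresses are placeholders):

```
# /etc/hosts on every VM
# 192.168.1.30/.31 are placeholder addresses for illustration
192.168.1.30   nn1
192.168.1.31   nn2
192.168.1.32   jt1
```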
Then I installed jdk-6u45-linux-x64-rpm.bin on each of the VMs.
Next I executed these commands on all 3 VMs:

wget http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/cloudera-cdh4.repo
cp cloudera-cdh4.repo /etc/yum.repos.d/
rpm --import http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
Then I installed the following on all 3 nodes:

yum install hadoop-hdfs-namenode
yum install hadoop-hdfs-journalnode
yum install zookeeper-server
I installed this only on nn1 and nn2:
yum install hadoop-hdfs-zkfc
Finally I edited /etc/zookeeper/conf/zoo.cfg and added these 3 lines (on all 3 machines):

server.1=nn1:2888:3888
server.2=nn2:2888:3888
server.3=jt1:2888:3888
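One easy-to-miss detail worth double-checking (a standard ZooKeeper requirement, not CDH-specific): each server.N entry must correspond to a myid file on that node containing just the number N, placed in ZooKeeper's dataDir. The path below assumes the common default of /var/lib/zookeeper; check the dataDir line in your zoo.cfg:

```shell
# On nn1 (server.1); write 2 on nn2 and 3 on jt1 instead.
# /var/lib/zookeeper is an assumed dataDir -- verify it in zoo.cfg.
echo 1 > /var/lib/zookeeper/myid
```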
I also executed these commands on each machine:

firewall-cmd --zone=dmz --add-port=2888/tcp --permanent
firewall-cmd --zone=dmz --add-port=3888/tcp --permanent
firewall-cmd --reload
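One thing worth checking here: firewall-cmd only opens the port in the zone you name, and on many systems the network interface actually sits in the public zone rather than dmz. A quick sanity check with standard firewalld commands:

```shell
# Which zone is each interface actually assigned to?
firewall-cmd --get-active-zones
# Are the ZooKeeper ports open in the zone you expect?
firewall-cmd --zone=dmz --list-ports
firewall-cmd --zone=public --list-ports
```

If the interface turns out to be in a different zone, the --add-port commands need to target that zone instead. If clients will also connect, the clientPort (2181 by default) needs the same treatment.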
But I keep seeing these errors in the /var/log/zookeeper/zookeeper.log file on each machine:

WARN [WorkerSender[myid=1]:QuorumCnxManager@368] - Cannot open channel to 3 at election address jt1/192.168.1.32:3888
java.net.NoRouteToHostException: No route to host
    at java.net.PlainSocketImpl.socketConnect(Native Method)
Can you please tell me if I have left out a step in my CDH 4.1 configuration, and how I can troubleshoot this?
Also, since the message says "WARN", is it just a warning? How can I tell whether my ZooKeeper ensemble is running successfully (perhaps with some warnings)?
Can you please help me and sorry if this is FAQ.
"No route to host" suggests you are still being blocked. Ping usually isn't a sufficient network test: it uses the ICMP protocol, which has no notion of a port, so it tells you nothing about whether a specific TCP port is reachable.

A better test would be:
nc -z 192.168.1.32 3888
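To check every peer and election port from every machine in one go, you could loop over the hostnames from the question (nc -z only tests whether the port accepts a connection; -w 2 adds a 2-second timeout):

```shell
# Run this from each of the three machines.
for host in nn1 nn2 jt1; do
  for port in 2888 3888; do
    if nc -z -w 2 "$host" "$port"; then
      echo "OK   $host:$port"
    else
      echo "FAIL $host:$port"
    fi
  done
done
```

Any FAIL line points at the machine/port combination that is still being blocked.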
I was able to nc successfully into all machines except one.
I re-issued these commands on that machine, which resolved the issue:
systemctl stop firewalld.service
systemctl disable firewalld.service
Thank you so much!!!!!