
Unable to set up an EC2 Hadoop cluster - questions/doubts about /etc/hosts, Oozie sharelib

New Contributor

Hi,

I have been trying to get a 4-node cluster up for testing some Hive queries: 1 master + 3 slaves. After a few mistakes, I realized that I should update /etc/hosts so the nodes can resolve each other. So I gave the same /etc/hosts to every node, mapped each internal IP to master/slave[1,2,3].clouderamanager.localdomain, and used those same names when setting up CM services on each node (CM was able to resolve them).
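
For reference, a sketch of what that shared /etc/hosts could look like (the 10.0.0.x addresses below are placeholders, not my actual IPs):

  # /etc/hosts, identical on every node -- internal (private) EC2 IPs, example addresses only
  10.0.0.10  master.clouderamanager.localdomain  master
  10.0.0.11  slave1.clouderamanager.localdomain  slave1
  10.0.0.12  slave2.clouderamanager.localdomain  slave2
  10.0.0.13  slave3.clouderamanager.localdomain  slave3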

The problem came during the first run of services, at the Oozie sharelib step. It said there are 0 nodes for replication, but I am not sure I can do anything about that during a first run of services. Unless I was wrong in trying to set up the master + slaves together and should have first set up the master and then the slaves separately. Is that the issue?

Also, this may be obvious, but the host inspector was pinging ports 9000/9001, and those were not among the ports mentioned in the documentation. Maybe something to add there? Apologies if I missed reading it on the page.

Error:

File /user/oozie/share/lib/lib_20140814211624/hive/stringtemplate-3.2.1.jar could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
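
For what it's worth, a standard way to confirm the "0 datanode(s) running" part of that error is to ask the NameNode directly (the exact output wording varies by Hadoop version):

  # Reports cluster capacity and how many DataNodes the NameNode can see
  hdfs dfsadmin -report | grep -i datanodes
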
1 ACCEPTED SOLUTION

My first guess is that it cannot find a DataNode. When you visit the HDFS service in Cloudera Manager, look at "Instances". Do you see any DataNodes? If you do:
- ensure the roles are started
- try a simple "hadoop fs -put" from one of the nodes to add a file to HDFS (see the sketch below)

If you don't see any DataNodes, add the DataNode roles to the slave nodes and restart the HDFS service.
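
A minimal version of that smoke test, run from any cluster node (the file name and path below are just examples):

  # Create a small local file and try to write it into HDFS
  echo test > /tmp/smoke.txt
  hadoop fs -put /tmp/smoke.txt /tmp/smoke.txt
  hadoop fs -ls /tmp/smoke.txt
  # Clean up afterwards
  hadoop fs -rm /tmp/smoke.txt

If the -put fails with the same "could only be replicated to 0 nodes" message, the DataNodes are not registering with the NameNode.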

Regards,
Gautam Gopalakrishnan
