We have set up a Hadoop (2.7.2) cluster in fully distributed mode, but the DataNode is not starting on the slave machine after I ran the command "start-all.sh".
These are screenshots of the terminals of the master (vinith), secondary master (harsha), and slave1 (balaji).
Have you tried to start an HDFS DataNode with the following command on each designated node:
# $HADOOP_PREFIX/sbin/hadoop-daemons.sh --config $HADOOP_CONF_DIR --script hdfs start datanode
After starting the DataNode as above, please share the DataNode .log/.out files if it still fails to start.
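As a starting point for finding those logs, here is a small sketch. It assumes the default Hadoop 2.x log layout (logs under $HADOOP_PREFIX/logs, named after the user and hostname); the /usr/local/hadoop fallback path is only an example and should be adjusted to your install:

```shell
# Assumption: HADOOP_PREFIX points at your Hadoop install; adjust the fallback.
HADOOP_PREFIX=${HADOOP_PREFIX:-/usr/local/hadoop}

# By default, Hadoop 2.x daemon logs follow the pattern
# hadoop-<user>-<daemon>-<hostname>.log in $HADOOP_PREFIX/logs.
LOG_FILE="$HADOOP_PREFIX/logs/hadoop-$(whoami)-datanode-$(hostname).log"
echo "Expected DataNode log file: $LOG_FILE"

# Inspect the tail of the log for bind errors, clusterID mismatches, etc.:
# tail -n 50 "$LOG_FILE"
```

The last 50 lines or so are usually enough to see why the DataNode exited (common causes are a namespace/clusterID mismatch after reformatting the NameNode, or the DataNode being unable to reach the NameNode address).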
Also, before you tried to start the components using "start-dfs.sh", had you already copied the Hadoop binaries and Hadoop configuration files to the slave nodes as well? Are the slave nodes able to resolve the master node without any issue? Please check their "/etc/hosts" files too.
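For illustration, the /etc/hosts check could look like the sketch below. The hostnames come from your question (vinith, harsha, balaji); the IP addresses are made up and must be replaced with your actual ones. The sketch writes to a temporary example file rather than the real /etc/hosts:

```shell
# Hypothetical entries -- replace the IPs with your cluster's real addresses.
# Every node should have the same mappings in its /etc/hosts.
cat <<'EOF' > /tmp/hosts.example
192.168.1.10  vinith    # master (NameNode)
192.168.1.11  harsha    # secondary master
192.168.1.12  balaji    # slave1 (DataNode)
EOF
cat /tmp/hosts.example

# On each slave, verify the master hostname actually resolves:
getent hosts vinith || echo "vinith is NOT resolvable from this node"
```

If `getent hosts vinith` fails on a slave, the DataNode there cannot register with the NameNode, which matches the symptom you describe.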
The following blog post walks through the full setup; you can use it to make sure you did not miss any step before starting the DataNode: https://www.linode.com/docs/databases/hadoop/how-to-install-and-set-up-hadoop-cluster/
If you still face any issue, please share whatever logs are generated, especially those on the DataNodes.