Hadoop 3-node configuration issues: DataNode doesn't appear on slaves, and NodeManager appears on slaves but also on master

Rising Star

I'm installing Hadoop 2.7.1 on 3 nodes and I'm having some difficulties with the configuration process.

I want to have:

node1 (master) - as the NameNode and ResourceManager

node2 (slave) - as the DataNode and NodeManager

node3 (slave) - as the DataNode and NodeManager

I'm doing the configuration like below to achieve this goal:

/etc/hosts file:

127.0.0.1 localhost
192.168.1.60 NameNode
192.168.1.61 Slave1
192.168.1.62 Slave2

core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://NameNode:9000</value>
  </property>
</configuration>

hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>

In the slaves file I entered the hostnames of the slave machines:

Slave1

Slave2

I created a masters file and entered the hostname of the master machine:

NameNode

Note: I didn't configure the yarn-site.xml and mapred-site.xml files. Are they needed?

Problem:

With my configuration above, I'm having two issues when I start all daemons and check with the jps command:

1) the NodeManager appears on the master and not only on the slave machines

2) the DataNode doesn't appear on the slave machines

jps output on the master machine:

ResourceManager
NameNode
NodeManager
SecondaryNameNode

jps output on the slave machines:

NodeManager
1 ACCEPTED SOLUTION

Guru

Check whether you can find the NodeManager and DataNode logs on the slave nodes where they didn't start. They should tell you what went wrong; most likely they failed with errors. You may also need a yarn-site.xml to configure the yarn.nodemanager.log-dirs and yarn.nodemanager.local-dirs params.
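For reference, a minimal yarn-site.xml along those lines might look like the sketch below. The directory paths and the ResourceManager hostname are illustrative assumptions, not values taken from this thread, so adjust them to your nodes:

<configuration>
  <!-- ResourceManager host; "NameNode" matches the /etc/hosts entry in the question (assumption) -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>NameNode</value>
  </property>
  <!-- Local scratch space for NodeManager containers (illustrative path) -->
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/hadoop/yarn/local</value>
  </property>
  <!-- NodeManager container log directory (illustrative path) -->
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/data/hadoop/yarn/log</value>
  </property>
</configuration>

Whatever directories you pick must exist and be writable by the user that runs the NodeManager.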

Try following these instructions for starting and stopping services, since this is the supported way. Also, always try to configure FQDNs instead of just hostnames.
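As a sketch of the FQDN suggestion, the /etc/hosts entries could carry a fully qualified name first, with the short name as an alias; the cluster.local domain here is just a made-up example:

192.168.1.60 namenode.cluster.local NameNode
192.168.1.61 slave1.cluster.local Slave1
192.168.1.62 slave2.cluster.local Slave2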


5 REPLIES


Rising Star

Thanks. Where can I find those logs? I'm trying to find them but without success. And when I execute the echo $HOSTNAME command, the full hostname that I get is what I put in the question.

Guru

/var/log/hadoop/hdfs/hadoop-hdfs-datanode-<hostname>.log has the datanode log and /var/log/hadoop-yarn/yarn/yarn-yarn-nodemanager-<hostname>.log has the nodemanager log on each node. You can also look at the .out files with the same names in the same directories.
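For example, assuming those locations (a plain Apache tarball install usually writes its logs under $HADOOP_HOME/logs instead), you could tail the logs on each slave:

# Last lines of the DataNode and NodeManager logs on this node
# (paths as given above; adjust if your install logs to $HADOOP_HOME/logs)
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-$(hostname).log
tail -n 100 /var/log/hadoop-yarn/yarn/yarn-yarn-nodemanager-$(hostname).log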

Super Collaborator

I am assuming you are trying to install manually from Apache. Is there a reason you are not using HDP with Ambari?

Rising Star

Yes, I'm trying to install manually, because I think it's a better way to learn how to get Hadoop running.