
Where does CDH5 store the list of data nodes?

Expert Contributor

I have an existing demo cluster that I built, and it was working perfectly well.

However, because of some changes I had to change the IP address of each of the data nodes.

I ran grep -R 'oldIP' /etc on each machine, edited the files that contained the old IP addresses, and replaced them with the new IPs.

I then rebooted each machine.

However, despite doing that, when I run

sudo -u hdfs hadoop dfsadmin -report

it shows me 2 dead data nodes and lists the old IP addresses.

How can I remove the old IPs and replace them with the new IP addresses?
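
For reference, the replacement step described above was roughly equivalent to running this on each node (a sketch; 'oldIP' and 'newIP' stand for the actual addresses):

# find every file under /etc that still mentions the old address and rewrite it in place
grep -Rl 'oldIP' /etc | xargs sed -i 's/oldIP/newIP/g'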

1 ACCEPTED SOLUTION

Expert Contributor

This command did not solve the issue, so I deleted my cluster and rebuilt it using the right IP addresses.


5 REPLIES

Expert Contributor

This is the error. 

 

CRITICAL Initialization failed for Block pool BP-1219478626-192.168.1.20-1418484473049 (Datanode Uuid null) service to nn1home/10.192.128.227:8022
Datanode denied communication with namenode because the host is not in the include-list: DatanodeRegistration(10.192.128.231, datanodeUuid=ff6a2644-3140-4451-a59f-496478a000d7, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=cluster18;nsid=850143528;c=0)
    at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:889)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:4798)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1037)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:92)
    at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:26378)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
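
If I read the message right, the "include-list" it mentions is the file named by the dfs.hosts property. A quick way to check whether the new address is in it (the path below is only an example; it differs per setup):

# print the include-file path, if dfs.hosts is set at all
hdfs getconf -confKey dfs.hosts
# example path: check whether the new datanode address is listed
grep 10.192.128.231 /etc/hadoop/conf/dfs.hosts.include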

Explorer

Are you using Cloudera Manager (CM)? You have to remove the old IPs from the Hadoop config and add the new ones.

Master Collaborator

hadoop dfsadmin -refreshNodes

 

will help you regenerate the datanode list.
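
For example (run as the hdfs superuser, same as the report command you used):

sudo -u hdfs hadoop dfsadmin -refreshNodes
# re-check which datanodes the namenode now lists
sudo -u hdfs hadoop dfsadmin -report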

If you use Cloudera Manager, you can look at the list of hosts and try to recommission any nodes.

If not, you need to look at the file pointed to by the dfs.hosts property and ensure the new IP address or host name is listed in it. Then, as TGrayson mentioned, you should run sudo -u hdfs hadoop dfsadmin -refreshNodes.
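
For example, on a cluster that is not managed by Cloudera Manager it would look roughly like this (the include-file path and the address are illustrative; use whatever dfs.hosts actually points to):

# find the include file, if dfs.hosts is set
hdfs getconf -confKey dfs.hosts
# append the datanode's new address to that file (example path and address)
echo "10.192.128.231" | sudo tee -a /etc/hadoop/conf/dfs.hosts.include
# tell the namenode to re-read the include/exclude lists
sudo -u hdfs hadoop dfsadmin -refreshNodes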

Let us know if it helped.


Regards,
Gautam Gopalakrishnan

Expert Contributor

This command did not solve the issue, so I deleted my cluster and rebuilt it using the right IP addresses.