Created on 12-15-2014 10:33 AM - edited 09-16-2022 02:15 AM
I have an existing demo cluster that I built, and it was working perfectly well.
However, because of some changes, I had to change the IP address of each of the data nodes.
I ran grep -R 'oldIP' /etc on each machine, edited the files that contained the old IP address, and replaced it with the new one.
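For reference, a minimal sketch of that replacement step as run on each machine (OLD_IP and NEW_IP are placeholders for the real addresses):

OLD_IP=192.168.1.101   # placeholder: the machine's old address
NEW_IP=192.168.1.201   # placeholder: the machine's new address
# Find every file under /etc that still mentions the old address and
# rewrite it in place, keeping a .bak backup of each edited file.
grep -Rl "$OLD_IP" /etc | xargs sed -i.bak "s/$OLD_IP/$NEW_IP/g"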
I rebooted each machine.
However, despite doing that, when I run

sudo -u hdfs hadoop dfsadmin -report

it still shows me 2 dead data nodes, listed under the old IP addresses.
How can I remove the old IP addresses and replace them with the new ones?
Created 12-15-2014 10:40 AM
This is the error:

CRITICAL Initialization failed for Block pool BP-1219478626-192.168.1.20-1418484473049 (Datanode Uuid null) service to nn1home/10.192.128.227:8022
Datanode denied communication with namenode because the host is not in the include-list: DatanodeRegistration(10.192.128.231, datanodeUuid=ff6a2644-3140-4451-a59f-496478a000d7, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=cluster18;nsid=850143528;c=0)
    at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:889)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:4798)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1037)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:92)
    at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:26378)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
Created 12-15-2014 12:13 PM
Are you using Cloudera Manager (CM)? You have to remove the old IPs from the Hadoop config and add the new ones.
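The "include-list" in the error above is the file named by the dfs.hosts property in hdfs-site.xml; Cloudera Manager maintains it for you, but outside CM you edit it by hand. A minimal sketch of checking it, assuming a typical config path of /etc/hadoop/conf (the exact paths are an assumption):

# Which file does the NameNode treat as its include-list?
grep -A1 dfs.hosts /etc/hadoop/conf/hdfs-site.xml
# The include file lists one permitted datanode hostname/IP per line;
# the new addresses must appear here and the old ones be removed.
cat /etc/hadoop/conf/dfs.include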
Created 12-15-2014 12:30 PM
hadoop dfsadmin -refreshNodes

will make the NameNode re-read its include/exclude files and regenerate the datanode list.
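Putting it together, a sketch of the full sequence on the NameNode host (the include-file path and the 192.168.x.x addresses are placeholders):

# 1. Replace the old datanode address in the include file with the
#    new one (placeholder IPs shown).
sed -i.bak 's/192.168.1.101/192.168.1.201/' /etc/hadoop/conf/dfs.include
# 2. Tell the NameNode to re-read its include/exclude files.
sudo -u hdfs hadoop dfsadmin -refreshNodes
# 3. Confirm the dead entries under the old addresses are gone.
sudo -u hdfs hadoop dfsadmin -report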
Created 12-16-2014 03:07 PM
That command did not solve the issue, so I deleted my cluster and rebuilt it using the right IP addresses.