Created 08-24-2024 11:49 AM
2024-08-25 02:47:07,116 DEBUG org.apache.hadoop.net.NetworkTopology: Choosing random from 0 available nodes on node /, scope=, excludedScope=null, excludeNodes=[192.168.1.81:9866, 192.168.1.125:9866, 192.168.1.8>
2024-08-25 02:47:07,116 DEBUG org.apache.hadoop.net.NetworkTopology: chooseRandom returning null
2024-08-25 02:47:07,116 DEBUG org.apache.hadoop.net.NetworkTopology: No node to choose.
2024-08-25 02:47:07,116 DEBUG org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: [
Datanode None is not chosen since required storage types are unavailable for storage type DISK.
2024-08-25 02:47:07,116 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not enough replicas was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1}
2024-08-25 02:47:07,116 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStorage>
2024-08-25 02:47:07,116 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=1, selected=[], unavaila>
2024-08-25 02:47:07,116 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockSto>
2024-08-25 02:47:07,116 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on default port 9000, call Call#10 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from master-node:47624 / 192.168>
java.io.IOException: File /user/JS/input/DIC.txt._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2473)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:293)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3075)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:932)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:603)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621)
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589)
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1246)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1169)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1953)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3203)
This is the error.
Created 08-26-2024 03:16 PM
@JSSSS Welcome to the Cloudera Community!
To help you get the best possible solution, I have tagged our HDFS experts @vaishaakb @shubham_sharma who may be able to assist you further.
Please keep us updated on your post, and we hope you find a satisfactory solution to your query.
Regards,
Diana Torres
Created 12-17-2024 12:41 PM
@JSSSS
The key error is this: "java.io.IOException: File /user/JS/input/DIC.txt._COPYING_ could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation."
According to the log, all 3 DataNodes are excluded (excludeNodes=[192.168.1.81:9866, 192.168.1.125:9866, 192.168.1.8>). With a replication factor of 3, the write must succeed on all 3 DataNodes, otherwise it fails.
The cluster may have under-replicated or unavailable blocks because of the excluded nodes. HDFS cannot use these nodes, possibly for one of the following reasons (example commands follow this list):
1. Verify that the DataNodes are live and connected to the NameNode.
Look at the "Live nodes" and "Dead nodes" sections of the NameNode WebUI. If all 3 DataNodes are excluded, they might show up as dead or decommissioned.
2. Ensure the DataNodes have sufficient disk space for the write operation.
Look at the HDFS data directories (/hadoop/hdfs/data). If disk space is full, clear unnecessary files or increase disk capacity.
3. View the list of excluded nodes.
If nodes are wrongly excluded, refresh the NameNode to apply the changes.
4. Check the block placement policy.
If the cluster has DataNodes with specific restrictions (e.g., rack awareness), verify the block placement policy.
Default: org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault
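A few standard commands can help with the checks above (a sketch; the data directory /hadoop/hdfs/data is an assumed path and may differ in your cluster):
hdfs dfsadmin -report        # lists live/dead DataNodes with configured, used, and remaining capacity per node
df -h /hadoop/hdfs/data      # OS-level disk usage of a DataNode data directory
hdfs dfsadmin -refreshNodes  # re-reads dfs.hosts / dfs.hosts.exclude after you edit the exclude file
If a node was listed in the file referenced by dfs.hosts.exclude in hdfs-site.xml, remove it there before running the refresh.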
Happy hadooping
Created 08-27-2024 05:36 AM
Hi @JSSSS
It looks like either your rack topology configuration is incorrect or there is a problem writing to the DataNodes.
Could you please upload the screenshot of the namenode WebUI?
Also, check your rack topology with:
hdfs dfsadmin -printTopology
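If no rack script is configured, the output should look roughly like this illustrative sketch, with every DataNode under /default-rack (the hostnames here are placeholders):
Rack: /default-rack
   192.168.1.81:9866 (datanode1)
   192.168.1.125:9866 (datanode2)
If the DataNodes are spread across racks in a way that does not match reality, the placement policy can fail to find a valid target.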
Created 09-11-2024 06:15 AM
@JSSSS Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
Regards,
Diana Torres
Created 12-12-2024 10:24 AM
@JSSSS It looks like either you are running out of space in HDFS or the three DataNodes are too busy to acknowledge the request, which causes the exception below. Please check whether HDFS has reached its full capacity.
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: [
Datanode None is not chosen since required storage types are unavailable for storage type DISK.
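For a quick capacity check as suggested above, something like the following standard commands (not specific to this cluster) should work:
hdfs dfs -df -h /                                            # overall configured, used, and remaining HDFS capacity
hdfs dfsadmin -report | grep -E 'Name|DFS Used%|Remaining'   # per-DataNode usage summary
If any DataNode shows close to 100% DFS Used%, free up space or add capacity before retrying the write.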