Support Questions

Not ready to serve the block pool

I am getting this error on one of my worker nodes that runs HDFS and YARN.

In the manager, I see unexpected exits due to OutOfMemory errors. In the configuration for this node, I do not see an overcommitted memory warning. Am I missing something? How do I fix this?

ip-172-31-10-74.ap-south-1.compute.internal:50010:DataXceiver error processing WRITE_BLOCK operation  src: / dst: / Not ready to serve the block pool, BP-1423177047-
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAndWaitForBP(
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(
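The thread itself does not say what caused the OutOfMemory exits, but a common culprit is a DataNode heap that is too small for the number of block replicas the node holds. A rough sizing sketch, using the widely cited community rule of thumb of about 1 GB of heap per million block replicas (the figure and the function name below are assumptions, not from this thread):

```python
def recommended_datanode_heap_gb(num_block_replicas: int,
                                 gb_per_million: float = 1.0,
                                 floor_gb: float = 1.0) -> float:
    """Rule-of-thumb DataNode heap sizing.

    Roughly 1 GB of heap per million block replicas on the node,
    with a floor so small nodes still get a workable minimum.
    This is a community guideline, not an official formula.
    """
    return max(floor_gb, num_block_replicas / 1_000_000 * gb_per_million)


# Example: a node holding ~3 million block replicas
print(recommended_datanode_heap_gb(3_000_000))  # 3.0 (GB)
```

If the computed figure is well above what the DataNode JVM is currently given, raising the DataNode heap in the node's configuration (e.g. the Java heap option for the DataNode role) would be the first thing to try before digging deeper.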

Re: Not ready to serve the block pool

Has this problem been solved?