Support Questions

DataNode cannot send block report to NameNode due to third-party protobuf issues?

New Contributor

org.apache.hadoop.hdfs.server.datanode.DataNode: Unsuccessfully sent block report 0x687e2eff1ccff4e2 with lease ID 0x2e42f5718005b5ef to namenode: hadoop-standby/:8020, containing 1 storage report(s), of which we sent 0. The reports had 659503 total blocks and used 0 RPC(s). This took 83 msecs to generate and 56 msecs for RPC and NN processing. Got back no commands.
2025-01-02 19:28:31,810 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: RemoteException in offerService
org.apache.hadoop.ipc.RemoteException(java.io.IOException): java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.runBlockOp(BlockManager.java:5558)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1651)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:182)
at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:34769)
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621)
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589)
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1246)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1169)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1953)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3203)
Caused by: java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
at org.apache.hadoop.thirdparty.protobuf.IterableByteBufferInputStream.read(IterableByteBufferInputStream.java:143)
at org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.read(CodedInputStream.java:2080)
at org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.tryRefillBuffer(CodedInputStream.java:2831)
at org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.refillBuffer(CodedInputStream.java:2777)
at org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.readRawByte(CodedInputStream.java:2859)
at org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.readRawVarint64SlowPath(CodedInputStream.java:2648)
at org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.readRawVarint64(CodedInputStream.java:2641)
at org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.readSInt64(CodedInputStream.java:2497)
at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:419)
at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:397)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processFirstBlockReport(BlockManager.java:3263)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2945)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.lambda$blockReport$0(NameNodeRpcServer.java:1652)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:5637)
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5614)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1584)
at org.apache.hadoop.ipc.Client.call(Client.java:1529)
at org.apache.hadoop.ipc.Client.call(Client.java:1426)
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258)
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:139)
at com.sun.proxy.$Proxy17.blockReport(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.lambda$blockReport$2(DatanodeProtocolClientSideTranslatorPB.java:212)
at org.apache.hadoop.ipc.internal.ShadedProtobufHelper.ipc(ShadedProtobufHelper.java:160)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:212)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.blockReport(BPServiceActor.java:437)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:754)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:914)
at java.lang.Thread.run(Thread.java:750)

I have tried upgrading Java to the latest Java 8 version, but nothing has changed. Please help me.

2 REPLIES

Master Mentor

@tuyen123 

If you have installed other applications or dependencies for Spark, Hive, etc. that use a different version of protobuf, the conflict can cause issues with the block report.
Locate conflicting protobuf JARs:

 

find $HADOOP_HOME -name "protobuf*.jar"

Check if there are multiple versions present in $HADOOP_HOME/lib or other dependency paths (see the sketch after this list).
  • Remove conflicting JARs: keep only the protobuf JAR version that matches your Hadoop distribution, e.g. protobuf-java-2.5.0.jar.

  • Alternatively, explicitly set the protobuf version in your CLASSPATH.
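
If Spark or Hive is installed on the same nodes, a rough sketch like the one below can surface every protobuf JAR across the stack; SPARK_HOME and HIVE_HOME are assumptions on my part, so adjust them to your layout:

# Hypothetical install roots; only HADOOP_HOME is known from your setup.
for dir in "$HADOOP_HOME" "$SPARK_HOME" "$HIVE_HOME"; do
  [ -n "$dir" ] && [ -d "$dir" ] || continue
  echo "== $dir =="
  # Match both plain protobuf-java and the shaded hadoop-shaded-protobuf JARs.
  find "$dir" -name "*protobuf*.jar" 2>/dev/null
done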

If third-party libraries are included in your Hadoop environment, they might override the correct protobuf version. Open $HADOOP_HOME/etc/hadoop/hadoop-env.sh and prepend the correct protobuf library:

 

export HADOOP_CLASSPATH=/path/to/protobuf-java-2.5.0.jar:$HADOOP_CLASSPATH
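
If you go that route, the edit has to be made on every node and the HDFS daemons restarted so the new classpath is picked up; a minimal sketch using the Hadoop 3 daemon commands:

# Restart the DataNode on each data node host; do the same for the NameNode on its host.
$HADOOP_HOME/bin/hdfs --daemon stop datanode
$HADOOP_HOME/bin/hdfs --daemon start datanode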

 

Verify the classpath:
hadoop classpath | grep protobuf

Ensure it includes the correct protobuf JAR.
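
To confirm which version each of those JARs actually bundles (not just what the file name says), something like this rough sketch can help; it assumes unzip is on the PATH, and the manifest fields vary by build, so fall back to the file name if nothing is printed:

# Expand the classpath, keep the protobuf entries, and read each JAR's manifest version.
hadoop classpath --glob | tr ':' '\n' | grep -i protobuf | while read -r jar; do
  version=$(unzip -p "$jar" META-INF/MANIFEST.MF 2>/dev/null \
    | grep -iE 'Bundle-Version|Implementation-Version' | head -n 1)
  echo "$jar -> ${version:-unknown, check the file name}"
done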

Please try that and revert. Happy hadooping

New Contributor

Hi Shelton,

I have checked the Hadoop side, and it is indeed using protobuf 2.5.0, but I could not find which protobuf version Spark is using.
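
For what it's worth, this is roughly what I ran against the Spark install (SPARK_HOME here stands for my actual Spark path):

# List any protobuf JARs bundled with Spark; this is where I looked.
ls "$SPARK_HOME/jars" | grep -i protobuf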

The versions of Hadoop and Spark are 3.4.0 and 3.5.3, respectively.

I also noticed that the client connects to the Spark master using Java 11, while the master itself is running on Java 8.

I'm not sure if the difference in Java versions between the client and the master affects the connection.
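
In case it matters, this is roughly how I compared the runtimes; the host names below are placeholders for my actual client and cluster nodes:

# java -version writes to stderr, hence the 2>&1; print the first line per host.
for host in client-host spark-master hadoop-standby; do
  echo "== $host =="
  ssh "$host" 'java -version' 2>&1 | head -n 1
done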

Thank you for your response; it has enlightened me in many ways.