
hadoop blockpool

New Contributor

We are facing an issue with the Hadoop block pool ID. The block pool name under app/data/dfs/data/current is showing a different IP instead of the namenode IP (BP-xxxxx-10.x.x.x-xxxxxxxxxx). We tried changing the block pool name to the right IP in all the VERSION files and restarted Hadoop. Now all of the data in the HDFS UI is showing as missing blocks, and we are seeing the following traces in the datanode logs:

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService for Block pool BP-xxxxxxx-10.x.x.x-xxxxxxxxxx (Datanode Uuid 0fxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx) service to nn01/10.x.x.x:9000
java.lang.IllegalStateException: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large. May be malicious. Use CodedInputStream.setSizeLimit() to increase the size limit.
    at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:332)
    at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:310)
    at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder.getBlockListAsLongs(BlockListAsLongs.java:288)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:190)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.blockReport(BPServiceActor.java:475)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:688)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:823)
    at java.lang.Thread.run(Thread.java:748)
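
For reference, here is a minimal diagnostic sketch in Python (the paths are assumptions based on the post; adjust DATA_DIRS to whatever dfs.datanode.data.dir actually points to) that walks the storage directories and prints the blockpoolID recorded in each VERSION file, so mismatches can be spotted before anything is edited by hand:

#!/usr/bin/env python3
# Sketch: list the blockpoolID recorded in every HDFS VERSION file.
# Assumes the layout from the post (app/data/dfs/data as the datanode
# data dir); adjust DATA_DIRS for your dfs.datanode.data.dir setting.
import os

# Hypothetical path -- substitute your actual storage directories.
DATA_DIRS = ["/app/data/dfs/data"]

def read_version(path):
    # Parse a VERSION file (simple key=value properties) into a dict.
    props = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                props[key] = value
    return props

for data_dir in DATA_DIRS:
    for root, _dirs, files in os.walk(data_dir):
        if "VERSION" in files:
            version_path = os.path.join(root, "VERSION")
            props = read_version(version_path)
            # The storage-level VERSION carries datanodeUuid/clusterID;
            # the per-block-pool ones under current/BP-*/ carry blockpoolID.
            bp = props.get("blockpoolID", "<none>")
            print(f"{version_path}: blockpoolID={bp}")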

Could someone please help us with a workaround or a permanent fix for this issue? All of the data is showing as missing blocks, and the data is too important for us to risk losing it. Thanks in advance.

2 Replies

New Contributor

Why did you change it in the first place? What issue did you face because of the IP address being different? I would like to understand the reason before looking into it further.

New Contributor

Hi @Raj Kumar,

Thanks for the reply. Our data was showing as missing blocks, so we restored it from our backup (stored on a different server). Because of this, the block pool name carried the IP of the backup server, which we did not notice. We then restarted the Hadoop services and hit the same DataNode errors quoted in the original post.

After this, we manually changed the block pool name to the right IP and restarted the services, but we are still facing the same issue. All of the data is showing as missing in the namenode UI.
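
Note that the block pool ID is not just a label inside VERSION: on each datanode the blocks themselves live under a directory named after it (current/BP-.../), and the same string appears in the namenode's current/VERSION. A small sketch that cross-checks all three (the namenode path is an assumption, not from the post; the datanode path is taken from the post):

#!/usr/bin/env python3
# Sketch: cross-check the block pool ID between the namenode VERSION,
# each datanode block-pool VERSION, and the current/BP-* directory name.
# Adjust the paths to your dfs.namenode.name.dir / dfs.datanode.data.dir.
import os
import re

NN_VERSION = "/app/data/dfs/name/current/VERSION"  # hypothetical namenode dir
DN_CURRENT = "/app/data/dfs/data/current"          # from the post

def blockpool_id(version_path):
    # Return the blockpoolID= value from a VERSION file, if present.
    with open(version_path) as fh:
        for line in fh:
            if line.startswith("blockpoolID="):
                return line.strip().split("=", 1)[1]
    return None

nn_bp = blockpool_id(NN_VERSION)
print(f"namenode blockpoolID: {nn_bp}")

for entry in os.listdir(DN_CURRENT):
    if re.match(r"^BP-", entry):
        dn_version = os.path.join(DN_CURRENT, entry, "current", "VERSION")
        dn_bp = blockpool_id(dn_version) if os.path.exists(dn_version) else None
        status = "OK" if entry == nn_bp == dn_bp else "MISMATCH"
        print(f"{entry}: VERSION blockpoolID={dn_bp} -> {status}")

If only the VERSION files were edited by hand but the BP-* directory names still carry the old string, the three views will disagree, which is worth ruling out in a situation like this.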

Please help us with this. If you need any more details, feel free to mail densingmoses123@gmail.com and we can have a quick discussion. Thanks in advance.