Support Questions

Find answers, ask questions, and share your expertise

ISSUE: Requested data length 146629817 is longer than maximum configured RPC length 134217728

Contributor

Hi,

ISSUE: Requested data length 146629817 is longer than maximum configured RPC length 134217728

Earlier, ipc.maximum.data.length was set to 64MB and we hit this same error, so we raised it to 128MB. Now the limit has been exceeded again, resulting in data corruption / missing-data issues. Is there a maximum configurable value for ipc.maximum.data.length? Can we raise it above 128MB?
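For reference, here is a sketch of how the setting would look; I'm assuming it lives in core-site.xml on the NameNode with the value given in bytes, which is my understanding of the usual setup (134217728 bytes = 128MB, matching the limit in the error above):

    <property>
      <name>ipc.maximum.data.length</name>
      <!-- value is in bytes; 134217728 = 128 MB -->
      <value>134217728</value>
    </property>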
 
Thanks in advance
1 ACCEPTED SOLUTION

Contributor

Yes Harsh, it's the number of blocks. The block count was 6 million. We deleted the unwanted small files and the cluster health is good now.
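In case it helps anyone verifying a cleanup like this, the cluster-wide block count appears in the fsck summary (the exact wording of the summary line may vary by release):

    # Look for the "Total blocks (validated)" line in the summary output.
    hdfs fsck / | grep -i 'total blocks'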

 

Is there a limit on how many blocks a single DataNode should have?


6 REPLIES

Mentor
Could you please add some context here?
- What CDH version are you facing this on?
- Which service or client role log do you see this message in, and do you have the full log to share?

In very old CDH5 HDFS releases, prior to certain optimisations of large messages (such as block reports), this was a problem you could hit as a function of the growing number of blocks on the DataNodes. But unless we know your version and the exact context/component of the error, it's too vague to help you out.


Contributor

Thanks HarshJ for your reply.

I am seeing this issue in the NameNode log. The CDH version is 5.7.1.

The block count has reached ~6 million. How many blocks can a DataNode handle, and how large a block report can the NameNode receive?

I also saw the block count threshold set to 300,000 (3 lakh) in Cloudera Manager. Can you please explain the block report format and its length?

Mentor
Thank you for adding the version and source detail.

Could you please share the full log snippet? The block report size is just a hint at a past problem that used large IPC sizes. On 5.7.x, block reports should be capped at 1 million blocks per IPC, which wouldn't come close to this limit, so your issue could very well be different and on some other IPC instead. The full error would usually tell you what the call was or who the sender was.
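If I understand Harsh correctly, the 1-million-block cap he mentions corresponds to dfs.blockreport.split.threshold in hdfs-site.xml: above that many blocks, a DataNode splits its block report into one message per storage rather than sending a single combined report. A sketch with what I believe is the default value:

    <property>
      <name>dfs.blockreport.split.threshold</name>
      <!-- above this many blocks, the DataNode reports per storage -->
      <value>1000000</value>
    </property>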

Expert Contributor

Based on the error message, it comes from

org.apache.hadoop.ipc.Server#checkDataLength()
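For context, that check is roughly of the following shape (a paraphrase from memory, not the exact Hadoop source; maxDataLength is the value read from ipc.maximum.data.length):

    // Illustrative sketch of the server-side length check.
    private void checkDataLength(int dataLength) throws IOException {
        if (dataLength < 0) {
            throw new IOException("Unexpected data length " + dataLength);
        } else if (dataLength > maxDataLength) {
            throw new IOException("Requested data length " + dataLength
                + " is longer than maximum configured RPC length " + maxDataLength);
        }
    }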

 

Fundamentally, this property changes the maximum length of a protobuf message (protobuf is a widely used data exchange format), and there is a good reason a size limit exists.

 

Excerpt from protobuf doc:

https://developers.google.com/protocol-buffers/docs/reference/java/com/google/protobuf/CodedInputStr...

 

public int setSizeLimit(int limit)
Set the maximum message size. In order to prevent malicious messages from exhausting memory or causing integer overflows, CodedInputStream limits how large a message may be. The default limit is 64MB. You should set this limit as small as you can without harming your app's functionality. Note that size limits only apply when reading from an InputStream, not when constructed around a raw byte array (nor with ByteString.newCodedInput()).
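For what it's worth, setSizeLimit lives on protobuf's CodedInputStream, and outside Hadoop it is used along these lines (illustrative only; MyMessage stands in for any generated protobuf message type):

    import com.google.protobuf.CodedInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Illustrative: raise the default 64 MB parse limit to 128 MB before
    // reading a large message from a stream.
    static MyMessage parseLarge(InputStream in) throws IOException {
        CodedInputStream cis = CodedInputStream.newInstance(in);
        cis.setSizeLimit(128 * 1024 * 1024);
        return MyMessage.parseFrom(cis);
    }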
 
You could increase this limit, but there are other Hadoop limits that you could also hit, for example the number of files in a directory. In summary, you should go back and check what went over the limit: the number of files in a directory, the number of blocks on a DataNode, and so on. It is an indication that something went over its recommended range.
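As a starting point for that check, per-directory file counts are easy to pull with the standard shell (the path is just a placeholder):

    # Output columns: DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
    hdfs dfs -count /some/dir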
