Created 05-09-2017 12:03 AM
Hi,
ISSUE: Requested data length 146629817 is longer than maximum configured RPC length 134217728
Earlier, ipc.maximum.data.length was set to 64 MB; we hit the same error and raised it to 128 MB. Now it has been exceeded again, resulting in data corruption/missing-data issues. Is there a maximum configurable value for ipc.maximum.data.length? Can we raise it above 128 MB? Thanks in advance.
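For anyone hitting the same error: ipc.maximum.data.length defaults to 64 MB and is read from core-site.xml by the NameNode's RPC server, so the change (and restart) has to happen on the NameNode side. Below is a minimal sketch, assuming hadoop-common is on the classpath and core-site.xml is on the config path; the class name is made up for illustration, and it only reads the configured limit and compares it with the length from the error above.

```java
// Minimal sketch (not Hadoop's own code): read the configured RPC length limit
// and compare it with the requested length from the error message above.
import org.apache.hadoop.conf.Configuration;

public class RpcLengthCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // ipc.maximum.data.length defaults to 64 MB (67108864 bytes)
        int maxLen = conf.getInt("ipc.maximum.data.length", 64 * 1024 * 1024);
        long requested = 146629817L; // length quoted in the error message
        System.out.printf("configured RPC limit: %d bytes (%.1f MB)%n", maxLen, maxLen / 1048576.0);
        System.out.printf("requested length:     %d bytes (%.1f MB)%n", requested, requested / 1048576.0);
        System.out.println("over limit: " + (requested > maxLen));
    }
}
```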
Created 05-09-2017 02:43 AM
Created 05-09-2017 03:01 AM
Thanks HarshJ for your reply.
I am seeing this issue in the NameNode log; the CDH version is 5.7.1.
The block count has reached ~6 million. How many blocks can a DataNode handle, and how does the NameNode receive the block report?
I saw the block count threshold set to 3 lakh (300,000) in Cloudera Manager. Can you please explain the block report format and its length?
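As a rough back-of-the-envelope check on why ~6 million blocks can overflow the RPC limit: a full block report is encoded compactly (HDFS's BlockListAsLongs), but its size still grows linearly with the number of replicas on the DataNode. The sketch below assumes roughly three 8-byte longs per block (block ID, length, generation stamp), which is only an approximation; the real wire size also includes protobuf framing and per-storage headers.

```java
// Back-of-the-envelope estimate: block report size vs. block count.
// The 24 bytes/block figure is an assumption for illustration only.
public class BlockReportEstimate {
    public static void main(String[] args) {
        long blocks = 6_000_000L;   // block count mentioned in this thread
        long bytesPerBlock = 3 * 8; // assumed: blockId, length, genstamp as 8-byte longs
        long estimate = blocks * bytesPerBlock;
        System.out.printf("~%,d blocks -> ~%.0f MB per full block report%n",
                blocks, estimate / 1048576.0);
        // ~137 MB, in the same ballpark as the 146629817-byte request that
        // exceeded the 128 MB (134217728-byte) limit quoted above.
    }
}
```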
Created 05-09-2017 05:30 AM
Created 05-09-2017 06:23 AM
Based on the error message, it comes from
org.apache.hadoop.ipc.Server#checkDataLength().
Fundamentally, this property changes the maximum length of a protobuf message (protobuf is a widely used data exchange format), and there is a reason a size limit is needed.
Excerpt from the protobuf documentation:
public int setSizeLimit(int limit)
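To make the mechanism concrete, here is a sketch (illustrative only, not the actual org.apache.hadoop.ipc.Server code) of how a parser-side limit like setSizeLimit() ties back to ipc.maximum.data.length: the limit exists so a corrupt or malicious length prefix cannot make the server buffer an arbitrarily large message, which is why the property should not be raised without thought.

```java
// Illustrative sketch only: applying an RPC payload size limit when parsing
// protobuf, in the spirit of the checkDataLength()/setSizeLimit() calls above.
import com.google.protobuf.CodedInputStream;
import org.apache.hadoop.conf.Configuration;
import java.io.InputStream;

public class LimitedRpcReader {
    static CodedInputStream openLimited(InputStream payload, Configuration conf) {
        CodedInputStream cis = CodedInputStream.newInstance(payload);
        // Messages longer than ipc.maximum.data.length (default 64 MB) are rejected
        // by the parser instead of being buffered in full.
        cis.setSizeLimit(conf.getInt("ipc.maximum.data.length", 64 * 1024 * 1024));
        return cis;
    }
}
```

Note that in Hadoop the check quoted in the error fires up front, on the declared request length in checkDataLength(), before any protobuf parsing; the setSizeLimit() excerpt above describes the analogous protobuf-level guard.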
Created 05-09-2017 06:53 AM
Yes Harsh, it's the number of blocks. The block count was 6 million. After deleting unwanted small files, the cluster health is good now.
Is there any limit such that a DataNode should have only x number of blocks?