Support Questions


ISSUE: Requested data length 146629817 is longer than maximum configured RPC length 134217728

Contributor

Hi,

ISSUE: Requested data length 146629817 is longer than maximum configured RPC length 134217728

Earlier, ipc.maximum.data.length was set to 64MB, we hit the same error, and we raised it to 128MB. Now the limit has been exceeded again, resulting in data corruption/missing data issues. Is there a maximum configurable value for ipc.maximum.data.length? Can we raise it above 128MB? Thanks in advance
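
For reference, the value the configuration currently resolves to can be checked with hdfs getconf. This is a read-only client-side lookup; the NameNode reads its own core-site.xml, so confirm the file on that host as well:

    # Print the value of ipc.maximum.data.length resolved from the local Hadoop configuration
    hdfs getconf -confKey ipc.maximum.data.length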
2 REPLIES

Rising Star

> Is there any maximum configurable value of ipc.maximum.data.length?

Hadoop does not enforce a maximum.

> Can we change this value above 128MB?

Yes, you may change it to 192MB or 256MB to get around the current issue.
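
A minimal sketch of that change, assuming core-site.xml on the NameNode is edited directly (the property takes a value in bytes, and the NameNode typically needs a restart to pick it up):

    # 192 MB and 256 MB expressed in bytes for ipc.maximum.data.length
    echo $((192 * 1024 * 1024))   # 201326592
    echo $((256 * 1024 * 1024))   # 268435456

    # Property to add or raise in core-site.xml on the NameNode, then restart the NameNode:
    #   <property>
    #     <name>ipc.maximum.data.length</name>
    #     <value>201326592</value>
    #   </property>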

Expert Contributor

@Rakesh Enjala we were hitting a similar issue, where all of our HDFS blocks were showing up as under-replicated (see the attached hdfs-under-replicated-blocks.png).

The default value for ipc.maximum.data.length is 67108864 bytes (64MB), per https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-common/core-default.xml. In our case the requested data length was around 100MB, so we increased the value to 128MB and were able to get the cluster back to normal. Before that, though, we did some experimenting 🙂 which caused unexpected behaviour in our cluster, including data loss. This happened because:

1) We thought that deleting the under-replicated blocks with hdfs fsck / -delete would remove only the under-replicated blocks. It did, but we lost some of the data as well: because of the ipc.maximum.data.length issue the NameNode did not have the actual metadata, so the blocks (data) were lost while the files remained in place with 0 bytes.

2) One design issue in our cluster was having only a single mount point (72TB) per DataNode, which is a big mistake; it should have been split into at least 6 mount points of 12TB each. A single huge volume also tends to mean very large block reports from the DataNode, which is typically what trips this RPC length limit.

3) Never run hdfs fsck / -delete while you are seeing "Requested data length 97568122 is longer than maximum configured RPC length 67108864" in the NameNode logs; raise the RPC limit first so the NameNode has complete metadata, then re-check (see the read-only checks sketched after this list).
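
A read-only way to size the problem before deleting anything (the report file name and the NameNode log location are examples and will differ per distribution; the grep pattern matches the error quoted above):

    # Full filesystem check without deleting anything; the summary lines show the damage
    hdfs fsck / > /tmp/fsck-report.txt
    grep -E 'Under-replicated blocks|Missing blocks|Corrupt blocks' /tmp/fsck-report.txt

    # Per-DataNode capacity and block counts
    hdfs dfsadmin -report

    # Check whether the NameNode is still rejecting oversized RPCs
    grep 'longer than maximum configured RPC length' /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log

Only once the RPC limit has been raised and that error is gone does the fsck report reflect the real block state.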

Hope this helps someone