@Rakesh Enjala We were hitting a similar issue, where all of the blocks in HDFS were showing up as Under Replicated Blocks.

hdfs-under-replicated-blocks.png
The default value for ipc.maximum.data.length is 67108864 bytes (64MB), per https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-common/core-default.xml. In our case the requests were about 100MB, so to avoid the issue we increased the value to 128MB and were able to get the cluster back to normal (a sketch of the change is right below).
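For reference, this is roughly what we changed. A minimal sketch, assuming a hand-edited core-site.xml on the NameNode (with a management tool you would set the same property through its UI); 134217728 is just 128MB expressed in bytes:

```
# Check the value currently in effect (default is 67108864 = 64MB)
hdfs getconf -confKey ipc.maximum.data.length

# What we set in core-site.xml on the NameNode (128MB = 134217728 bytes),
# followed by a NameNode restart so it takes effect:
#   <property>
#     <name>ipc.maximum.data.length</name>
#     <value>134217728</value>
#   </property>
```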
But before we got to this fix we did some experiments 🙂 which caused some unexpected behavior in our cluster, including data loss. This happened because:

1) We thought that running hdfs fsck / -delete would delete only the under-replicated blocks. It did, but in our case we lost some of the data as well: because of the ipc.maximum.data.length issue the NameNode didn't have the actual block metadata, so we lost the blocks (data) while the files still existed with 0 bytes. There are read-only checks you can run first; see the sketch after this list.
2) One design issue we had in our cluster was a single mount point (72TB) per DataNode, which is a big mistake; it should have been split into at least 6 mounts of 12TB each (see the hdfs-site.xml sketch after this list).
3) Never run hdfs fsck / -delete when you see Requested data length 97568122 is longer than maximum configured RPC length 67108864 in the NameNode logs.
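To make points 1) and 3) concrete, here is a sketch of the read-only checks worth running before ever touching -delete. The log path is an assumption (it varies by distribution); the grep pattern matches the error message quoted above:

```
# Assumed log location -- adjust for your distribution
grep "is longer than maximum configured RPC length" \
    /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log

# Read-only: lists files with corrupt/missing blocks, deletes nothing
hdfs fsck / -list-corruptfileblocks

# Only once the NameNode has complete block reports (no RPC length
# errors in the log) does it make sense to even consider:
#   hdfs fsck / -delete    # permanently removes the affected files
```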
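And for point 2), a hypothetical layout splitting one 72TB volume into six 12TB mounts; the /data/N/dn paths are made up for illustration:

```
# Check how the DataNode directories are configured today
hdfs getconf -confKey dfs.datanode.data.dir

# Hypothetical split of one 72TB volume into six 12TB mounts,
# each disk its own entry in hdfs-site.xml:
#   <property>
#     <name>dfs.datanode.data.dir</name>
#     <value>/data/1/dn,/data/2/dn,/data/3/dn,/data/4/dn,/data/5/dn,/data/6/dn</value>
#   </property>
# With separate mounts, losing one disk costs one directory, and
# dfs.datanode.failed.volumes.tolerated can keep the DataNode alive.
```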
Hope this helps someone