I did see a similar issue reported here by another user, but their problem doesn't seem to be the same as mine (AFAIK), and I didn't understand how to carefully track down the source of the problem.
I had to remove some IPs for privacy.
2018-04-10 08:44:48,583 WARN org.apache.hadoop.ipc.Server: Requested data length 76026807 is longer than maximum configured RPC length 67108864. RPC came from 10.3.108.191
2018-04-10 08:44:48,583 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for port 8020: readAndProcess from client <REMOVED!> threw exception [java.io.IOException: Requested data length 76026807 is longer than maximum configured RPC length 67108864. RPC came from <REMOVED!>]
java.io.IOException: Requested data length 76026807 is longer than maximum configured RPC length 67108864. RPC came from <REMOVED!>
        at org.apache.hadoop.ipc.Server$Connection.checkDataLength(Server.java:1476)
        at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1538)
        at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:774)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:647)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:618)
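For what it's worth, the 67108864-byte limit in the message matches the 64 MB default of `ipc.maximum.data.length`, so I'm guessing the warning is governed by that property. If raising it turns out to be the right fix (I'm not sure it is, rather than fixing whatever client is sending 76 MB requests), I assume it would look something like this in core-site.xml on the NameNode:

```xml
<!-- core-site.xml on the NameNode (assumption: this is the knob behind
     "maximum configured RPC length"; value raised from the 64 MB default
     to 128 MB = 134217728 bytes) -->
<property>
  <name>ipc.maximum.data.length</name>
  <value>134217728</value>
</property>
```

I haven't tried this yet; I'd rather understand where the oversized RPC is coming from first.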
[root@<REMOVE!> ~]# hdfs fsck /
Connecting to namenode via http://<REMOVE!>:<REMOVE!>
FSCK started by root (auth:SIMPLE) from /<REMOVE!> for path / at Tue Apr 10 08:40:20 BST 2018
 Status: HEALTHY
 Total size:    67288369548598 B (Total open files size: 15569256448 B)
 Total dirs:    57673
 Total files:   193390
 Total symlinks:        0 (Files currently being written: 6)
 Total blocks (validated):      662967 (avg. block size 101495805 B) (Total open file blocks (not validated): 116)
 Minimally replicated blocks:   662967 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     2.0007422
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          6
 Number of racks:               2
FSCK ended at Tue Apr 10 08:40:24 BST 2018 in 4021 milliseconds

The filesystem under path '/' is HEALTHY