ISSUE: Requested data length 76026807 is longer than maximum configured RPC length 67108864.

Explorer

I did see a similar issue reported here by another user, but the problem he described isn't the one I have (AFAIK), nor did I understand from that thread how to carefully track down the source of the problem.

I had to remove some IPs for privacy.

2018-04-10 08:44:48,583 WARN org.apache.hadoop.ipc.Server: Requested data length 76026807 is longer than maximum configured RPC length 67108864. RPC came from 10.3.108.191
2018-04-10 08:44:48,583 INFO org.apache.hadoop.ipc.Server: Socket Reader #1 for port 8020: readAndProcess from client <REMOVED> threw exception [java.io.IOException: Requested data length 76026807 is longer than maximum configured RPC length 67108864. RPC came from <REMOVED>]
java.io.IOException: Requested data length 76026807 is longer than maximum configured RPC length 67108864. RPC came from <REMOVED>
        at org.apache.hadoop.ipc.Server$Connection.checkDataLength(Server.java:1476)
        at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1538)
        at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:774)
        at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:647)
        at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:618)

  • I don't really understand the source of the "Requested data length" problem. Which "request"? What is this "request"? What does it contain? Why is it 76026807 bytes?
  • I'm using hadoop-2.6.0+cdh5.7.0+1280-1.cdh5.7.0.p0.92.el6.x86_64


[root@<REMOVED> ~]# hdfs fsck /
Connecting to namenode via http://<REMOVED>:<REMOVED>
FSCK started by root (auth:SIMPLE) from /<REMOVED> for path / at Tue Apr 10 08:40:20 BST 2018

Status: HEALTHY
Total size: 67288369548598 B (Total open files size: 15569256448 B)
Total dirs: 57673
Total files: 193390
Total symlinks: 0 (Files currently being written: 6)
Total blocks (validated): 662967 (avg. block size 101495805 B) (Total open file blocks (not validated): 116)
Minimally replicated blocks: 662967 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 2.0007422
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 6
Number of racks: 2
FSCK ended at Tue Apr 10 08:40:24 BST 2018 in 4021 milliseconds

The filesystem under path '/' is HEALTHY
1 ACCEPTED SOLUTION

Mentor
> Which "request"? What is this "request"? What does it contain? Why is it 76026807?

A request in this context is a basic call from an HDFS client. A few example requests from a client would be "list this directory", "create a file", and so on.
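
For a concrete illustration, each HDFS shell command below turns into one or more such RPC requests against the NameNode; the command-to-RPC mapping in the comments is the usual one for Hadoop 2.x clients, shown as a rough guide rather than an exact trace:

hdfs dfs -ls /some/dir         # issues a getListing RPC
hdfs dfs -mkdir /some/dir/x    # issues a mkdirs RPC
hdfs dfs -touchz /some/dir/f   # issues create (and complete) RPCs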

The request type is not determined at the point where the error is thrown, because the 64 MB length-limit safety check fires before the server deserializes/interprets the request.
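
For reference, the 67108864 in the message is Hadoop's default value for the ipc.maximum.data.length property, i.e. 64 MiB, and the rejected request announced a payload roughly 8.5 MiB over that limit; the arithmetic below simply restates the numbers from your log:

echo $((64 * 1024 * 1024))      # 67108864 -- the configured RPC length limit
echo $((76026807 - 67108864))   # 8917943  -- how far this request exceeded it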

As to what it contains, that's not clear from the error. What is definitely odd is its size. Most client requests carry very simple attributes, such as a path, a list of locations, a flag, and so on. Nothing in a regular client's request should be this large, unless perhaps the client in question is using enormously long paths.

In the other cases where I've seen this message, the port receiving the request is usually 8022, which is where DataNodes send their heartbeats and block reports. Those sorts of 'requests' can be large, depending on the number of blocks or other datasets being sent.
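
If the oversized payloads ever turn out to be legitimate traffic (very large block reports, for instance), the usual mitigation is to raise the server-side limit via ipc.maximum.data.length in core-site.xml and restart the NameNode; a sketch, with 128 MiB as a purely illustrative value:

# core-site.xml addition (in CDH, e.g. via the HDFS service's
# core-site.xml safety valve in Cloudera Manager):
#   <property>
#     <name>ipc.maximum.data.length</name>
#     <value>134217728</value>
#   </property>
# After the restart, confirm the effective value:
hdfs getconf -confKey ipc.maximum.data.length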

Assuming you are running a configuration that uses both 8020 and 8022, it is quite odd to observe this error on 8020. It could be a rogue client, such as a network scanner sending bogus or specially crafted data for vulnerability checks (in which case this is normal to see, and the server is acting as designed in rejecting such requests).

You can find out more by identifying the program running on the client IP shown in the error and seeing what kind of API calls it's trying to make (or whether it is even a valid client).
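
A quick way to start, run on the machine behind the offending IP (the 10.3.108.191 from the WARN line; <NAMENODE> is a placeholder for your NameNode host):

netstat -tnp | grep ':8020'        # which local process holds connections to port 8020
lsof -nP -iTCP@<NAMENODE>:8020     # the same question, asked via lsof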


2 REPLIES


Explorer
I ended up restarting the offending DataNodes on the IPs that appeared in the log, and the warning/info messages went away. I wonder what it was ...