
Significance and impact of Max Request Size on PublishKafka performance

Expert Contributor

Hi All,

Thanks a lot to this awesome community.

I was wondering about the significance of the Max Request Size field in the PublishKafka processor, shown in the attached image.

To improve performance I have set "Message Demarcator" to a new line (Shift+Enter).

Also, as per my understanding, the custom property added in the PublishKafka processor (see attached image) does the same thing.

A single event is no more than 4 KB when it is pushed from the source to the ListenTCP processor. After ListenTCP we batch the events and then merge them into 128 MB flow files, the block size of our HDFS cluster. So should my Max Request Size be 128 MB, 4 KB, or does it not depend on either?

[Image: PublishKafka processor configuration]
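
For reference, here is a rough sketch of the sizing arithmetic I am reasoning from (assuming each newline-demarcated event becomes its own Kafka record):

public class SizingSketch {
    public static void main(String[] args) {
        long eventBytes = 4L * 1024;                   // ~4 KB per event
        long mergedFlowFileBytes = 128L * 1024 * 1024; // 128 MB merged flow file (HDFS block size)
        long defaultMaxRequestBytes = 1024L * 1024;    // Kafka default max.request.size (1 MB)

        // 32768 events end up in one merged flow file...
        System.out.println("events per merged flow file: " + mergedFlowFileBytes / eventBytes);
        // ...but each individual event is far below the default request cap.
        System.out.println("one event fits the default cap: " + (eventBytes <= defaultMaxRequestBytes));
    }
}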

Thanks

Dheeru

1 ACCEPTED SOLUTION

Master Guru

Here is the description of the Kafka properties from their source code...

max.request.size

The maximum size of a request. This is also effectively a cap on the maximum record size. Note that the server has its own cap on record size which may be different from this. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.

buffer.memory

The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will either block or throw an exception based on the preference specified by block.on.buffer.full.

This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests.
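
To make this concrete, here is a minimal plain-Java producer sketch (not NiFi; the broker address and topic name are placeholders) that sets both properties explicitly to their Kafka defaults:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerSizingDefaults {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        // Kafka defaults, set explicitly for illustration.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1048576);  // max.request.size: 1 MB
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432L);   // buffer.memory: 32 MB

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "a ~4 KB event would go here"));
        }
    }
}

In NiFi terms, the Max Request Size property on PublishKafka is what feeds max.request.size on the underlying producer.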


For your case I don't think you really need to change either of these values from the defaults since you are sending 4 KB messages. Usually you would increase max.request.size if you have a single message that is larger than 1 MB.
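
If you ever do publish records over that cap without raising it, the producer rejects them with a RecordTooLargeException; a minimal sketch of that failure mode (again with placeholder broker and topic):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OversizedRecordSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArraySerializer");

        byte[] payload = new byte[2 * 1024 * 1024]; // 2 MB, over the 1 MB default cap

        try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", payload)).get();
        } catch (Exception e) {
            // Depending on the client version this surfaces synchronously from send()
            // or through the returned Future; either way the cause is
            // org.apache.kafka.common.errors.RecordTooLargeException.
            System.err.println("Rejected: " + e);
        }
    }
}

And per the quoted note above, the broker keeps its own record-size cap (message.max.bytes on the server side), so raising only the producer value is not enough.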


2 REPLIES

Expert Contributor

@Bryan Bende Thanks a lot. Could you please help me clear my understanding on this? Appreciate it.
