SYMPTOM:

The user frequently sees the following exceptions:

org.apache.kafka.common.errors.RecordTooLargeException: The request included a message larger than the max message size the server will accept. 

org.apache.kafka.common.errors.RecordTooLargeException: The message is 5745799 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration. 

ROOT CAUSE:

These exceptions are caused by the maximum message size settings at the broker and producer levels.

Broker side: message.max.bytes - the largest message size the broker will accept from a producer.

Producer side: max.request.size - the maximum size of a request (and therefore of a single message) the producer will attempt to send.

WORKAROUND:

N/A

RESOLUTION:

message.max.bytes defaults to roughly 1 MB (1000012 bytes) in Kafka 0.10.0. If you need to publish larger messages, increase this setting on the brokers and then restart them.
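As a rough sketch of the broker-side change, assuming a target limit of about 6 MB (large enough for the 5745799-byte message in the error above), the relevant entries in each broker's server.properties might look like this; the value 6291456 is only an example, not a recommendation:

# Example server.properties entries on each broker (restart required after the change).
# 6291456 (~6 MB) is an illustrative value, not a recommendation.
message.max.bytes=6291456
# Replicas must also be able to fetch messages of this size, so
# replica.fetch.max.bytes is typically raised to at least the same value.
replica.fetch.max.bytes=6291456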

To avoid the exception on the producer side, increase max.request.size so that the producer is allowed to send the larger message.
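To make the producer-side change concrete, here is a minimal sketch using the Java producer API. The bootstrap server, topic name, class name, and the 6291456-byte limit are placeholders, and the payload is only an illustration; the limit must not exceed what the brokers accept via message.max.bytes.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class LargeMessageProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // broker1:9092 and my-topic are placeholders; use your own values.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Raise the producer-side limit so requests up to ~6 MB are allowed.
        // Keep this at or below the brokers' message.max.bytes.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, "6291456");

        // Hypothetical large payload (about 5 MB of data) just for illustration.
        String largePayload = new String(new char[5 * 1024 * 1024]);

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("my-topic", "key", largePayload));
        producer.close();
    }
}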

PS: If you get a RecordTooLargeException on the consumer side, increase max.partition.fetch.bytes, which allows the consumer to fetch larger messages.
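A similar minimal sketch for the consumer side, again with placeholder bootstrap servers, group id, topic name, and class name; the 6291456-byte value is just an example and should be at least as large as the biggest message the brokers accept.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class LargeMessageConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // broker1:9092, large-message-group, and my-topic are placeholders.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "large-message-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Allow fetching messages up to ~6 MB per partition; keep this at least
        // as large as the biggest message the brokers will accept.
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, "6291456");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic"));
        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> record : records) {
            System.out.println("Received record at offset " + record.offset()
                    + " with a value of " + record.value().length() + " characters");
        }
        consumer.close();
    }
}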

Comments
@Rajkumar Singh

Can we override these values in the PublishKafka processor instead of making the change at the broker/producer level? If not, then why does the "PublishKafka_0_10 1.5.0.3.1.0.0-564" processor in NiFi have a "Max Request Size" field that allows us to modify it? The default value of this field is 1 MB. Thanks!