
RecordTooLargeException on large messages in Kafka?

Explorer

I am using HDP-2.6.5.0 with Kafka 1.0.0. I have to process large (16 MB) messages, so I set

message.max.bytes=18874368
replica.fetch.max.bytes=18874368
socket.request.max.bytes=18874368

from the Ambari Kafka configs screen, and restarted the Kafka services.
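To confirm the restart actually picked up the new values, I checked the broker config file (path per the HDP layout used elsewhere in this thread; verify on your own cluster):

# grep -E 'message.max.bytes|replica.fetch.max.bytes|socket.request.max.bytes' /etc/kafka/conf/server.properties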

When I try to send a 16 MB message:

/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <broker-ip>:6667 --topic test < ./big.txt

I still get the same error:

ERROR Error when sending message to topic test with key: null, value: 16777239 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

org.apache.kafka.common.errors.RecordTooLargeException: The message is 16777327 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.

I tried setting max.request.size in the producer.properties file, but I still get the same error. What am I missing?

Thank you,




1 ACCEPTED SOLUTION

Master Mentor

@lvic4594_ 

You are still getting "RecordTooLargeException" even after increasing the properties you listed in your post.

 

Can you please let us know exactly where you are seeing those exceptions: on the broker side, the producer side, or the consumer side?


Also, can you please try specifying the complete path of the "producer.properties" file in the "kafka-console-producer.sh" command line, just to ensure the correct producer properties file is being used?

Example:

/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <broker-ip>:6667 --producer.config /etc/kafka/conf/producer.properties --topic test < ./big.txt

Also, please verify that this file has the correct value:

# grep 'max.request.size' /etc/kafka/conf/producer.properties
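For example, assuming the same 18 MB limit used for your broker-side settings, the grep should show an entry like this:

max.request.size=18874368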


Reference Article:  https://community.cloudera.com/t5/Community-Articles/Kafka-producer-running-into-multiple-org-apache...

Broker side: "message.max.bytes" - the largest message size the broker will accept from a producer.
                     "replica.fetch.max.bytes" - the number of bytes of messages to attempt to fetch for each partition; it must be at least as large as "message.max.bytes" so replicas can copy the biggest messages.

Producer side: "max.request.size" - the upper limit on the size of a request the producer will send; it must be raised to send larger messages.

Consumer side: "max.partition.fetch.bytes" - the maximum number of bytes per partition returned by the server; increase it to consume big messages. It should be larger than "message.max.bytes" so the consumer can read the largest message the broker accepts.
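As a quick check on the consumer side, you can pass that property to the console consumer directly on the command line (the 18 MB value below is an assumption, mirroring your broker settings):

/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server <broker-ip>:6667 --topic test --from-beginning --consumer-property max.partition.fetch.bytes=18874368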

 


For the consumer side, can you please let us know if you have also increased "max.partition.fetch.bytes"?



6 REPLIES

Explorer

--producer.config did the trick for kafka-console-producer.sh... Or changing "max.request.size" directly in the producer code, as sketched below. I didn't have to modify any consumer settings.
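For anyone who finds this later, setting it directly in producer code looks roughly like this (a minimal sketch; the broker address and the 18 MB value are placeholders matching the settings above):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BigMessageProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "<broker-ip>:6667"); // placeholder broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        // Raise the producer-side request limit to match the broker's message.max.bytes
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, "18874368");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        String bigMessage = new String(new char[16 * 1024 * 1024]); // dummy 16 MB payload
        producer.send(new ProducerRecord<>("test", bigMessage));
        producer.close();
    }
}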

Master Mentor

@lvic4594_ Great to know that the issue is resolved after making the recommended change to pass the producer.config argument explicitly:

--producer.config /etc/kafka/conf/producer.properties

As the issue is resolved, it would be great to mark this thread as Solved, so that other users can quickly find the resolved threads/answers.

Explorer

it would be great to mark this thread as Solved

Would be happy to, but I don't see this available in "options" - I can only see "mark as read".

Community Manager

@lvic4594_ Marking the solution is easy. The author of the original question will see an Accept as Solution button on every reply. Click the button on the reply (or replies) that solved the issue to mark it as such.

 

[Screenshot: the "Accept as Solution" button shown on a reply]

 

 


Cy Jervis, Manager, Community Program
Was your question answered? Make sure to mark the answer as the accepted solution.
If you find a reply useful, say thanks by clicking on the thumbs up button.

avatar
Explorer

Thanks, I didn't realize I wasn't actually logged in at the moment.