Created 08-13-2019 03:37 PM
I am using HDP-2.6.5.0 with Kafka 1.0.0. I have to process large (16 MB) messages, so I set:
message.max.bytes=18874368
replica.fetch.max.bytes=18874368
socket.request.max.bytes=18874368
from the Ambari/Kafka configs screen and restarted the Kafka services.
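For reference, those three broker-side overrides correspond to the following lines in the broker's server.properties (a sketch using the ~18 MB value above; on HDP these are managed through Ambari rather than by editing the file directly):

```properties
# Largest message the broker will accept from a producer (~18 MB)
message.max.bytes=18874368
# Followers must be able to fetch the largest accepted message
replica.fetch.max.bytes=18874368
# Maximum size of a single socket request to the broker
socket.request.max.bytes=18874368
```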
When I try to send 16M messages:
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <broker-ip>:6667 --topic test < ./big.txt
I still have the same error:
ERROR Error when sending message to topic test with key: null, value: 16777239 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback) org.apache.kafka.common.errors.RecordTooLargeException: The message is 16777327 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
I tried to set max.request.size in the producer.properties file but still get the same error. What am I missing?
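For what it's worth, the producer-side override is a single line in producer.properties (a sketch; the 18874368 value mirrors the broker settings above and must be at least as large as the biggest record being sent):

```properties
# Maximum size of a producer request, in bytes; caps the largest record sent
max.request.size=18874368
```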
Thank you,
Created 08-19-2019 01:28 AM
You keep getting "RecordTooLargeException" even after increasing the properties you listed in your previous comment.
Can you please let us know exactly where you are seeing these exceptions: broker side, producer side, or consumer side?
Also, can you please try specifying the complete path of the "producer.properties" file in the "kafka-console-producer.sh" command line, just to ensure that the correct producer properties file is being used?
Example:
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <broker-ip>:6667 --producer.config /etc/kafka/conf/producer.properties --topic test < ./big.txt
.
Also please verify if this file has the correct value:
# grep 'max.request.size' /etc/kafka/conf/producer.properties
.
Reference Article: https://community.cloudera.com/t5/Community-Articles/Kafka-producer-running-into-multiple-org-apache...
Broker side: "message.max.bytes" - the largest message size the broker will accept from a producer.
"replica.fetch.max.bytes" - the number of bytes to attempt to fetch per partition during replication; it must be at least as large as "message.max.bytes", or the broker could accept messages that followers cannot replicate.
Producer side: "max.request.size" - the maximum size of a request the producer will send; it caps the largest message the producer can publish.
Consumer side: "max.partition.fetch.bytes" - the maximum number of bytes per partition returned by the server; it should be at least as large as "message.max.bytes" so the consumer can read the largest message the broker accepts.
For the consumer side, can you please let us know if you have also increased "max.partition.fetch.bytes"?
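If the consumer side also needs raising, the override is a single line in the consumer's properties file (a sketch; the value should be at least "message.max.bytes" so the largest accepted message can still be fetched):

```properties
# Maximum bytes the server returns per partition per fetch (~18 MB)
max.partition.fetch.bytes=18874368
```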
Created 08-19-2019 11:58 AM
--producer.config did the trick for kafka-console-producer.sh... Or changing "max.request.size" directly in the producer code. I didn't have to modify consumer settings.
Created 08-19-2019 05:26 PM
@lvic4594_ Great to know that the issue is resolved after making the recommended change to pass the producer.config argument explicitly:
--producer.config /etc/kafka/conf/producer.properties
As the issue is resolved, it would be great to mark this thread as Solved, so that other users can quickly find resolved threads/answers.
Created 08-21-2019 06:30 AM
it will be great to mark this thread as Solved
Would be happy to, but I don't see this available in "options"; I can only see "mark as read".
Created 08-22-2019 08:14 AM
@lvic4594_ Marking the solution is easy. The author of the original question will see an Accept as solution button on every reply. Click the button on the reply(s) that solved the issue to mark it as such.
Created 08-22-2019 11:07 AM
Thanks, I didn't realize I wasn't actually logged in at the moment.