Configuration question about a Streams Replication Manager error


Hello.

 

Streams Replication Manager is producing the following error log.

Could you advise how max.request.size should be configured?

 

ERROR WorkerSourceTask
WorkerSourceTask{id=MirrorSourceConnector-3} failed to send record to m16.ftl.adapter.ic.m16.edp.edes.hub.cmndcol:
org.apache.kafka.common.errors.RecordTooLargeException: The message is 1270478 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.
ERROR WorkerTask
WorkerSourceTask{id=MirrorSourceConnector-3} Task threw an uncaught and unrecoverable exception
org.apache.kafka.connect.errors.ConnectException: Unrecoverable exception from producer send callback
at org.apache.kafka.connect.runtime.WorkerSourceTask.maybeThrowProducerSendException(WorkerSourceTask.java:263)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:232)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1270478 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.
1 ACCEPTED SOLUTION

Expert Contributor

Hi @jaeseung 

 

The client configurations have to be passed using the cluster-alias replication prefix:

 

for consumer configs: primary->secondary.consumer.<config>
for producer configs: primary->secondary.producer.override.<config>

 

Please try the following under the SRM configs:

<source>-><target>.producer.override.max.request.size=<desired value>

If that doesn't work, use:

<source>-><target>.producer.max.request.size=<desired value>
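For example, assuming the replication flow uses the aliases `primary` and `secondary` (substitute your actual cluster aliases) and you want to allow records up to 2 MB — comfortably above the 1,270,478-byte record that failed against the 1,048,576-byte default in the log above — the property might look like:

```properties
# Illustrative values only: raise the MirrorSourceConnector producer's
# max.request.size for the primary->secondary flow from the 1 MB default to 2 MB.
primary->secondary.producer.override.max.request.size=2097152
```

Whatever value you choose must exceed the serialized size of your largest record; note that the target cluster's broker-side `message.max.bytes` (or the target topic's `max.message.bytes`) may also need to be raised to accept messages of that size.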

 

