Member since: 11-23-2021
Posts: 3
Kudos Received: 0
Solutions: 0
11-25-2021
08:26 PM
Hello. We are continuously seeing the error message below. It looks like a timeout occurs starting from SMM's API requests to CM. Please provide overall guidance.

TimePeriod : LAST_ONE_WEEK, Error while fetching cluster metrics : [
  MetricDescriptor{metricName=MetricName(name=sum(kafka_bytes_fetched_by_partition_rate), tags=[partition, serviceName, topic], valueType=LONG, singlePointOfValue=true), queryTags={serviceName=kafka, topic=%, partition=%}, aggrFunction=SUM, postProcessFunction=null, valueType=LONG},
  MetricDescriptor{metricName=MetricName(name=sum(kafka_messages_received_by_partition_rate), tags=[partition, serviceName, topic], valueType=LONG, singlePointOfValue=true), queryTags={serviceName=kafka, topic=%, partition=%}, aggrFunction=SUM, postProcessFunction=null, valueType=LONG},
  MetricDescriptor{metricName=MetricName(name=sum(kafka_bytes_received_by_partition_rate), tags=[partition, serviceName, topic], valueType=LONG, singlePointOfValue=true), queryTags={serviceName=kafka, topic=%, partition=%}, aggrFunction=SUM, postProcessFunction=null, valueType=LONG}
]
com.hortonworks.smm.kafka.services.common.errors.InvalidCMApiResponseException: Invalid response returned CM API: http://icahubkafka005.datahub.skhynix.com:7180/api/v32/timeseries, response.status: 500, response.message: { "message" : "java.util.concurrent.TimeoutException" }
    at com.hortonworks.smm.kafka.services.metric.cm.CMMetricsFetcher.cmApiCall(CMMetricsFetcher.java:389)
    at com.hortonworks.smm.kafka.services.metric.cm.CMMetricsFetcher.cmApiPost(CMMetricsFetcher.java:368)
    at com.hortonworks.smm.kafka.services.metric.cm.CMMetricsFetcher.getMetricsFromCmApi(CMMetricsFetcher.java:479)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at java.util.HashMap$EntrySpliterator.forEachRemaining(HashMap.java:1699)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
    at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747)
    at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721)
    at java.util.stream.AbstractTask.compute(AbstractTask.java:316)
    at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
    at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:401)
    at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:734)
    at java.util.stream.ReduceOps$ReduceOp.evaluateParallel(ReduceOps.java:714)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
    at java.util.stream.ReferencePipeline.reduce(ReferencePipeline.java:474)
    at com.hortonworks.smm.kafka.services.metric.cm.CMMetricsFetcher.queryMetrics(CMMetricsFetcher.java:464)
    at com.hortonworks.smm.kafka.services.metric.cm.CMMetricsFetcher.getClusterMetrics(CMMetricsFetcher.java:184)
    at com.hortonworks.smm.kafka.services.metric.cache.MetricsCache$RefreshMetricsCacheTask.lambda$null$21(MetricsCache.java:623)
    at com.hortonworks.smm.kafka.services.metric.cache.MetricsCache$RefreshMetricsCacheTask.fetchMetrics(MetricsCache.java:575)
    at com.hortonworks.smm.kafka.services.metric.cache.MetricsCache$RefreshMetricsCacheTask.lambda$refreshClusterMetrics$22(MetricsCache.java:622)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
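The stack trace shows SMM receiving an HTTP 500 (wrapping a `TimeoutException`) from Cloudera Manager's `/api/v32/timeseries` endpoint for a one-week query window. One way to isolate whether CM itself is slow is to replay a comparable timeseries query directly. The sketch below only builds such a request URL; the `tsquery` string and time window are illustrative assumptions, not taken from SMM's actual query:

```python
from urllib.parse import urlencode

# Base URL taken from the error message above.
CM_BASE = "http://icahubkafka005.datahub.skhynix.com:7180/api/v32"

def build_timeseries_url(tsquery: str, from_ts: str, to_ts: str) -> str:
    """Build a CM timeseries endpoint URL for a given query and time window."""
    params = urlencode({"query": tsquery, "from": from_ts, "to": to_ts})
    return f"{CM_BASE}/timeseries?{params}"

# Hypothetical one-week window mirroring TimePeriod: LAST_ONE_WEEK.
url = build_timeseries_url(
    "select kafka_bytes_fetched_by_partition_rate where serviceName = kafka",
    "2021-11-18T00:00:00",
    "2021-11-25T00:00:00",
)
print(url)
```

Replaying the resulting URL (e.g. with `curl` and CM admin credentials) while timing the response can show whether CM needs longer than the client-side timeout to aggregate a week of per-partition metrics; if so, narrowing the window or raising the relevant timeout on the CM/SMM side would be the direction to investigate.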
11-23-2021
04:46 PM
Hello. Streams Replication Manager is emitting the error log below. Please advise how max.request.size should be configured.

ERROR WorkerSourceTask WorkerSourceTask{id=MirrorSourceConnector-3} failed to send record to m16.ftl.adapter.ic.m16.edp.edes.hub.cmndcol:
org.apache.kafka.common.errors.RecordTooLargeException: The message is 1270478 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.
ERROR WorkerTask WorkerSourceTask{id=MirrorSourceConnector-3} Task threw an uncaught and unrecoverable exception
org.apache.kafka.connect.errors.ConnectException: Unrecoverable exception from producer send callback
    at org.apache.kafka.connect.runtime.WorkerSourceTask.maybeThrowProducerSendException(WorkerSourceTask.java:263)
    at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:232)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1270478 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.
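The exception says a serialized record of 1,270,478 bytes exceeded the producer's default `max.request.size` of 1,048,576 bytes (1 MiB), so the SRM producer's limit must be raised above the largest record being replicated. A hedged sketch of the relevant properties follows; the exact property placement varies by SRM version (in Cloudera deployments this is typically added to SRM's replication configuration), and 2097152 (2 MiB) is an illustrative value, not a recommendation:

```properties
# Raise the Connect-managed producer's request size limit above the
# observed record size of 1270478 bytes (illustrative value: 2 MiB).
producer.override.max.request.size=2097152
```

Note that the producer limit alone may not be sufficient: the target cluster must also accept messages of that size (broker-side `message.max.bytes` and/or the topic-level `max.message.bytes`), so those settings should be checked against the same value.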
11-23-2021
04:38 PM
Hello. We are mirroring via Streams Replication Manager as shown below:
- Number of topics: 95
- Avg throughput: 319 B/s
- Avg replication latency: 19.6 s
- Status: WARNING
The status stays at WARNING as shown above; please advise on the cause and how to resolve it.
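A WARNING status in this kind of monitoring UI is typically driven by the replication latency crossing a threshold. The sketch below is only a hypothetical illustration of such a latency-based classification; the 10 s / 30 s thresholds are assumptions for demonstration, not SMM's actual values:

```python
# Hypothetical latency-based health classification for a replication flow.
# Thresholds are illustrative assumptions, not documented SMM behavior.
def replication_status(latency_seconds: float,
                       warn_at: float = 10.0,
                       error_at: float = 30.0) -> str:
    """Classify replication health from average replication latency."""
    if latency_seconds >= error_at:
        return "ERROR"
    if latency_seconds >= warn_at:
        return "WARNING"
    return "HEALTHY"

# The flow above reports an average replication latency of 19.6 s:
print(replication_status(19.6))  # -> WARNING
```

Under this reading, a 19.6 s average replication latency would be the figure to investigate (e.g. consumer lag on the SRM tasks, network throughput between clusters, or under-provisioned SRM driver tasks), since the other metrics shown look unremarkable.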
Labels: Apache Kafka