Member since: 03-16-2018
Posts: 4
Kudos Received: 0
Solutions: 0
03-22-2018
04:19 AM
Running:
At producer:
./kafka-console-producer.sh --broker-list localhost:9092 --topic girishtp
or
./kafka-console-producer.sh --broker-list localhost:6667 --topic girishtp
At consumer:
./kafka-console-consumer.sh --zookeeper localhost:2181 --topic girishtp --from-beginning --consumer.config /usr/hdp/2.4.3.0-227/kafka/config/consumer.properties --delete-consumer-offsets
Note: Modified [ listeners : PLAINTEXT://xxx.domain:6667 ] (localhost -> xxx.domain)
(attachment: capture.png)
Error log:
[2018-03-21 13:51:52,062] WARN Fetching topic metadata with correlation id 8 for topics [Set(girishtp)] from broker [BrokerEndPoint(0,localhost,6667)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:122)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:77)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$doSend(SyncProducer.scala:76)
at kafka.producer.SyncProducer.send(SyncProducer.scala:121)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
at kafka.producer.async.DefaultEventHandler$anonfun$handle$2.apply$mcV$sp(DefaultEventHandler.scala:79)
at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:79)
at kafka.utils.Logging$class.swallowError(Logging.scala:106)
at kafka.utils.CoreUtils$.swallowError(CoreUtils.scala:51)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:79)
at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
at kafka.producer.async.ProducerSendThread$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
at kafka.producer.async.ProducerSendThread$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
at scala.collection.immutable.Stream.foreach(Stream.scala:547)
at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)
[2018-03-21 13:51:52,063] ERROR fetching topic metadata for topics [Set(girishtp)] from broker [ArrayBuffer(BrokerEndPoint(0,localhost,6667))] failed (kafka.utils.CoreUtils$)
kafka.common.KafkaException: fetching topic metadata for topics [Set(girishtp)] from broker [ArrayBuffer(BrokerEndPoint(0,localhost,6667))] failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:73)
at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
at kafka.producer.async.DefaultEventHandler$anonfun$handle$2.apply$mcV$sp(DefaultEventHandler.scala:79)
at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:79)
at kafka.utils.Logging$class.swallowError(Logging.scala:106)
at kafka.utils.CoreUtils$.swallowError(CoreUtils.scala:51)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:79)
at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
at kafka.producer.async.ProducerSendThread$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
at kafka.producer.async.ProducerSendThread$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
at scala.collection.immutable.Stream.foreach(Stream.scala:547)
at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:122)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:77)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$doSend(SyncProducer.scala:76)
at kafka.producer.SyncProducer.send(SyncProducer.scala:121)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
... 12 more
[2018-03-21 13:51:52,065] ERROR Failed to send requests for topics girishtp with correlation ids in [0,8] (kafka.producer.async.DefaultEventHandler)
[2018-03-21 13:51:52,065] ERROR Error in handling batch of 1 events (kafka.producer.async.ProducerSendThread)
kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:91)
at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
at kafka.producer.async.ProducerSendThread$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
at kafka.producer.async.ProducerSendThread$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
at scala.collection.immutable.Stream.foreach(Stream.scala:547)
at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)
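One thing worth trying (an assumption based on the modified listeners value above, not something confirmed in this thread): once the broker advertises PLAINTEXT://xxx.domain:6667, console clients that still bootstrap against localhost can fail the metadata fetch exactly like this. A minimal sketch, with xxx.domain standing in for the real hostname:
# Assumed fix: point the console producer at the advertised listener host instead of localhost.
./kafka-console-producer.sh --broker-list xxx.domain:6667 --topic girishtp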
Labels:
- Apache Ambari
- Apache Kafka
03-19-2018
10:09 AM
Thanks, Harsh. I have set the Topic Whitelist to { '|' } and Kafka started working.
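For reference, the same requirement expressed on the MirrorMaker command line (a minimal sketch; the '.*' whitelist is an illustrative catch-all, not the exact value used above, and the properties file names are taken from the process log below):
# MirrorMaker with the new consumer requires an explicit topic whitelist.
./kafka-mirror-maker.sh --new.consumer --whitelist '.*' --consumer.config mirror_maker_consumers.properties --producer.config mirror_maker_producers.properties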
03-16-2018
08:54 AM
Steps followed:
1. Activated the CDH5 and Kafka parcels from Hosts -> Parcels.
2. Ran the commands below:
$ sudo yum clean all
$ sudo yum install kafka
$ sudo yum install kafka-server
3. Added the Kafka service from the Cloudera QuickStart VM.
4. Started the service; the Kafka broker started, but Kafka MirrorMaker did not.
Error log:
Supervisor returned FATAL. Please check the role log file, stderr, or stdout.
Kafka MirrorMaker (quickstart) Mar 16, 8:35:17 AM 15.39s
$> csd/csd.sh ["start"]
abort.on.send.failure: true
offset.commit.interval.ms: 60000
consumer.rebalance.listener:
consumer.rebalance.listener.args:
message.handler:
message.handler.args:
SOURCE_SECURITY_PROTOCOL: PLAINTEXT
DESTINATION_SECURITY_PROTOCOL: PLAINTEXT
KAFKA_MIRROR_MAKER_PRINCIPAL:
SOURCE_SSL_CLIENT_AUTH: false
DESTINATION_SSL_CLIENT_AUTH: false
Kafka version found: 0.11.0-kafka3.0.0
Fri Mar 16 08:35:29 PDT 2018
JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
Using -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/kafka_kafka-KAFKA_MIRROR_MAKER-6f7e9fa2e076f5332a81ffe11109ab36_pid23448.hprof -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh as CSD_JAVA_OPTS
Using /var/run/cloudera-scm-agent/process/125-kafka-KAFKA_MIRROR_MAKER as conf dir
Using scripts/mirrormaker_control.sh as process script
CONF_DIR=/var/run/cloudera-scm-agent/process/125-kafka-KAFKA_MIRROR_MAKER
CMF_CONF_DIR=/etc/cloudera-scm-agent
Date: Fri Mar 16 08:35:29 PDT 2018
Host: quickstart.cloudera
Pwd: /var/run/cloudera-scm-agent/process/125-kafka-KAFKA_MIRROR_MAKER
CONF_DIR: /var/run/cloudera-scm-agent/process/125-kafka-KAFKA_MIRROR_MAKER
KAFKA_HOME: /opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/lib/kafka
Zookeeper Quorum: quickstart.cloudera:2181
Zookeeper Chroot:
no.data.loss: true
whitelist:
blacklist:
num.producers: 1
num.streams: 1
queue.size: 10000
queue.byte.size: 100000000
JMX_PORT: 9394
MM_HEAP_SIZE: 256
MM_JAVA_OPTS: -server -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true
abort.on.send.failure: true
offset.commit.interval.ms: 60000
consumer.rebalance.listener:
consumer.rebalance.listener.args:
message.handler:
message.handler.args:
SOURCE_SECURITY_PROTOCOL: PLAINTEXT
DESTINATION_SECURITY_PROTOCOL: PLAINTEXT
KAFKA_MIRROR_MAKER_PRINCIPAL:
SOURCE_SSL_CLIENT_AUTH: false
DESTINATION_SSL_CLIENT_AUTH: false
Kafka version found: 0.11.0-kafka3.0.0
--------------------------------------------------------------------------------------------------
Role log:
7:41:33.111 PM INFO MirrorMaker$ Starting mirror maker
7:41:33.181 PM ERROR MirrorMaker$ whitelist must be specified when using new consumer in mirror maker.
----------------------------------------------------------------------------------------------------
STDERR log:
++ pwd
+ export LOG_DIR=/var/run/cloudera-scm-agent/process/125-kafka-KAFKA_MIRROR_MAKER
+ LOG_DIR=/var/run/cloudera-scm-agent/process/125-kafka-KAFKA_MIRROR_MAKER
+ '[' -z '' ']'
+ export KAFKA_HEAP_OPTS=-Xmx256M
+ KAFKA_HEAP_OPTS=-Xmx256M
+ '[' -z '' ']'
+ export 'KAFKA_JVM_PERFORMANCE_OPTS=-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/kafka_kafka-KAFKA_MIRROR_MAKER-6f7e9fa2e076f5332a81ffe11109ab36_pid23448.hprof -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -server -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true'
+ KAFKA_JVM_PERFORMANCE_OPTS='-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/kafka_kafka-KAFKA_MIRROR_MAKER-6f7e9fa2e076f5332a81ffe11109ab36_pid23448.hprof -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -server -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true'
+ [[ 3 < 2 ]]
+ exec /opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/lib/kafka/bin/kafka-mirror-maker.sh --abort.on.send.failure true --new.consumer --num.streams 1 --offset.commit.interval.ms 60000 --consumer.config /var/run/cloudera-scm-agent/process/125-kafka-KAFKA_MIRROR_MAKER/mirror_maker_consumers.properties --producer.config /var/run/cloudera-scm-agent/process/125-kafka-KAFKA_MIRROR_MAKER/mirror_maker_producers.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/lib/kafka/libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/lib/kafka/libs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
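As a quick sanity check before restarting MirrorMaker (illustrative only, using the parcel path and ZooKeeper quorum shown in the stdout above), listing topics confirms that ZooKeeper is reachable and shows which topics are available to mirror:
# Illustrative check against the quickstart ZooKeeper from the log above.
/opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/lib/kafka/bin/kafka-topics.sh --zookeeper quickstart.cloudera:2181 --list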