New Contributor
Posts: 1
Registered: ‎05-17-2018

adding a Kafka service fails


Hi,

We have a Cloudera Express 5.11.0 cluster and I'm trying to add Kafka 3.0 as a service in Cloudera Manager, but I'm getting an error that it failed to start the broker on all nodes, and I don't see any clear error message. I downloaded the parcel and distributed and activated it successfully.

 

I have a few questions:

1) What value should I set for the ZooKeeper Root? Is it something I should decide myself, or does it depend on the ZooKeeper installation? I saw that the most common value is /kafka, so I set it to /kafka (see the snippet after question 3 for what I expect that to produce).

2) Our ZooKeeper runs standalone and we got an alert about maximum request latency; could that be related?

3) During the 4th step of adding Kafka as a service, it fails to start the broker on the nodes and I'm not sure what the actual error is. I saw a few messages about OutOfMemory, but I'm not sure whether those are just health checks or real errors.
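
For reference, my understanding (and it is only an assumption on my part) is that the ZooKeeper Root simply becomes a chroot suffix on the broker's zookeeper.connect string, so with /kafka I would expect something like:

zookeeper.connect=VMClouderaMasterDev01:2181/kafka

which does seem to match the "Final Zookeeper Quorum" line in the stdout below, so the setting at least appears to be picked up.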

 

I'll add the last lines of the logs I found:

stdout:

AUTHENTICATE_ZOOKEEPER_CONNECTION: true
SUPER_USERS: kafka
Kafka version found: 0.11.0-kafka3.0.0
Sentry version found: 1.5.1-cdh5.11.0
ZK_PRINCIPAL_NAME: zookeeper
Final Zookeeper Quorum is VMClouderaMasterDev01:2181/kafka
security.inter.broker.protocol inferred as PLAINTEXT
LISTENERS=listeners=PLAINTEXT://VMClouderaWorkerDev03:9092,
java.lang.OutOfMemoryError: Java heap space
Dumping heap to /tmp/kafka_kafka-KAFKA_BROKER-933a1dc0c29ca08ffe475da27d5b13d4_pid113208.hprof ...
Heap dump file created [12122526 bytes in 0.086 secs]
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="/usr/lib64/cmf/service/common/killparent.sh"
#   Executing /bin/sh -c "/usr/lib64/cmf/service/common/killparent.sh"...

stderr:

+ export 'KAFKA_JVM_PERFORMANCE_OPTS=-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/kafka_kafka-KAFKA_BROKER-933a1dc0c29ca08ffe475da27d5b13d4_pid113208.hprof -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -server -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true'
+ KAFKA_JVM_PERFORMANCE_OPTS='-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/kafka_kafka-KAFKA_BROKER-933a1dc0c29ca08ffe475da27d5b13d4_pid113208.hprof -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -server -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true'
+ [[ false == \t\r\u\e ]]
+ exec /opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/lib/kafka/bin/kafka-server-start.sh /var/run/cloudera-scm-agent/process/1177-kafka-KAFKA_BROKER/kafka.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/lib/kafka/libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/lib/kafka/libs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
+ grep -q OnOutOfMemoryError /proc/113208/cmdline
+ RET=0
+ '[' 0 -eq 0 ']'
+ TARGET=113208
++ date
+ echo Thu May 17 10:36:08 CDT 2018
+ kill -9 113208

/var/log/kafka/*.log:

50.1.22:2181, initiating session
2018-05-17 10:36:08,028 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server VMClouderaMasterDev01/10.150.1.22:2181, sessionid = 0x1626c7087e729cb, negotiated timeout = 6000
2018-05-17 10:36:08,028 INFO org.I0Itec.zkclient.ZkClient: zookeeper state changed (SyncConnected)
2018-05-17 10:36:08,183 INFO kafka.server.KafkaServer: Cluster ID = cM_4kCm6TZWxttCAXDo4GQ
2018-05-17 10:36:08,185 WARN kafka.server.BrokerMetadataCheckpoint: No meta.properties file under dir /var/local/kafka/data/meta.properties
2018-05-17 10:36:08,222 INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper: [ThrottledRequestReaper-Fetch]: Starting
2018-05-17 10:36:08,224 INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper: [ThrottledRequestReaper-Produce]: Starting
2018-05-17 10:36:08,226 INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper: [ThrottledRequestReaper-Request]: Starting
2018-05-17 10:36:08,279 INFO kafka.log.LogManager: Loading logs.
2018-05-17 10:36:08,287 INFO kafka.log.LogManager: Logs loading complete in 8 ms.

 

Any ideas? Thanks!

Cloudera Employee
Posts: 9
Registered: ‎06-21-2016

Re: adding a Kafka service fails

Some questions regarding the out-of-memory exception:
- What values are set for broker_max_heap_size and BROKER_HEAP_SIZE in /run/cloudera-scm-agent/process/<GENERATED_ID>-kafka-KAFKA_BROKER/proc.json (a quick way to check is sketched below)? Do you have sufficient physical memory on the broker hosts?
- Can you increase the Java Heap Size of Broker (broker_max_heap_size) and try to restart the Kafka service?
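
As a rough sketch of how to check (this assumes the agent writes the heap settings into proc.json as described above; the generated process directory name will differ on your cluster):

# pretty-print the broker's proc.json and look for heap-related settings
python -m json.tool /run/cloudera-scm-agent/process/<GENERATED_ID>-kafka-KAFKA_BROKER/proc.json | grep -i heap
# confirm the host has enough free physical memory to back a larger heap
free -m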
