Support Questions

adding a Kafka service fails

New Contributor


We have a Cloudera Express 5.11.0 cluster and I'm trying to add Kafka 3.0 as a service in Cloudera Manager, but I'm getting an error that it failed to start the broker on all nodes, and I don't see any error details. I downloaded the parcel and distributed and activated it successfully.


I have a few questions:

1) What value should I set for the ZooKeeper Root? Is it something I should decide, or does it depend on the ZooKeeper installation? I saw that /kafka is the most common value, so I set it to /kafka.

2) Our ZooKeeper runs standalone and raised an alert about maximum request latency; could that be related?

3) During the 4th step of adding Kafka as a service, it fails while starting the brokers on the nodes, and I'm not sure what the error is. I saw a few messages about OutOfMemory, but I'm not sure whether those are checks or errors.
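On question 1: as far as I understand, the ZooKeeper Root is a chroot path that gets appended to the quorum string, so it is your choice rather than something dictated by the ZooKeeper installation; /kafka is just the common convention, and the broker typically creates the path on first start if it is missing. A minimal sketch of how the value composes into the connect string the broker logs (the host name here is taken from the stdout log below):

```shell
# How the "ZooKeeper Root" composes into the broker's ZooKeeper connect string.
QUORUM="VMClouderaMasterDev01:2181"  # your ZooKeeper ensemble (host:port[,host:port...])
CHROOT="/kafka"                      # the "ZooKeeper Root" value you pick
ZK_CONNECT="${QUORUM}${CHROOT}"
echo "$ZK_CONNECT"
```

This matches the "Final Zookeeper Quorum is VMClouderaMasterDev01:2181/kafka" line in the stdout log, which suggests the chroot setting itself was picked up correctly.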


I'll add the last lines of the logs I found:

stdout:

Kafka version found: 0.11.0-kafka3.0.0
Sentry version found: 1.5.1-cdh5.11.0
Final Zookeeper Quorum is VMClouderaMasterDev01:2181/kafka inferred as PLAINTEXT
java.lang.OutOfMemoryError: Java heap space
Dumping heap to /tmp/kafka_kafka-KAFKA_BROKER-933a1dc0c29ca08ffe475da27d5b13d4_pid113208.hprof ...
Heap dump file created [12122526 bytes in 0.086 secs]
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="/usr/lib64/cmf/service/common/"
#   Executing /bin/sh -c "/usr/lib64/cmf/service/common/"...


+ export 'KAFKA_JVM_PERFORMANCE_OPTS=-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/kafka_kafka-KAFKA_BROKER-933a1dc0c29ca08ffe475da27d5b13d4_pid113208.hprof -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/ -server -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true'
+ KAFKA_JVM_PERFORMANCE_OPTS='-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/kafka_kafka-KAFKA_BROKER-933a1dc0c29ca08ffe475da27d5b13d4_pid113208.hprof -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/ -server -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true'
+ [[ false == \t\r\u\e ]]
+ exec /opt/cloudera/parcels/KAFKA-3.0.0- /var/run/cloudera-scm-agent/process/1177-kafka-KAFKA_BROKER/
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/KAFKA-3.0.0-!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/KAFKA-3.0.0-!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
+ grep -q OnOutOfMemoryError /proc/113208/cmdline
+ RET=0
+ '[' 0 -eq 0 ']'
+ TARGET=113208
++ date
+ echo Thu May 17 10:36:08 CDT 2018
+ kill -9 113208

/var/log/kafka/*.log:

50.1.22:2181, initiating session
2018-05-17 10:36:08,028 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server VMClouderaMasterDev01/, sessionid = 0x1626c7087e729cb, negotiated timeout = 6000
2018-05-17 10:36:08,028 INFO org.I0Itec.zkclient.ZkClient: zookeeper state changed (SyncConnected)
2018-05-17 10:36:08,183 INFO kafka.server.KafkaServer: Cluster ID = cM_4kCm6TZWxttCAXDo4GQ
2018-05-17 10:36:08,185 WARN kafka.server.BrokerMetadataCheckpoint: No file under dir /var/local/kafka/data/
2018-05-17 10:36:08,222 INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper: [ThrottledRequestReaper-Fetch]: Starting
2018-05-17 10:36:08,224 INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper: [ThrottledRequestReaper-Produce]: Starting
2018-05-17 10:36:08,226 INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper: [ThrottledRequestReaper-Request]: Starting
2018-05-17 10:36:08,279 INFO kafka.log.LogManager: Loading logs.
2018-05-17 10:36:08,287 INFO kafka.log.LogManager: Logs loading complete in 8 ms.


Any ideas? Thanks!


Re: adding a Kafka service fails

Cloudera Employee

Some questions regarding the OutOfMemoryError:

- What values are set for broker_max_heap_size and BROKER_HEAP_SIZE in /run/cloudera-scm-agent/process/<GENERATED_ID>-kafka-KAFKA_BROKER/proc.json? Do you have sufficient physical memory on the broker hosts?
- Can you increase the Java Heap Size of Broker (broker_max_heap_size) and try to restart the Kafka service?
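To check the value currently in effect on a broker node, something like the following could work. This is only a sketch: the exact layout of proc.json is an assumption here, and the sample file below just stands in for the real one under /run/cloudera-scm-agent/process/ so the snippet is self-contained.

```shell
# Sketch: pull BROKER_HEAP_SIZE (in MB) out of a proc.json-style file.
# On a real node, point PROC_JSON at the broker's actual proc.json instead
# of the sample written here (its layout is an assumption for illustration).
PROC_JSON="$(mktemp)"
cat > "$PROC_JSON" <<'EOF'
{ "environment": { "BROKER_HEAP_SIZE": "256" } }
EOF

# Use python's json module rather than grep/sed, which are fragile on JSON.
HEAP_MB=$(python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["environment"]["BROKER_HEAP_SIZE"])' "$PROC_JSON")
echo "broker heap: ${HEAP_MB} MB"
rm -f "$PROC_JSON"
```

Given that the heap dump in the stdout log is only ~12 MB, a very small configured heap would be consistent with the broker dying immediately on startup, so raising broker_max_heap_size and redeploying is a reasonable first step.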
