
CDH 5.7 kafka parcel add service throws java.lang.OutOfMemoryError: Java heap space


Version info:

Kafka: KAFKA-0.8.2.0-1.kafka1.4.0.p0.56

CDH: QuickStart 5.7

OS: CentOS 6.7 (VM Player guest)

 

On QuickStart 5.7, while adding Kafka as a service, the broker start throws java.lang.OutOfMemoryError: Java heap space.

 

I noticed that BROKER_HEAP_SIZE is set to 50M, though that property could be seen in the UI (on the previous screen).

This in turn means the $KAFKA_HEAP_OPTS default in kafka-server-start.sh never kicks in, so I had to force the export myself as a workaround (rough sketch below).
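
The forced export was something along these lines; the heap value here is illustrative, the point is simply to override the tiny default before the broker start script runs:

# illustrative workaround: set the heap explicitly so the broker does not come up with ~50M
export KAFKA_HEAP_OPTS="-Xmx256M -Xms256M"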

 

Q1: Where does the 50M value come from during the initial configuration?

Q2: Is there a better workaround than the one I used to get it going?

 

From kafka-server-start.sh:

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi
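
Since the broker start scripts apparently export KAFKA_HEAP_OPTS already (my assumption is that it is derived from the BROKER_HEAP_SIZE: 50 shown in the log below), this 1G fallback is skipped; roughly:

# illustrative only: what effectively seems to happen when the broker role starts
export KAFKA_HEAP_OPTS="-Xmx50M"                      # assumed to be derived from BROKER_HEAP_SIZE: 50
bin/kafka-server-start.sh config/server.properties    # the 1G fallback above is skipped, so the broker runs with ~50M and hits the OOM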

 

Sun May 1 14:54:28 PDT 2016
JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
Using /var/run/cloudera-scm-agent/process/22-kafka-KAFKA_BROKER as conf dir
Using scripts/control.sh as process script

Date: Sun May 1 14:54:28 PDT 2016
Host: quickstart.cloudera
Pwd: /var/run/cloudera-scm-agent/process/22-kafka-KAFKA_BROKER
CONF_DIR: /var/run/cloudera-scm-agent/process/22-kafka-KAFKA_BROKER
KAFKA_HOME: /opt/cloudera/parcels/KAFKA-0.8.2.0-1.kafka1.4.0.p0.56/lib/kafka
Zoookeper Quorum: quickstart.cloudera:2181
Chroot: /
PORT: 9092
JMX_PORT: 9393
SSL_PORT: 9093
ENABLE_MONITORING: true
METRIC_REPORTERS: nl.techop.kafka.KafkaHttpMetricsReporter
BROKER_HEAP_SIZE: 50
KERBEROS_AUTH_ENABLED: false
KAFKA_PRINCIPAL:
SECURITY_INTER_BROKER_PROTOCOL: PLAINTEXT
AUTHENTICATE_ZOOKEEPER_CONNECTION: true
ZK_PRINCIPAL_NAME: zookeeper
Final Zookeeper Quorum is quickstart.cloudera:2181/
LISTENERS=listeners=PLAINTEXT://quickstart.cloudera:9092,
Sun May 1 14:54:31 PDT 2016
JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
Using /var/run/cloudera-scm-agent/process/22-kafka-KAFKA_BROKER as conf dir
Using scripts/control.sh as process script


Re: CDH 5.7 kafka parcel add service throws java.lang.OutOfMemoryError: Java heap space

Have you tried the option listed in this link? http://stackoverflow.com/questions/36400366/kafka-configuration-on-existing-cdh-5-5-2-cluster

 

I had the same issue. I just restarted the cluster and the Kafka start seems to have gone ahead.
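
If it helps, a quick sanity check after the restart is to look at what heap the broker JVM actually came up with (nothing Cloudera-specific here):

# print the -Xmx flag of the running Kafka broker process
ps -ef | grep '[k]afka.Kafka' | grep -o -e '-Xmx[^ ]*'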

 

HTH
