Cannot add service Kafka: JVM BROKER_HEAP_SIZE zero
Labels:
- Apache Kafka
- Apache Zookeeper
Created on 07-27-2015 02:34 PM - edited 09-16-2022 02:35 AM
+ echo 'Pwd: /var/run/cloudera-scm-agent/process/419-kafka-KAFKA_BROKER'
+ echo 'CONF_DIR: /var/run/cloudera-scm-agent/process/419-kafka-KAFKA_BROKER'
+ echo 'KAFKA_HOME: /opt/cloudera/parcels/KAFKA-0.8.2.0-1.kafka1.3.0.p0.29/lib/kafka'
+ echo 'Zoookeper Quorum: bichdp1.xxx.local:2181,bichdp3.xxx.local:2181,bichdp4.xxx.local:2181'
+ echo 'Chroot: '
+ echo 'JMX_PORT: 9393'
+ echo 'ENABLE_MONITORING: true'
+ echo 'METRIC_REPORTERS: nl.techop.kafka.KafkaHttpMetricsReporter'
+ echo 'BROKER_HEAP_SIZE: 0'
+ QUORUM=bichdp1.xxx.local:2181,bichdp3.xxx.local:2181,bichdp4.xxx.local:2181
+ [[ -n '' ]]
+ echo 'Final Zookeeper Quorum is bichdp1.xxx.local:2181,bichdp3.xxx.local:2181,bichdp4.xxx.local:2181'
+ grep zookeeper.connect= /var/run/cloudera-scm-agent/process/419-kafka-KAFKA_BROKER/kafka.properties
+ [[ true == \t\r\u\e ]]
+ echo kafka.metrics.reporters=nl.techop.kafka.KafkaHttpMetricsReporter
+ export KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:/var/run/cloudera-scm-agent/process/419-kafka-KAFKA_BROKER/log4j.properties
+ KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:/var/run/cloudera-scm-agent/process/419-kafka-KAFKA_BROKER/log4j.properties
++ pwd
+ export LOG_DIR=/var/run/cloudera-scm-agent/process/419-kafka-KAFKA_BROKER
+ LOG_DIR=/var/run/cloudera-scm-agent/process/419-kafka-KAFKA_BROKER
+ export KAFKA_HEAP_OPTS=-Xmx0M
+ KAFKA_HEAP_OPTS=-Xmx0M
+ exec /opt/cloudera/parcels/KAFKA-0.8.2.0-1.kafka1.3.0.p0.29/lib/kafka/bin/kafka-server-start.sh /var/run/cloudera-scm-agent/process/419-kafka-KAFKA_BROKER/kafka.properties
Invalid maximum heap size: -Xmx0M
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Hi guys, after downloading, distributing and activating the Kafka parcel, I tried to add Kafka as a new service to the cluster, but the installation process fails with the error above.
The variable BROKER_HEAP_SIZE is not set, and I don't know why, or how to work around it.
Thanks,
Tomas
Created 07-27-2015 02:55 PM
Within the Kafka service configuration, if you search for "Java Heap Size of Broker in Megabytes", is there anything set for this value? If not, try setting it to at least 256 MB and attempt a restart.
-PD
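The failing line in the stderr trace is `KAFKA_HEAP_OPTS=-Xmx0M`: a configured heap of 0 MB is passed straight to the JVM, which rejects it. A minimal sketch of the defensive fallback (this is an illustration, not the actual Cloudera start script) would be:

```shell
# BROKER_HEAP_SIZE as Cloudera Manager exported it in the trace above;
# 0 reproduces the "Invalid maximum heap size: -Xmx0M" failure
BROKER_HEAP_SIZE=0

# Fall back to a safe minimum (256 MB, as suggested) when the configured
# value is unset or zero, instead of handing -Xmx0M to the JVM
if [[ -z "${BROKER_HEAP_SIZE}" || "${BROKER_HEAP_SIZE}" -eq 0 ]]; then
  BROKER_HEAP_SIZE=256
fi

export KAFKA_HEAP_OPTS="-Xmx${BROKER_HEAP_SIZE}M"
echo "${KAFKA_HEAP_OPTS}"
```

In practice the fix is simply to set "Java Heap Size of Broker in Megabytes" to a non-zero value in Cloudera Manager; the sketch only shows why 0 is fatal.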
Created 07-28-2015 03:26 AM
Running the latest Cloudera Manager and CDH 5.4.4.
The workaround you suggested worked: after adding the service to the cluster (which ended in an error), I went back to the configuration, found the Java heap size option, and changed it from 0 to 256.
After the restart the Kafka brokers are green, but the Kafka MirrorMaker (provisioned on one host) still fails to start.
The error message is:
Supervisor returned FATAL. Please check the role log file, stderr, or stdout.
STDERR (last few lines; no ERROR mentioned there):
+ find /var/run/cloudera-scm-agent/process/427-kafka-KAFKA_MIRROR_MAKER -name topology.py -exec perl -pi -e 's#{{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/427-kafka-KAFKA_MIRROR_MAKER#g' '{}' ';'
+ export COMMON_SCRIPT=/usr/lib64/cmf/service/common/cloudera-config.sh
+ COMMON_SCRIPT=/usr/lib64/cmf/service/common/cloudera-config.sh
+ chmod u+x /var/run/cloudera-scm-agent/process/427-kafka-KAFKA_MIRROR_MAKER/scripts/mirrormaker_control.sh
+ exec /var/run/cloudera-scm-agent/process/427-kafka-KAFKA_MIRROR_MAKER/scripts/mirrormaker_control.sh start
+ echo ''
++ date
+ echo 'Date: Tue Jul 28 12:14:55 CEST 2015'
++ hostname -f
+ echo 'Host: bichdp6.xxx.local'
++ pwd
+ echo 'Pwd: /var/run/cloudera-scm-agent/process/427-kafka-KAFKA_MIRROR_MAKER'
+ echo 'CONF_DIR: /var/run/cloudera-scm-agent/process/427-kafka-KAFKA_MIRROR_MAKER'
+ echo 'KAFKA_HOME: /opt/cloudera/parcels/KAFKA-0.8.2.0-1.kafka1.3.0.p0.29/lib/kafka'
+ echo 'Zoookeper Quorum: bichdp1.xxx.local:2181,bichdp3.xxx.local:2181,bichdp4.xxx.local:2181'
+ echo 'Chroot: '
+ echo 'no.data.loss: true'
+ echo 'whitelist: '
+ echo 'blacklist: '
+ echo 'num.producers: 1'
+ echo 'num.streams: 1'
+ echo 'queue.size: 10000'
+ echo 'JMX_PORT: 9394'
+ QUORUM=bichdp1.xxx.local:2181,bichdp3.xxx.local:2181,bichdp4.xxx.local:2181
+ [[ -n '' ]]
+ echo 'Final Zookeeper Quorum is bichdp1.xxx.local:2181,bichdp3.xxx.local:2181,bichdp4.xxx.local:2181'
+ grep zookeeper.connect= /var/run/cloudera-scm-agent/process/427-kafka-KAFKA_MIRROR_MAKER/mirror_maker_consumers.properties
+ [[ true == \t\r\u\e ]]
+ DATA_LOSS_PARAM=--no.data.loss
+ echo 'data loss param: --no.data.loss'
+ [[ -n '' ]]
+ [[ -n '' ]]
+ [[ -n 1 ]]
+ PRODUCER_PARAM='--num.producers 1'
+ [[ -n 1 ]]
+ STREAM_PARAM='--num.streams 1'
+ [[ -n 10000 ]]
+ QUEUE_SIZE_PARAM='--queue.size 10000'
+ [[ -n 100000000 ]]
+ QUEUE_BYTE_SIZE_PARAM='--queue.byte.size 100000000'
+ export KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:/var/run/cloudera-scm-agent/process/427-kafka-KAFKA_MIRROR_MAKER/log4j.properties
+ KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:/var/run/cloudera-scm-agent/process/427-kafka-KAFKA_MIRROR_MAKER/log4j.properties
++ pwd
+ export LOG_DIR=/var/run/cloudera-scm-agent/process/427-kafka-KAFKA_MIRROR_MAKER
+ LOG_DIR=/var/run/cloudera-scm-agent/process/427-kafka-KAFKA_MIRROR_MAKER
+ exec /opt/cloudera/parcels/KAFKA-0.8.2.0-1.kafka1.3.0.p0.29/lib/kafka/bin/kafka-mirror-maker.sh --new.producer --no.data.loss --num.producers 1 --num.streams 1 --queue.size 10000 --queue.byte.size 100000000 --consumer.config /var/run/cloudera-scm-agent/process/427-kafka-KAFKA_MIRROR_MAKER/mirror_maker_consumers.properties --producer.config /var/run/cloudera-scm-agent/process/427-kafka-KAFKA_MIRROR_MAKER/mirror_maker_producers.properties
STDOUT:
Date: Tue Jul 28 12:14:44 CEST 2015
Host: bichdp6.xxx.local
Pwd: /var/run/cloudera-scm-agent/process/427-kafka-KAFKA_MIRROR_MAKER
CONF_DIR: /var/run/cloudera-scm-agent/process/427-kafka-KAFKA_MIRROR_MAKER
KAFKA_HOME: /opt/cloudera/parcels/KAFKA-0.8.2.0-1.kafka1.3.0.p0.29/lib/kafka
Zoookeper Quorum: bichdp1.xxx.local:2181,bichdp3.xxx.local:2181,bichdp4.xxx.local:2181
Chroot:
no.data.loss: true
whitelist:
blacklist:
num.producers: 1
num.streams: 1
queue.size: 10000
JMX_PORT: 9394
Final Zookeeper Quorum is bichdp1.xxx.local:2181,bichdp3.xxx.local:2181,bichdp4.xxx.local:2181
data loss param: --no.data.loss
Exactly one of whitelist or blacklist is required.
Tue Jul 28 12:14:46 CEST 2015
JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
Using /var/run/cloudera-scm-agent/process/427-kafka-KAFKA_MIRROR_MAKER as conf dir
Using scripts/mirrormaker_control.sh as process script
ROLE LOG:
Jul 28, 12:13:31.061 PM | INFO | kafka.tools.MirrorMaker$ | Starting mirror maker |
Jul 28, 12:13:35.759 PM | INFO | kafka.tools.MirrorMaker$ | Starting mirror maker |
Jul 28, 12:14:45.452 PM | INFO | kafka.tools.MirrorMaker$ | Starting mirror maker |
Jul 28, 12:14:47.973 PM | INFO | kafka.tools.MirrorMaker$ | Starting mirror maker |
Jul 28, 12:14:51.621 PM | INFO | kafka.tools.MirrorMaker$ | Starting mirror maker |
Jul 28, 12:14:56.344 PM | INFO | kafka.tools.MirrorMaker$ | Starting mirror maker |
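The actual failure is buried in the STDOUT above: "Exactly one of whitelist or blacklist is required." The trace shows both `whitelist:` and `blacklist:` empty, so MirrorMaker exits immediately after "Starting mirror maker". A sketch of the missing validation (hypothetical, not the shipped `mirrormaker_control.sh`; the whitelist value is a Java regex over topic names, and `.*` is only an example meaning "mirror all topics"):

```shell
# Values as they appear in the trace above: both topic filters empty
WHITELIST=''
BLACKLIST=''

# MirrorMaker requires exactly one of --whitelist / --blacklist.
# Here we default to mirroring all topics; in Cloudera Manager the
# equivalent is filling in the MirrorMaker topic whitelist field.
if [[ -z "${WHITELIST}" && -z "${BLACKLIST}" ]]; then
  WHITELIST='.*'   # assumption: mirroring every topic is acceptable
fi

TOPIC_PARAM="--whitelist ${WHITELIST}"
echo "${TOPIC_PARAM}"
```

With a whitelist (or blacklist, but not both) supplied, the `kafka-mirror-maker.sh` invocation from the stderr trace gains the required topic filter and should get past this check.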
Created 07-28-2015 07:08 AM
Created 06-13-2016 10:20 PM
But how would he fix it if he did need it? I am experiencing the same thing.
