Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2093 | 06-15-2020 05:23 AM |
| | 17446 | 01-30-2020 08:04 PM |
| | 2255 | 07-07-2019 09:06 PM |
| | 8725 | 01-27-2018 10:17 PM |
| | 4913 | 12-31-2017 10:12 PM |
05-14-2018 10:35 AM
Regarding https://issues.apache.org/jira/browse/KAFKA-4502 - it is not clear what the solution for this bug is (note: our Kafka runs on Linux machines).
05-14-2018 10:33 AM
Second, with which approach can we delete the logs?
05-14-2018 10:33 AM
@Harald Under /var/kafka/kafka-logs we have many log files. How can we know which of them we need to delete?
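For reference, a rough way to see which partition directories under /var/kafka/kafka-logs hold the most data (and therefore which ones to look at first) is a plain disk-usage listing. This only shows sizes - it does not by itself say which segments are safe to delete - and <topic>-<partition> below is a placeholder:

# partition directories sorted by size, largest last
du -sh /var/kafka/kafka-logs/* | sort -h

# oldest segment files inside one partition directory
ls -lt /var/kafka/kafka-logs/<topic>-<partition> | tail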
05-14-2018 07:26 AM
Hi all, in the Kafka server.log we see many messages like "FATAL Fatal error during KafkaServer startup. Prepare to shutdown" and "kafka.common.InvalidOffsetException: Attempt to append an offset (232884366) to position 203880 no larger than the last offset appended". Can these messages explain why we cannot start the Kafka broker on the Kafka machine? And second, what is the best solution for this issue?

FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InvalidOffsetException: Attempt to append an offset (232884366) to position 203880 no larger than the last offset appended (232884366) to /var/kafka/kafka-logs/wctpi.avro.pri.processed-59/00000000000124356738.index.
at kafka.log.OffsetIndex$anonfun$append$1.apply$mcV$sp(OffsetIndex.scala:132)
at kafka.log.OffsetIndex$anonfun$append$1.apply(OffsetIndex.scala:122)
at kafka.log.OffsetIndex$anonfun$append$1.apply(OffsetIndex.scala:122)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:233)
at kafka.log.OffsetIndex.append(OffsetIndex.scala:122)
at kafka.log.LogSegment.recover(LogSegment.scala:225)
at kafka.log.Log$anonfun$loadSegments$4.apply(Log.scala:218)
at kafka.log.Log$anonfun$loadSegments$4.apply(Log.scala:179)
at scala.collection.TraversableLike$WithFilter$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at kafka.log.Log.loadSegments(Log.scala:179)
at kafka.log.Log.<init>(Log.scala:108)
at kafka.log.LogManager$anonfun$loadLogs$2$anonfun$3$anonfun$apply$10$anonfun$apply$1.apply$mcV$sp(LogManager.scala:151)
at kafka.utils.CoreUtils$anon$1.run(CoreUtils.scala:58)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
kafka.common.InvalidOffsetException: Attempt to append an offset (232884366) to position 203880 no larger than the last offset appended (232884366) to /var/kafka/kafka-logs/wctpi.avro.pri.processed-59/00000000000124356738.index.
at kafka.log.OffsetIndex$anonfun$append$1.apply$mcV$sp(OffsetIndex.scala:132)
at kafka.log.OffsetIndex$anonfun$append$1.apply(OffsetIndex.scala:122)
at kafka.log.OffsetIndex$anonfun$append$1.apply(OffsetIndex.scala:122)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:233)
at kafka.log.OffsetIndex.append(OffsetIndex.scala:122)
at kafka.log.LogSegment.recover(LogSegment.scala:225)
at kafka.log.Log$anonfun$loadSegments$4.apply(Log.scala:218)
at kafka.log.Log$anonfun$loadSegments$4.apply(Log.scala:179)
at scala.collection.TraversableLike$WithFilter$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at kafka.log.Log.loadSegments(Log.scala:179)
at kafka.log.Log.<init>(Log.scala:108)
at kafka.log.LogManager$anonfun$loadLogs$2$anonfun$3$anonfun$apply$10$anonfun$apply$1.apply$mcV$sp(LogManager.scala:151)
at kafka.utils.CoreUtils$anon$1.run(CoreUtils.scala:58)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
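For what it's worth, a remediation often suggested for a corrupted offset index like this (an assumption here, not a confirmed official fix, so back up the data first) is to stop the broker, remove the index files of the affected partition, and let Kafka rebuild them while loading the segments on the next startup. Roughly:

# stop the broker first (e.g. from Ambari), then back up the affected partition directory
# (the backup destination below is just an example path)
cp -a /var/kafka/kafka-logs/wctpi.avro.pri.processed-59 /tmp/kafka-backup-processed-59

# remove the offset index files; Kafka rebuilds them during segment recovery on startup
rm -f /var/kafka/kafka-logs/wctpi.avro.pri.processed-59/*.index
rm -f /var/kafka/kafka-logs/wctpi.avro.pri.processed-59/*.timeindex

# start the broker again and watch server.log for the segment recovery messages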
05-13-2018 03:20 PM
@Geoffrey Thank you, I will verify this in the next few days and will update. It is very strange because this happens only on one Kafka broker, while the other two Kafka machines are fine.
05-13-2018 03:11 PM
@Geoffrey Just a note - the machine itself is not rebooting. It is the Kafka broker that keeps restarting - meaning that in the Ambari GUI you can see the Kafka broker going up/down and finally staying down, while the Kafka Linux machine stays up without a reboot.
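For reference, the two can be compared side by side: the host's uptime versus the start time of the broker's Java process. A recent process start time on a host with a long uptime matches the pattern described above (broker restarting, machine not rebooted):

# host uptime - shows whether the machine itself was rebooted
uptime

# start time of the Kafka broker process - a recent time here means the broker was restarted
ps -eo pid,lstart,cmd | grep kafka.Kafka | grep -v grep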
05-13-2018 11:12 AM
Hi all, we have a Hadoop cluster version 2.6.4 and separate Kafka machines (3 Kafka machines). One of the Kafka machines (kafka03) keeps restarting and sometimes ends up stopped. From netstat we can see that Kafka is not listening on port 6667. We checked the /var/log/kafka.err file, but this file is empty even after we cleared it with cp /dev/null /var/log/kafka.err and started the Kafka broker to see whether any information would be written to it. Any other ideas on how to understand why the Kafka machine is not listening on port 6667? 76.12.76.73 is the IP of the kafka03 server and 76.12.76.74 is the IP of the kafka01 server.

netstat -tnlpa | grep 6667
tcp6 0 0 76.12.76.73:43612 76.12.76.74:6667 ESTABLISHED 94962/java
tcp6 0 0 76.12.76.73:43616 76.12.76.74:6667 ESTABLISHED 94962/java
tcp6 0 0 76.12.76.73:43613 76.12.76.74:6667 ESTABLISHED 94962/java
netstat -tnlpa | grep 6667
tcp6 0 0 76.12.76.73:43616 76.12.76.74:6667 ESTABLISHED 94962/java
tcp6 0 0 76.12.76.73:43613 76.12.76.74:6667 ESTABLISHED 94962/java
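One note on the output above: every line is in the ESTABLISHED state, i.e. these are outgoing connections from kafka03 to port 6667 on kafka01, not a listening socket on kafka03 itself. To check specifically whether the local broker is listening on 6667, filtering on listening sockets may be clearer, for example:

# only listening TCP sockets - a healthy broker should show one on port 6667
netstat -tnlp | grep 6667

# the same check with ss
ss -ltnp | grep 6667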
05-11-2018 01:56 PM
@Jordan Another point - I forgot to mention that we also restarted the Kafka machine, but this did not help to recover the Kafka broker, so the remaining option is to increase the value from 1G to 2G according to your solution.
05-11-2018 12:29 PM
@Jordan Another question please, regarding the file /usr/hdp/2.6.4.0-91/kafka/bin/kafka-server-start.sh: I see that the default value is KAFKA_HEAP_OPTS="-Xmx1G -Xms1G". So my question is, given that both values are set to 1G, is it plausible that 1G isn't enough?
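For reference, one way to check whether the 1G heap is actually under pressure (rather than guessing) is to watch the broker's GC behaviour with the JDK's jstat tool, assuming the JDK tools are installed on the broker host; <kafka_pid> below is a placeholder:

# find the broker's PID (the java process running kafka.Kafka)
ps -eo pid,cmd | grep kafka.Kafka | grep -v grep

# sample heap/GC utilisation every 5 seconds for that PID;
# an old-generation (O) column stuck near 100% with frequent full GCs
# would suggest the 1G heap really is too small
jstat -gcutil <kafka_pid> 5000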
05-11-2018 05:50 AM
I am not sure about the following, but do you mean to update the file /usr/hdp/2.6.4.0-91/kafka/bin/kafka-server-start.sh and change the parameter to export KAFKA_HEAP_OPTS="-Xms2G -Xmx2G"? (according to the article https://community.hortonworks.com/content/supportkb/151841/error-javalangoutofmemoryerror-direct-buffer-memor.html)

[root@kafka01 conf]# more /usr/hdp/2.6.4.0-91/kafka/bin/kafka-server-start.sh
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
if [ $# -lt 1 ];
then
echo "USAGE: $0 [-daemon] server.properties [--override property=value]*"
exit 1
fi
base_dir=$(dirname $0)
if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi
EXTRA_ARGS=${EXTRA_ARGS-'-name kafkaServer -loggc'}
COMMAND=$1
case $COMMAND in
-daemon)
EXTRA_ARGS="-daemon "$EXTRA_ARGS
shift
;;
*)
;;
esac
echo $KAFKA_HEAP_OPTS>>/tmp/uri
exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"
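One observation on the script above: the 1G default is only applied when KAFKA_HEAP_OPTS is unset (the "x$KAFKA_HEAP_OPTS" check), so instead of editing the script in place the variable can be exported before the broker starts. On an Ambari-managed cluster the usual place for this is the kafka-env template; as a quick manual test it would look roughly like this (the 2G value is just the one discussed above, and the server.properties path may differ on your installation):

# export the heap setting, then start the broker with it
export KAFKA_HEAP_OPTS="-Xmx2G -Xms2G"
/usr/hdp/2.6.4.0-91/kafka/bin/kafka-server-start.sh -daemon /usr/hdp/2.6.4.0-91/kafka/config/server.properties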