Created on 04-24-2016 08:30 AM
After a rolling or express upgrade of HDP from 2.2.x to 2.4 (and, I'm told, also to 2.3.2 and 2.3.4) you may face some issues with Kafka, as I did. In my case HDP was upgraded from HDP-2.2.6 (Kafka 0.8.1) to HDP-2.4 (Kafka 0.9.0) using Ambari 2.2.1.1. Here are four items to check and fix if needed.
$ ./kafka-topics.sh --zookeeper zk1.example.com:2181 --describe --topic k81c
Topic:k81c  PartitionCount:2  ReplicationFactor:2  Configs:
    Topic: k81c  Partition: 0  Leader: 0  Replicas: 0,1  Isr: 0
    Topic: k81c  Partition: 1  Leader: 0  Replicas: 1,0  Isr: 0
$ ./kafka-topics.sh --zookeeper zk1.example.com:2181 --describe --topic k9a
Topic:k9a  PartitionCount:2  ReplicationFactor:1  Configs:
    Topic: k9a  Partition: 0  Leader: 1002  Replicas: 1002  Isr: 1002
    Topic: k9a  Partition: 1  Leader: 1001  Replicas: 1001  Isr: 1001
Newly created topics work, but old ones don't. Note that the pre-upgrade topic k81c still references the old broker IDs (0 and 1), while the new topic k9a references the IDs auto-generated after the upgrade (1001 and 1002). A solution which worked for me was to change broker.id in the newly created kafka-logs/meta.properties back to the old values. This has to be done on all volumes of all brokers. If your Kafka log volumes are, for example, /data-01, ..., /data-06, you can change them by running:
$ sed -i 's/1001/0/' /data-*/kafka-logs/meta.properties   # run on each broker, mapping its new ID to its old one (here 1001 -> 0)
$ grep broker.id /data-*/kafka-logs/meta.properties       # to confirm they changed
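After the fix, each meta.properties should carry the broker's original ID again. Roughly, the relevant lines look like this (Kafka also writes a timestamp comment at the top; the ID shown assumes this broker was originally broker 0):

version=0
broker.id=0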
It's a good idea to note down the original broker IDs before the upgrade. They can be found in /etc/kafka/conf/server.properties.
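If the upgrade is still ahead of you, something along these lines records the IDs up front (kafka1, kafka2, kafka3 are placeholder broker hostnames):

# run before the upgrade; adjust the hostnames to your brokers
for h in kafka1 kafka2 kafka3; do
  ssh $h "hostname; grep ^broker.id /etc/kafka/conf/server.properties"
done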
hdpver=2.4.0.0-169                                             # set your HDP target version
mkdir -p /usr/hdp/$hdpver/hadoop
mkdir -p /etc/hadoop/$hdpver/0
ln -s /etc/hadoop/$hdpver/0 /usr/hdp/$hdpver/hadoop/conf
hdp-select set hadoop-client $hdpver                           # /usr/hdp/current/hadoop-client -> /usr/hdp/$hdpver/hadoop
cp /etc/hadoop/conf/* /etc/hadoop/$hdpver/0                    # copy conf files from the previous location
ln -sfn /usr/hdp/current/hadoop-client/conf /etc/hadoop/conf   # update /etc/hadoop/conf symlink
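A quick sanity check afterwards might look like this (paths assume the same $hdpver as above):

hdp-select status hadoop-client          # should report the target version, e.g. 2.4.0.0-169
ls -ld /etc/hadoop/conf                  # should now be a symlink to /usr/hdp/current/hadoop-client/conf
ls /usr/hdp/current/hadoop-client/conf   # should list the copied config files (core-site.xml, hdfs-site.xml, ...)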
sed -i '/^export KAFKA_CLIENT_KERBEROS_PARAMS/s/^/# /' /usr/hdp/current/kafka-broker/bin/*.sh
grep "export KAFKA_CLIENT_KERBEROS" /usr/hdp/current/kafka-broker/bin/*.sh   # to confirm
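With that export commented out, the console clients should work again on a non-kerberized cluster. A quick smoke test, assuming the default HDP broker port 6667 and the topic k81c from above (the broker hostname is a placeholder):

cd /usr/hdp/current/kafka-broker/bin
./kafka-console-producer.sh --broker-list broker1.example.com:6667 --topic k81c      # type a test message, then Ctrl-C
./kafka-console-consumer.sh --zookeeper zk1.example.com:2181 --topic k81c --from-beginning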
Created on 05-13-2016 04:14 PM
Hi,
Did you see any issues with Kafka failing to read Snappy-compressed messages?
I'm seeing a lot of this since the upgrade:
ERROR [Replica Manager on Broker 1001]: Error processing append operation on partition [testTopic,1] (kafka.server.ReplicaManager)
kafka.common.KafkaException:
    at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:94)
    at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:64)
    at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
    at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
...
Caused by: java.io.IOException: failed to read chunk
    at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:416)
    at org.xerial.snappy.SnappyInputStream.rawRead(SnappyInputStream.java:182)
    at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:163)
    at java.io.DataInputStream.readFully(DataInputStream.java:195)
    at java.io.DataInputStream.readLong(DataInputStream.java:416)
    at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:72)
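For context, the broker logs this when it receives snappy-compressed messages, e.g. from a producer along these lines (the broker host is a placeholder):

./kafka-console-producer.sh --broker-list broker1.example.com:6667 --topic testTopic --compression-codec snappy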
Created on 05-19-2016 04:04 AM
Hi @Stephen Redmond, sorry I missed your comment. No, I haven't done any tests with compression. I'll let you know if I find something. Also, you can file a question on HCC, copying your comment, to get wider attention. Thanks.