
Apache Atlas error with Zookeeper Kafka configuration



Apache Atlas, ZooKeeper, and Kafka are all configured on the same node.

I am seeing the errors below in the Atlas application log; a first heap-related check is sketched after the list.

  • caught end of stream exception
  • client has closed socket
  • java.lang.OutOfMemoryError: Java heap space
  • [kafka-network-thread-1-ListenerName(PLAINTEXT)-PLAINTEXT-0:] ~ Processor got uncaught exception.
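
The ZooKeeper port 9026 that appears throughout the log matches the default port of the ZooKeeper instance Atlas starts when it runs Kafka embedded (atlas.notification.embedded=true), so the broker that is running out of heap is most likely living inside the Atlas JVM. A minimal first check, assuming a default /etc/atlas/conf layout (paths and variable names vary by distribution):

    # Confirm whether Atlas runs Kafka/ZooKeeper embedded
    grep 'atlas.notification.embedded' /etc/atlas/conf/atlas-application.properties

    # If true, the embedded broker shares the Atlas server heap; raise it in
    # conf/atlas-env.sh (some Ambari-managed installs use ATLAS_SERVER_HEAP instead)
    export ATLAS_SERVER_OPTS="-server -Xms2g -Xmx4g"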


@Ayub Khan @Jay Kumar SenSharma


2019-05-16 00:45:22,769 WARN - [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ caught end of stream exception (NIOServerCnxn:357)

EndOfStreamException: Unable to read additional data from client sessionid 0x16abe0b44490008, likely client has closed socket

at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)

at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)

at java.lang.Thread.run(Thread.java:748)
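
The EndOfStreamException itself is usually only a symptom: ZooKeeper logs it whenever a client connection drops without a clean close, which is exactly what happens when the Kafka side of the JVM dies with an OutOfMemoryError. To rule out ZooKeeper as the root cause, its standard four-letter commands can be probed on the port taken from the log (newer ZooKeeper releases require these to be whitelisted via 4lw.commands.whitelist first):

    # 'ruok' answers 'imok' if the server is up; 'stat' lists mode,
    # client connections and node counts
    echo ruok | nc localhost 9026
    echo stat | nc localhost 9026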

2019-05-16 00:45:24,301 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-13 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,302 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-46 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,302 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-9 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,302 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-42 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,302 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-21 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,302 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-17 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,303 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-30 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,303 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-26 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,303 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-5 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,303 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-38 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,303 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-1 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,303 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-34 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,303 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-16 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,303 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-45 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,303 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-12 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,303 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-41 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,303 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-24 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,303 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-20 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,303 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-49 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,303 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-0 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,303 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-29 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,303 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-25 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,303 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-8 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)

2019-05-16 00:45:24,303 WARN - [kafka-request-handler-3:] ~ [Broker id=1] Ignoring LeaderAndIsr request from controller 1 with correlation id 1 epoch 51 for partition __consumer_offsets-37 since its associated leader epoch 8 is not higher than the current leader epoch 8 (Logging$class:87)
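
This block of LeaderAndIsr warnings is normally benign: after a restart the controller replays leadership state for every __consumer_offsets partition, and the broker ignores any request whose leader epoch is not newer than the epoch it already holds. To confirm the partitions are healthy anyway, the stock topic tool can describe them; a sketch assuming the Kafka CLI scripts are on the PATH and using the ZooKeeper address from the log:

    # Leader and Isr columns should be populated for all partitions
    kafka-topics.sh --describe --zookeeper localhost:9026 --topic __consumer_offsets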


2019-05-16 00:45:47,169 WARN - [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ caught end of stream exception (NIOServerCnxn:357)

EndOfStreamException: Unable to read additional data from client sessionid 0x16abe0b44490009, likely client has closed socket

at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)

at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)

at java.lang.Thread.run(Thread.java:748)

2019-05-16 00:45:48,829 ERROR - [kafka-network-thread-1-ListenerName(PLAINTEXT)-PLAINTEXT-0:] ~ Processor got uncaught exception. (Logging$class:107)

java.lang.OutOfMemoryError: Java heap space

at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)

at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)

at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)

at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:140)

at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)

at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:231)

at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:192)

at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:528)

at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:469)

at org.apache.kafka.common.network.Selector.poll(Selector.java:398)

at kafka.network.Processor.poll(SocketServer.scala:535)

at kafka.network.Processor.run(SocketServer.scala:452)

at java.lang.Thread.run(Thread.java:748)
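
The stack trace shows where the heap gives out: NetworkReceive reads a 4-byte size prefix from the socket and then tries to allocate a buffer of that size. Besides a plainly undersized heap, a classic trigger on a shared node is a client speaking the wrong protocol to the PLAINTEXT listener (TLS or HTTP bytes decode to an enormous "size"); the broker's socket.request.max.bytes setting caps that allocation. A hedged way to look for such a client, assuming the embedded broker listens on 9027 (the Atlas default; confirm via atlas.kafka.bootstrap.servers):

    # See what Atlas passes through to the embedded broker
    grep 'atlas.kafka' /etc/atlas/conf/atlas-application.properties

    # List processes holding connections to the broker port; anything other
    # than Atlas itself and its hooks is suspect
    ss -tnp | grep ':9027'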

2019-05-16 00:45:55,760 WARN - [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ caught end of stream exception (NIOServerCnxn:357)

EndOfStreamException: Unable to read additional data from client sessionid 0x16abe0b4449000a, likely client has closed socket

at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)

at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)

at java.lang.Thread.run(Thread.java:748)

2019-05-16 00:45:56,147 ERROR - [kafka-network-thread-1-ListenerName(PLAINTEXT)-PLAINTEXT-0:] ~ Processor got uncaught exception. (Logging$class:107)

java.lang.OutOfMemoryError: Java heap space

at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)

at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)

at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)

at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:140)

at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)

at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:231)

at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:192)

at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:528)

at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:469)

at org.apache.kafka.common.network.Selector.poll(Selector.java:398)

at kafka.network.Processor.poll(SocketServer.scala:535)

at kafka.network.Processor.run(SocketServer.scala:452)

at java.lang.Thread.run(Thread.java:748)

2019-05-16 00:46:03,368 ERROR - [kafka-network-thread-1-ListenerName(PLAINTEXT)-PLAINTEXT-2:] ~ Processor got uncaught exception. (Logging$class:107)

java.lang.OutOfMemoryError: Java heap space
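
If no foreign client turns up, the heap is most likely simply exhausted by the combined Atlas and embedded-Kafka workload. To see what is actually filling it, standard JVM options can be added to whichever process hosts the broker (ATLAS_SERVER_OPTS in conf/atlas-env.sh for embedded mode) and the resulting dump inspected with stock JDK tools:

    # Write a heap dump the next time an OutOfMemoryError is thrown
    -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/atlas/

    # Or take a live class histogram without waiting for the next OOM
    jmap -histo <atlas-pid> | head -n 30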


