
Hiveserver2 ATLAS_HOOK Error

Expert Contributor

I am running HDP 2.6.3. I have a Spark application that runs every five minutes and appends to Hive tables.

In hiveserver2.log I get the following error:

org.apache.atlas.notification.NotificationException: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for ATLAS_HOOK-0 due to 30065 ms has passed since last append
at org.apache.atlas.kafka.KafkaNotification.sendInternalToProducer(KafkaNotification.java:239)
at org.apache.atlas.kafka.KafkaNotification.sendInternal(KafkaNotification.java:212)
at org.apache.atlas.notification.AbstractNotification.send(AbstractNotification.java:114)
at org.apache.atlas.hook.AtlasHook.notifyEntitiesInternal(AtlasHook.java:143)
at org.apache.atlas.hook.AtlasHook.notifyEntities(AtlasHook.java:128)
at org.apache.atlas.hook.AtlasHook.notifyEntities(AtlasHook.java:181)
at org.apache.atlas.hive.hook.HiveHook.access$300(HiveHook.java:85)
at org.apache.atlas.hive.hook.HiveHook$3.run(HiveHook.java:225)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.atlas.hive.hook.HiveHook.notifyAsPrivilegedAction(HiveHook.java:234)
at org.apache.atlas.hive.hook.HiveHook$2.run(HiveHook.java:207)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for ATLAS_HOOK-0 due to 30065 ms has passed since last append
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:65)
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:52)
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:25)
at org.apache.atlas.kafka.KafkaNotification.sendInternalToProducer(KafkaNotification.java:230)
... 17 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for ATLAS_HOOK-0 due to 30065 ms has passed since last append

I am fed up with these AtlasHook errors.

1 REPLY

Expert Contributor

There are a few possible causes for this, but with only one record in the queue and a fairly short (which I read as low-impact) job, my first bet would be that the permissions are not set up in Ranger for the Hive user to be able to write to the Atlas Kafka topic (ATLAS_HOOK). See: https://community.hortonworks.com/questions/60564/atlas-api-times-out-on-entity-creation.html
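If you want to confirm that theory from the HiveServer2 host, you can try producing a test record to ATLAS_HOOK as the hive user with the stock console producer. This is only a sketch: the broker host and port are placeholders, the paths assume an HDP layout, and on a Kerberized cluster you would also need to kinit as hive and pass the appropriate client security config.

```shell
# Try to write one test record to ATLAS_HOOK as the hive service user.
# If Ranger denies the hive user WRITE on the topic, this should fail
# in the same way as the hook does in hiveserver2.log.
# (Broker address below is a placeholder -- substitute your own broker;
# 6667 is the usual HDP plaintext port.)
sudo -u hive /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
  --broker-list kafka-broker.example.com:6667 \
  --topic ATLAS_HOOK <<< 'permission test'
```

If the test record goes through cleanly, I would next look at broker health and the producer's request.timeout.ms (the 30-second expiry in the trace matches the default), rather than at permissions.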