So I’m facing a problem that I don’t really know how to solve. The
core issue is that Atlas doesn’t process the messages in the ATLAS_HOOK
topic fast enough, so we have a backlog that is growing every day.
As we want to use tag-based security in combination with Ranger,
we moved away from dropping and recreating the tables in Hive every night
during our Sqoop imports. Instead we now Sqoop into a temporary table,
then truncate the target table and “insert into” it from the temporary
one. We do this for many thousands of tables every night, and many of
them have over 1000 columns. In combination with the column-level lineage
in Atlas, this creates a huge workload that Atlas needs to process, and
it just doesn’t keep up.
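
For clarity, each table’s nightly load follows roughly this pattern
(database and table names here are made up for illustration):

    -- after the Sqoop import has landed in staging.customer_tmp:
    TRUNCATE TABLE prod.customer;
    -- this INSERT is what triggers a column-level lineage event in Atlas
    INSERT INTO TABLE prod.customer SELECT * FROM staging.customer_tmp;

With 1000+ columns per table, every one of those INSERTs produces a large
hook message, which is where the volume comes from.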
What I’ve been trying so far is to tune HBase and Kafka to make sure
there are no bottlenecks there. For example, the HBase atlas_titan table
is currently evenly distributed over 287 regions, and I can read all
messages in the ATLAS_HOOK topic in roughly 10-15 minutes. So I don’t
think the problem lies in either of those two systems.
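
(For reference, this is roughly how I timed the read-through of the
topic; the broker address is a placeholder:)

    time kafka-console-consumer.sh \
      --bootstrap-server broker1:9092 \
      --topic ATLAS_HOOK \
      --from-beginning \
      --timeout-ms 10000 > /dev/null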
I’d like to get some pointers on how to speed up Atlas’s processing of
the data in the ATLAS_HOOK topic. For example, it looks like the
NotificationHookConsumer is only running with one thread. Is it possible
to run it in a multi-threaded setup so the messages can be processed in
parallel? Anything else you can think of that could help me here?
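
To make it concrete, what I’m hoping exists is something like this in
atlas-application.properties (the property name below is my guess, I
haven’t been able to confirm it, so please correct me):

    # number of NotificationHookConsumer threads -- property name is a guess
    atlas.notification.hook.numthreads=4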