Support Questions

Unable to sqoop from RDBMS: getting Kafka and Atlas errors

sqoop.png

I am trying to sqoop from an RDBMS to Hive, but I am getting a strange error. Please find the attached image for the error.


@Anurag Mishra

The problem could be due to Kafka topic permissions. You may want to check the permissions for the Kafka topic ATLAS_HOOK. If you are using Ranger, please follow the instructions below.

Create the following Kafka policies:

  • topic=ATLAS_HOOK

    permission=publish, create; group=public

    permission=consume, create; user=atlas (for non-kerberized environments, set group=public)

  • topic=ATLAS_ENTITIES

    permission=publish, create; user=atlas (for non-kerberized environments, set group=public)

    permission=consume, create; group=public

Additionally, if Ranger is not in use, you may want to run the commands below as the Kafka user to grant the permissions.

/usr/hdp/current/kafka-broker/bin/kafka-acls.sh \
  --add --group '*' --allow-principal 'User:*' --operation All \
  --authorizer-properties "zookeeper.connect=<ZOOKEEPER_HOST>:2181"

/usr/hdp/current/kafka-broker/bin/kafka-acls.sh \
  --add --topic ATLAS_ENTITIES --allow-principal 'User:*' --operation All \
  --authorizer-properties "zookeeper.connect=<ZOOKEEPER_HOST>:2181"

/usr/hdp/current/kafka-broker/bin/kafka-acls.sh \
  --add --topic ATLAS_HOOK --allow-principal 'User:*' --operation All \
  --authorizer-properties "zookeeper.connect=<ZOOKEEPER_HOST>:2181"
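After adding the ACLs, it may help to list what is actually set on the two Atlas topics to confirm the grants took effect. This is a sketch using the same kafka-acls.sh tool as above; replace <ZOOKEEPER_HOST> with your ZooKeeper host, and note the topic names assume the default Atlas hook topics:

```shell
# List ACLs on the Atlas hook topic to verify the grants were applied
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh \
  --list --topic ATLAS_HOOK \
  --authorizer-properties "zookeeper.connect=<ZOOKEEPER_HOST>:2181"

# Same check for the entities topic
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh \
  --list --topic ATLAS_ENTITIES \
  --authorizer-properties "zookeeper.connect=<ZOOKEEPER_HOST>:2181"
```

The output should show the principals and operations granted; if a topic shows no ACLs in a Kerberized cluster, clients will be denied access to it.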

@Chiran Ravani

Thanks for your reply, but can you please explain why these Atlas and Kafka related errors appear when I am trying to sqoop?

It is because the Sqoop Atlas hook is enabled.
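For context, when the hook is enabled, each Sqoop job publishes lineage metadata to Atlas via the ATLAS_HOOK Kafka topic, which is why a Sqoop import ends up making Kafka calls at all. On HDP this is typically wired up through a property like the one below in sqoop-site.xml (shown here as a sketch; the class name is the stock Atlas Sqoop hook, but check your own cluster's configuration):

```xml
<!-- sqoop-site.xml: presence of this property is what enables the
     Atlas hook and triggers the Kafka/Atlas calls during a sqoop job -->
<property>
  <name>sqoop.job.data.publish.class</name>
  <value>org.apache.atlas.sqoop.hook.SqoopHook</value>
</property>
```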

On another cluster I do not have any such issue, nor did I grant permissions this way. However, on this cluster I am getting the issue, and it is a Kerberized cluster. Are these properties only relevant when the cluster is Kerberized?

Yes, we'd need the proper ACLs to access topics in a Kerberized environment.


@Anurag Mishra By default, Ambari should take care of this while starting the Atlas service: it runs an ACL script that grants the access below. If that script was not run, you can re-run it.

/usr/hdp/current/kafka-broker/bin/kafka-acls.sh \
  --authorizer-properties "zookeeper.connect=<ZOOKEEPER_HOSTNAME>:2181" \
  --add --topic ATLAS_HOOK --allow-principal 'User:*' --producer

/usr/hdp/current/kafka-broker/bin/kafka-acls.sh \
  --authorizer-properties "zookeeper.connect=<ZOOKEEPER_HOSTNAME>:2181" \
  --add --topic ATLAS_HOOK --allow-principal User:atlas --consumer --group atlas

/usr/hdp/current/kafka-broker/bin/kafka-acls.sh \
  --authorizer-properties "zookeeper.connect=<ZOOKEEPER_HOSTNAME>:2181" \
  --add --topic ATLAS_ENTITIES --allow-principal User:atlas --producer

/usr/hdp/current/kafka-broker/bin/kafka-acls.sh \
  --authorizer-properties "zookeeper.connect=<ZOOKEEPER_HOSTNAME>:2181" \
  --add --topic ATLAS_ENTITIES --allow-principal User:rangertagsync --consumer --group ranger_entities_consumer