Created 09-17-2018 06:33 AM
I am trying to Sqoop data from an RDBMS into Hive, but I am getting a strange error. Please find the attached image for the error.
Created 09-17-2018 06:46 AM
The problem could be caused by Kafka topic permissions. You may want to check the permissions on the Kafka topic ATLAS_HOOK. If you are using Ranger, please follow the instructions below.
Create the following Kafka policies:
permission=publish, create; group=public
permission=consume, create; user=atlas (for non-kerberized environments, set group=public)
permission=publish, create; user=atlas (for non-kerberized environments, set group=public)
permission=consume, create; group=public
Additionally, if Ranger is not in use, you can run the commands below as the Kafka user to grant the permissions.
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --add --group * --allow-principal User:* --operation All --authorizer-properties "zookeeper.connect=<ZOOKEEPER_HOST>:2181"
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --add --topic ATLAS_ENTITIES --allow-principal User:* --operation All --authorizer-properties "zookeeper.connect=<ZOOKEEPER_HOST>:2181"
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --add --topic ATLAS_HOOK --allow-principal User:* --operation All --authorizer-properties "zookeeper.connect=<ZOOKEEPER_HOST>:2181"
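To confirm the grants took effect, you can list the ACLs currently set on the topic with the same tool (a sketch, assuming the same <ZOOKEEPER_HOST> placeholder as above):

```shell
# List the ACLs currently applied to the ATLAS_HOOK topic;
# replace <ZOOKEEPER_HOST> with your ZooKeeper hostname
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --list --topic ATLAS_HOOK \
  --authorizer-properties "zookeeper.connect=<ZOOKEEPER_HOST>:2181"
```

The output should show Allow entries for the producer and consumer principals; if the topic has no ACLs at all, the add commands above were not applied.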
Created 09-17-2018 06:51 AM
Thanks for your reply, but can you please explain why these Atlas- and Kafka-related errors appear when I am trying to run Sqoop?
Created 09-17-2018 08:12 AM
It is because the Sqoop Atlas hook is enabled: each Sqoop job publishes lineage metadata to Atlas through the ATLAS_HOOK Kafka topic, so a Kafka permission problem surfaces as a Sqoop error.
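For context, the hook is wired in through sqoop-site.xml. A minimal sketch of the relevant property (as shipped with the Atlas Sqoop hook on HDP; verify the exact value against your cluster's configuration):

```xml
<!-- sqoop-site.xml: registers the Atlas hook that publishes
     Sqoop lineage metadata to the ATLAS_HOOK Kafka topic -->
<property>
  <name>sqoop.job.data.publish.class</name>
  <value>org.apache.atlas.sqoop.hook.SqoopHook</value>
</property>
```

Removing or overriding this property disables the hook, which makes it a quick way to confirm the hook is the source of the error, at the cost of Sqoop jobs no longer reporting lineage to Atlas.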
Created 09-17-2018 11:45 AM
On another cluster I do not have this issue, and I never granted permissions this way. However, on this cluster, which is kerberized, I am getting the error. Are these properties only relevant when the cluster is kerberized?
Created 09-17-2018 12:25 PM
Yes, the proper ACLs are needed to access topics in a kerberized environment.
Created 09-19-2018 06:09 AM
@Anurag Mishra By default, Ambari should take care of this while starting the Atlas service: it runs an ACL script that grants the access below. If that script was not run, you can re-run it.
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=<ZOOKEEPER_HOSTNAME>:2181 --add --topic ATLAS_HOOK --allow-principal User:* --producer
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=<ZOOKEEPER_HOSTNAME>:2181 --add --topic ATLAS_HOOK --allow-principal User:atlas --consumer --group atlas
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=<ZOOKEEPER_HOSTNAME>:2181 --add --topic ATLAS_ENTITIES --allow-principal User:atlas --producer
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=<ZOOKEEPER_HOSTNAME>:2181 --add --topic ATLAS_ENTITIES --allow-principal User:rangertagsync --consumer --group ranger_entities_consumer