Member since: 09-11-2015
Posts: 269
Kudos Received: 281
Solutions: 55
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3155 | 03-15-2017 07:12 AM |
| | 1766 | 03-14-2017 07:08 PM |
| | 2190 | 03-14-2017 03:36 PM |
| | 1799 | 02-28-2017 04:32 PM |
| | 1290 | 02-28-2017 10:02 AM |
10-26-2016
01:12 AM
The article "Modify Atlas Entity properties using REST API commands" contains a full description of how to update both the comment and description entity properties for Atlas-managed hive_table types.
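As a rough illustration of what such a REST update can look like (the host, GUID, credentials, and endpoint path below are placeholders, and the exact URL differs across Atlas versions, so treat this as an assumption to be checked against the linked article):

```shell
# Hypothetical sketch: update the 'comment' attribute of a hive_table entity.
# <atlas-host> and <guid> are placeholders; verify the endpoint for your Atlas version.
curl -u admin:admin -X POST \
  -H 'Content-Type: application/json' \
  "http://<atlas-host>:21000/api/atlas/entities/<guid>?property=comment" \
  -d '"updated table comment"'
```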
10-23-2016
04:58 PM
Thank you Ayub 🙂 This error was resolved after adding these properties. Now I am seeing another error that seems to be related to SSL.
09-13-2018
12:53 PM
At least for version 2.6.3 and above, the section "Running import script on kerberized cluster" is wrong. You don't need to provide any of the options (properties) indicated (except maybe the debug one, if you want debug output), because they are automatically detected and included in the script. Also, at least in 2.6.5, a direct execution of the script on a Kerberized cluster will fail because of the CLASSPATH generated into the script. I had to edit this, replacing many single JAR files with a glob inside their parent folder, in order for the command to run without error. If you have this problem, see my answer to the "Atlas can't see Hive tables" question.
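To illustrate the kind of edit described (the paths and jar names here are made up for the example; the generated script's actual CLASSPATH will differ), the fix replaces the long list of individual jars with directory globs:

```shell
# Before (illustrative): every jar listed individually in the generated import script
# CLASSPATH=/usr/hdp/current/atlas-server/hook/hive/some-lib-1.jar:/usr/hdp/current/atlas-server/hook/hive/some-lib-2.jar:...
# After: glob the parent folders instead, keeping the java command line short
CLASSPATH="/usr/hdp/current/atlas-server/hook/hive/*:/usr/hdp/current/hive-client/lib/*"
```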
10-12-2016
11:45 AM
Glad that you tried resetting the property to check the actual issue. I think the actual issue was the way you tried to access the Hive entities from the UI. You could have tried a DSL search like below.
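For reference, a DSL search can also be issued directly against the Atlas REST API. The host, port, credentials, and table name below are placeholders, and the discovery endpoint path varies between Atlas releases (older versions expose /api/atlas/discovery/search/dsl, newer ones /api/atlas/v2/search/dsl):

```shell
# Placeholder example: search for a hive_table entity by name via the DSL endpoint
curl -u admin:admin \
  "http://<atlas-host>:21000/api/atlas/discovery/search/dsl?query=hive_table%20where%20name%3D%27mytable%27"
```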
08-29-2017
03:03 PM
You have to log in to HBase, remove the atlas_titan table as below, and restart the service.

hbase(main):003:0> list
TABLE
ATLAS_ENTITY_AUDIT_EVENTS
atlas_titan
2 row(s) in 0.0070 seconds
=> ["ATLAS_ENTITY_AUDIT_EVENTS", "atlas_titan"]
hbase(main):005:0> disable 'atlas_titan'
0 row(s) in 2.5060 seconds
hbase(main):006:0> drop 'atlas_titan'
0 row(s) in 1.2730 seconds
hbase(main):007:0> exit

Then restart the Atlas service from the Ambari UI.
09-26-2016
01:13 PM
7 Kudos
This short post concentrates on solving the most common issue found while publishing metadata to the Kafka topic for the Atlas server on a secure (Kerberized) cluster.

Issue: With the AtlasHook configured for Hive/Storm/Falcon, you may see the below stack trace in the logs of the corresponding component. It means the AtlasHook is not able to publish metadata to Kafka for Atlas consumption. The reason for this failure could be:

- The Kafka topic to which the hook is trying to publish does not exist, OR
- The Kafka topic does not have proper access control lists (ACLs) configured for the user.

org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:335)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:188)
at org.apache.atlas.kafka.KafkaNotification.createProducer(KafkaNotification.java:312)
at org.apache.atlas.kafka.KafkaNotification.sendInternal(KafkaNotification.java:220)
at org.apache.atlas.notification.AbstractNotification.send(AbstractNotification.java:84)
at org.apache.atlas.hook.AtlasHook.notifyEntitiesInternal(AtlasHook.java:126)
at org.apache.atlas.hook.AtlasHook.notifyEntities(AtlasHook.java:111)
at org.apache.atlas.hook.AtlasHook.notifyEntities(AtlasHook.java:157)
at org.apache.atlas.hive.hook.HiveHook.fireAndForget(HiveHook.java:274)
at org.apache.atlas.hive.hook.HiveHook.access$200(HiveHook.java:81)
at org.apache.atlas.hive.hook.HiveHook$2.run(HiveHook.java:185)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.KafkaException: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner authentication information from the user
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:86)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:71)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:83)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:277)
... 15 more
Caused by: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner authentication information from the user
at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:940)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:760)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
at sun.reflect.GeneratedMethodAccessor54.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.apache.kafka.common.security.authenticator.AbstractLogin.login(AbstractLogin.java:69)
at org.apache.kafka.common.security.kerberos.KerberosLogin.login(KerberosLogin.java:110)
at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:46)
at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:68)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:78)
... 18 more
Resolution: Below are the steps required in secure environments to set up the Kafka topics used by Atlas.

Log in with the Kafka service user identity, then create the Kafka topics ATLAS_HOOK and ATLAS_ENTITIES with the following commands:

$KAFKA_HOME/bin/kafka-topics.sh --zookeeper $ZK_ENDPOINT --topic ATLAS_HOOK --create --partitions 1 --replication-factor $KAFKA_REPL_FACTOR
$KAFKA_HOME/bin/kafka-topics.sh --zookeeper $ZK_ENDPOINT --topic ATLAS_ENTITIES --create --partitions 1 --replication-factor $KAFKA_REPL_FACTOR
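Once created, the topics can be verified with the same tooling (using the same placeholder variables as above); --describe shows each topic's partition count and replication factor:

```shell
# Confirm both Atlas notification topics exist and check their settings
$KAFKA_HOME/bin/kafka-topics.sh --zookeeper $ZK_ENDPOINT --describe --topic ATLAS_HOOK
$KAFKA_HOME/bin/kafka-topics.sh --zookeeper $ZK_ENDPOINT --describe --topic ATLAS_ENTITIES
```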
Set up ACLs on these topics with the following commands:

$KAFKA_HOME/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=$ZK_ENDPOINT --add --topic ATLAS_HOOK --allow-principal User:* --producer
$KAFKA_HOME/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=$ZK_ENDPOINT --add --topic ATLAS_HOOK --allow-principal User:$ATLAS_USER --consumer --group atlas
$KAFKA_HOME/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=$ZK_ENDPOINT --add --topic ATLAS_ENTITIES --allow-principal User:$ATLAS_USER --producer
$KAFKA_HOME/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=$ZK_ENDPOINT --add --topic ATLAS_ENTITIES --allow-principal User:$RANGER_USER --consumer --group ranger_entities_consumer
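To confirm the ACLs were applied as intended, the current entries on each topic can be listed (same placeholder variables as above):

```shell
# List the ACLs currently set on the Atlas notification topics
$KAFKA_HOME/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=$ZK_ENDPOINT --list --topic ATLAS_HOOK
$KAFKA_HOME/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=$ZK_ENDPOINT --list --topic ATLAS_ENTITIES
```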
If Ranger authorization is enabled for Kafka, Ranger policies should be set up for the following accesses:

topic ATLAS_HOOK: { group=public; permission=publish }; { user=$ATLAS_USER; permission=consume }
topic ATLAS_ENTITIES: { user=$ATLAS_USER; permission=publish }; { user=$RANGER_USER; permission=consume }

Also check whether the atlas-application.properties file under the hook (storm/hive/falcon) component configuration directory (typically under /etc/storm/conf) has the right keytab and principal information. Below are the two properties you should look for:

atlas.jaas.KafkaClient.option.principal=<component_principal>
atlas.jaas.KafkaClient.option.keyTab=<component_keytab_path>

For example:

atlas.jaas.KafkaClient.option.principal=storm-cl1/_HOST@EXAMPLE.COM
atlas.jaas.KafkaClient.option.keyTab=/etc//keytabs/storm.headless.keytab

Notes:
KAFKA_HOME is typically /usr/hdp/current/kafka-broker.
ZK_ENDPOINT should be set to the ZooKeeper URL for Kafka.
KAFKA_REPL_FACTOR should be set to the value of the Atlas configuration 'atlas.notification.replicas'.
ATLAS_USER should be the Kerberos identity of the Atlas server, typically 'atlas'.
RANGER_USER should be the Kerberos identity of the Ranger Tagsync process, typically 'rangertagsync'.
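A quick way to check that the configured keytab and principal actually work is a manual kinit as the component user. The keytab path and principal below are examples only; substitute the values from your own properties file, and note that a literal _HOST placeholder must be replaced with the actual hostname:

```shell
# Example only: verify a ticket can be obtained from the configured keytab.
# Replace the keytab path and principal with the values from your properties file.
kinit -kt /etc/security/keytabs/storm.headless.keytab storm-cl1/myhost.example.com@EXAMPLE.COM
klist   # should show a valid TGT for the principal
```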
09-27-2016
10:32 AM
@jk: There was some network slowness and it was timing out as well. Once that was resolved, I ran the steps you gave above and this problem was resolved. It is now being installed without any other errors.
09-26-2016
10:30 AM
1 Kudo
A good, descriptive article on how to install Atlas HA via Ambari.
07-19-2018
04:52 AM
Hi Alex, it is the Ranger user sync.