Member since: 09-11-2015
Posts: 269
Kudos Received: 281
Solutions: 55
09-26-2016
01:39 PM
@Saurabh Can you check whether there is any proxy configured for your cluster? A similar issue is reported here; please check if it helps.
09-26-2016
01:13 PM
7 Kudos
This short post concentrates on solving the most common issue found while publishing metadata to a Kafka topic for the Atlas server on a secure (Kerberized) cluster.

Issue: With the AtlasHook configured for Hive/Storm/Falcon, you see the stack trace below in the logs of the corresponding component. This means the AtlasHook is not able to publish metadata to Kafka for Atlas consumption. The reason for this failure could be:

- The Kafka topic to which the hook is trying to publish does not exist, OR
- The Kafka topic does not have proper access control lists (ACLs) configured for the user.

org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:335)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:188)
at org.apache.atlas.kafka.KafkaNotification.createProducer(KafkaNotification.java:312)
at org.apache.atlas.kafka.KafkaNotification.sendInternal(KafkaNotification.java:220)
at org.apache.atlas.notification.AbstractNotification.send(AbstractNotification.java:84)
at org.apache.atlas.hook.AtlasHook.notifyEntitiesInternal(AtlasHook.java:126)
at org.apache.atlas.hook.AtlasHook.notifyEntities(AtlasHook.java:111)
at org.apache.atlas.hook.AtlasHook.notifyEntities(AtlasHook.java:157)
at org.apache.atlas.hive.hook.HiveHook.fireAndForget(HiveHook.java:274)
at org.apache.atlas.hive.hook.HiveHook.access$200(HiveHook.java:81)
at org.apache.atlas.hive.hook.HiveHook$2.run(HiveHook.java:185)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.KafkaException: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner authentication information from the user
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:86)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:71)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:83)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:277)
... 15 more
Caused by: javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner authentication information from the user
at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:940)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:760)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
at sun.reflect.GeneratedMethodAccessor54.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.apache.kafka.common.security.authenticator.AbstractLogin.login(AbstractLogin.java:69)
at org.apache.kafka.common.security.kerberos.KerberosLogin.login(KerberosLogin.java:110)
at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:46)
at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:68)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:78)
... 18 more
Resolution: Below are the steps required in secure environments to set up the Kafka topics used by Atlas:

1. Log in with the Kafka service user identity.
2. Create the Kafka topics ATLAS_HOOK and ATLAS_ENTITIES with the following commands:

$KAFKA_HOME/bin/kafka-topics.sh --zookeeper $ZK_ENDPOINT --topic ATLAS_HOOK --create --partitions 1 --replication-factor $KAFKA_REPL_FACTOR
$KAFKA_HOME/bin/kafka-topics.sh --zookeeper $ZK_ENDPOINT --topic ATLAS_ENTITIES --create --partitions 1 --replication-factor $KAFKA_REPL_FACTOR
3. Set up ACLs on these topics with the following commands:

$KAFKA_HOME/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=$ZK_ENDPOINT --add --topic ATLAS_HOOK --allow-principal User:* --producer
$KAFKA_HOME/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=$ZK_ENDPOINT --add --topic ATLAS_HOOK --allow-principal User:$ATLAS_USER --consumer --group atlas
$KAFKA_HOME/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=$ZK_ENDPOINT --add --topic ATLAS_ENTITIES --allow-principal User:$ATLAS_USER --producer
$KAFKA_HOME/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=$ZK_ENDPOINT --add --topic ATLAS_ENTITIES --allow-principal User:$RANGER_USER --consumer --group ranger_entities_consumer
4. If Ranger authorization is enabled for Kafka, Ranger policies should be set up for the following accesses:

topic ATLAS_HOOK: { group=public; permission=publish }; { user=$ATLAS_USER; permission=consume }
topic ATLAS_ENTITIES: { user=$ATLAS_USER; permission=publish }; { user=$RANGER_USER; permission=consume }

5. Also check that the atlas-application.properties file under the hook component's configuration directory (Storm/Hive/Falcon; typically /etc/storm/conf for Storm) has the right keytab and principal information. Below are the two properties you should look for:

atlas.jaas.KafkaClient.option.principal=<component_principal>
atlas.jaas.KafkaClient.option.keyTab=<component_keytab_path>
For example:
atlas.jaas.KafkaClient.option.principal=storm-cl1/_HOST@EXAMPLE.COM
atlas.jaas.KafkaClient.option.keyTab=/etc//keytabs/storm.headless.keytab

Where:
KAFKA_HOME is typically /usr/hdp/current/kafka-broker
ZK_ENDPOINT should be set to the ZooKeeper URL for Kafka
KAFKA_REPL_FACTOR should be set to the value of the Atlas configuration 'atlas.notification.replicas'
ATLAS_USER should be the Kerberos identity of the Atlas server, typically 'atlas'
RANGER_USER should be the Kerberos identity of the Ranger Tagsync process, typically 'rangertagsync'
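To double-check the setup before re-running the hook, the same tools can verify what was created (a sketch reusing the variables above; klist only confirms that the configured principal is present in the keytab):

# Verify the topics exist
$KAFKA_HOME/bin/kafka-topics.sh --zookeeper $ZK_ENDPOINT --describe --topic ATLAS_HOOK
$KAFKA_HOME/bin/kafka-topics.sh --zookeeper $ZK_ENDPOINT --describe --topic ATLAS_ENTITIES

# Verify the ACLs that were added
$KAFKA_HOME/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=$ZK_ENDPOINT --list --topic ATLAS_HOOK
$KAFKA_HOME/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=$ZK_ENDPOINT --list --topic ATLAS_ENTITIES

# Verify the principal configured in atlas-application.properties exists in the keytab
klist -kt <component_keytab_path>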
09-26-2016
11:53 AM
1 Kudo
@Saurabh Can you please verify that the BASE URL is correct in the repo files and is accessible from all the nodes? Also, please check if there are any issues with the network.
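For example, on a yum-based node you could check along these lines (a sketch; the repo directory is the usual default and the URL is a placeholder for whatever grep prints in your repo files):

# Show the configured base URLs in the repo files
grep -i baseurl /etc/yum.repos.d/*.repo

# From each node, confirm the printed base URL is reachable (URL below is a placeholder)
curl -I http://<repo-host>/<repo-path>/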
09-25-2016
05:41 AM
1 Kudo
@Avijeet Dash Yes, the Atlas UI should work in a Kerberized environment as well. Can you try logging in as the Atlas user?
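If the UI login keeps failing, one way to rule out the server side is a SPNEGO request from a node with a valid ticket (a sketch; the host name is illustrative and 21000 is the default Atlas port):

# Get a ticket for the atlas user, then query the admin status endpoint
kinit atlas
curl --negotiate -u : http://<atlas-host>:21000/api/atlas/admin/status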
09-24-2016
12:20 PM
6 Kudos
This article assumes that you have a cluster with more than one node (which is a requirement for enabling HA on Atlas). Also make sure that Atlas is up and running on that cluster. Please refer to this documentation link for deploying a cluster with Atlas enabled. Prerequisites for the High Availability feature in Atlas: the following prerequisites must be met for setting up the High Availability feature.
Ensure that you install Apache ZooKeeper on a cluster of machines (a minimum of 3 servers is recommended for production). Select 2 or more physical machines to run the Atlas Web Service instances on. These machines define what we refer to as a 'server ensemble' for Atlas.

Step 1: Verify from the Ambari UI that Atlas is up and running.
Step 2: Stop Atlas using Ambari.
Step 3: Navigate to the host page in Ambari where the Atlas service is not installed and add one more Atlas service.
Step 4: If the Infra Solr client is not installed on the host where we are trying to install another instance of Atlas, Ambari will display a pop-up window. Add an Infra Solr client instance on the same host.
Step 5: After successfully adding the Infra Solr client, add the Atlas server instance by following Step 3.
Step 6: Start the Atlas service now.
Step 7: Verify from the Ambari UI that both Atlas services are up and running.
Step 8: Check which instance of Atlas is active and which one is passive: an HTTP GET request on an Atlas instance shows its status as "Active" or "Passive" (see the example below).
Step 9: Atlas is now running in HA (Active-Passive) mode. With this, you should be able to access the Atlas UI (the link can be pulled from Ambari quick links). For more information on Atlas High Availability, please refer to http://atlas.incubator.apache.org/HighAvailability.html.
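For reference, here is what that status check can look like from the command line (host names are illustrative, 21000 is the default Atlas port, and the output shown is representative; on a Kerberized cluster, kinit first and add --negotiate -u : to curl):

curl http://atlas-host1.example.com:21000/api/atlas/admin/status
{"Status":"ACTIVE"}

curl http://atlas-host2.example.com:21000/api/atlas/admin/status
{"Status":"PASSIVE"}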
09-24-2016
10:35 AM
2 Kudos
@Vasilis Vagias The Hive View is trying to load the HiveHook class from the designated path because the class is referenced in hive-site.xml. Please check that the HiveHook class is available and has proper permissions configured. I suspect this is related to the corrupt VM that you encountered earlier. With the latest sandbox you should not see this issue. Do let me know if you face this with the latest sandbox as well.
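As a quick check, you can confirm the hook registration and that the hook jars are present (a sketch; /etc/hive/conf and the hook directory below are the typical HDP locations, adjust for your layout):

# The Atlas hook is registered as a Hive post-execution hook in hive-site.xml
grep -A1 hive.exec.post.hooks /etc/hive/conf/hive-site.xml
# Expect the value to include org.apache.atlas.hive.hook.HiveHook

# Verify the hook jars exist and are readable
ls -l /usr/hdp/current/atlas-server/hook/hive/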
09-23-2016
04:08 PM
2 Kudos
@Vasilis Vagias It seems the Atlas service is not running. Or can you check whether the atlas.rest.address property in atlas-application.properties is pointing to the Atlas server?
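Two quick checks along these lines can confirm both sides (a sketch; the conf path, host, and port are illustrative defaults):

# What the component thinks the Atlas endpoint is
grep atlas.rest.address /etc/hive/conf/atlas-application.properties

# Whether Atlas is actually answering on that address
curl http://<atlas-host>:21000/api/atlas/admin/version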
09-23-2016
02:50 PM
2 Kudos
@Vasilis Vagias The issue seems to be a corrupted Hive hook bin path. Copying the required jars from another location to the Hive hook path and adding them to HADOOP_CLASSPATH should fix the issue. With this, import-hive.sh should import the Hive metadata to Atlas successfully.
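A rough sketch of that workaround (the hook and hook-bin paths are the typical HDP layout; the source path for the intact jars is a placeholder for wherever you have a good copy):

# Restore the hook jars into the expected hook directory
cp /path/to/intact/hook/jars/*.jar /usr/hdp/current/atlas-server/hook/hive/

# Put the hook jars on the Hadoop classpath for the import script
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/usr/hdp/current/atlas-server/hook/hive/*

# Re-run the metadata import
/usr/hdp/current/atlas-server/hook-bin/import-hive.sh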
09-23-2016
01:00 PM
1 Kudo
@Avijeet Dash The log you pasted is just a warning; it may not be the cause of the issue. Can you also clear the cookies from your browser and try again? Sometimes a stale cookie can result in this issue. Can you please attach the application log for debugging? Also, it seems your cluster is a secure one (HDP-2.4). HDP-2.5 has many enhancements with respect to security, so the recommendation is to upgrade the cluster to the latest HDP release. Thanks, Ayub Khan
09-23-2016
07:52 AM
2 Kudos
@gkeys Looks like the permissions of the directory "/var/lib/ambari-agent/cache/custom_actions/scripts/" are messed up. From the output, it seems the scripts directory has not been given execute permission (which is needed for all directories). To view the permissions on that directory, just execute:

ls -ld /var/lib/ambari-agent/cache/custom_actions/scripts/

To set the read permission on files and the read and execute permissions on directories recursively, use this command:

chmod -R a+rX /var/lib/ambari-agent/cache/custom_actions/scripts/
Here's an explanation of that command:
chmod is the name of the command, used for changing the permissions of files.
-R is the recursive flag. It means apply this command to the directory, all of its children, its children's children, and so on.
a stands for all: apply these permissions to the owner of the file, the group owner of the file, and all other users.
+ means add the following permissions if they aren't set already.
r means the read permission.
X means the execute permission, but only on directories. Lower-case x would mean the execute permission on both files and directories.

More information can be found in the manpage for the chmod command. With the above steps executed, you should be able to test the Ranger DB connection. Hope this will solve the issue.
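For illustration, here is a hypothetical before/after on a directory missing the execute bit (the listing output is representative, not from a real system):

$ ls -ld /var/lib/ambari-agent/cache/custom_actions/scripts/
drw-r--r-- 2 root root 4096 Sep 23 07:00 /var/lib/ambari-agent/cache/custom_actions/scripts/

$ chmod -R a+rX /var/lib/ambari-agent/cache/custom_actions/scripts/

$ ls -ld /var/lib/ambari-agent/cache/custom_actions/scripts/
drwxr-xr-x 2 root root 4096 Sep 23 07:00 /var/lib/ambari-agent/cache/custom_actions/scripts/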