Member since: 06-06-2016
Posts: 4
Kudos Received: 0
Solutions: 0
05-05-2021 04:47 PM
Hi,

We recently installed the latest version of CDSW on our existing kerberized HDP cluster. When we try to add a principal and password under the Hadoop Authentication tab, we get the following error:

Could not authenticate using provided Kerberos principal and password.

When we try to upload a keytab instead, it fails with:

Keytab didn't upload successfully, please try again later.

The CDSW UI otherwise works fine, and the cdsw validate command shows no errors. krb5.conf is present and looks correct (it matches what we have on the other HDP nodes), and from the Unix command line the user can kinit and interact with HDFS, Hive, and Spark without any issue.

If anyone has faced this issue or can provide some insight, it would be a great help.

Thanks,
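To rule out the keytab and principal themselves, a minimal check along these lines can be run on the CDSW master host (a sketch using the standard Hadoop UserGroupInformation API; the principal and keytab path below are placeholders for our real ones):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabCheck {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute the real principal and keytab path.
        String principal = "cdswuser@EXAMPLE.COM";
        String keytab = "/tmp/cdswuser.keytab";

        // Force Kerberos authentication so the login actually hits the KDC.
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // Throws an IOException if the KDC rejects the principal/keytab pair.
        UserGroupInformation.loginUserFromKeytab(principal, keytab);
        System.out.println("Logged in as " + UserGroupInformation.getLoginUser());
    }
}
```

If this succeeds on the CDSW host with the same principal and keytab, the problem is presumably on the CDSW side rather than with Kerberos itself.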
12-01-2020 05:05 AM
Hi,

I have installed Atlas 2.0.0 from the tarball on a Cloudera 6.1 cluster, using HBase and Solr as the backend store. When importing both Hive and Kafka lineage, I see issues in the UI: for Hive I can only see lineage for external tables, not for managed tables, and column-level lineage is not visible either. I am facing the same issue with Kafka lineage. I tried with embedded Solr and Kafka, and the same issue persists.

I have one Kafka broker with offsets.topic.replication.factor=1, the cluster is unsecured, and I can see messages flowing in both Kafka topics without any issue using the console consumer and producer.

The following is the error I see in application.log. I have been struggling with this issue for quite some time, so any input is highly appreciated:

ERROR - [pool-2-thread-6 - bd16ff99-871b-4248-a41b-2a022a6fabee:] ~ graph rollback due to exception AtlasBaseException:Instance hive_table with unique attribute {qualifiedName=vdr_db.customer_vw8@primary} does not exist (GraphTransactionInterceptor:166)

The above error appears for hive_table and hive_process, and for Kafka topics too.

Thanks,
Vishnu Ravi
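For what it's worth, the entity named in the error can be looked up directly against the Atlas v2 REST API to confirm whether it was ever created. A rough sketch (the host, port, and admin credentials below are placeholders assuming a default Atlas install):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AtlasEntityCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and credentials; substitute your Atlas host.
        String endpoint = "http://atlas-host:21000/api/atlas/v2/entity/uniqueAttribute/type/hive_table"
                + "?attr:qualifiedName=vdr_db.customer_vw8@primary";

        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        String creds = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + creds);

        // 200 means the hive_table entity exists; 404 matches the
        // "does not exist" message from GraphTransactionInterceptor.
        int status = conn.getResponseCode();
        System.out.println("HTTP " + status);
        if (status == 200) {
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                in.lines().forEach(System.out::println);
            }
        }
    }
}
```

If the lookup returns 404, the hook message referencing the table apparently arrived before the table entity itself was created, which would point at the import/hook side rather than the UI.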
Labels: Apache Atlas
04-05-2017 09:19 AM
Hi,

We have enabled KMS and encryption on our cluster, and since then we randomly get the following error. Is there anything we can try to resolve this issue?

org.apache.spark.SparkException: Job aborted due to stage failure: Task 58 in stage 12.0 failed 4 times, most recent failure: Lost task 58.3 in stage 12.0 (TID 4151, lpdn0307.): java.io.IOException: java.lang.IllegalArgumentException: java.net.UnknownHostException: edhpen1262
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:572)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:850)
at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:209)
at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:205)
at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:94)
at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:205)
at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1440)
at org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:1510)

We verified DNS resolution and the hostname, and everything looks good. What else could be the issue?

Thanks,
Vishnu Ravi
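Since the failure is random across task retries, one thought is to check on each worker node that the configured KMS host actually resolves there. A rough sketch (the two property names are assumptions that vary by Hadoop version, and edhpen1262 is just the host from our stack trace):

```java
import java.net.InetAddress;
import org.apache.hadoop.conf.Configuration;

public class KmsHostCheck {
    public static void main(String[] args) throws Exception {
        // Print the key provider URI the HDFS client actually sees on this node.
        // Property names differ across Hadoop versions; these are two common ones.
        Configuration conf = new Configuration();
        for (String key : new String[] {
                "dfs.encryption.key.provider.uri",
                "hadoop.security.key.provider.path"}) {
            System.out.println(key + " = " + conf.get(key, "<not set>"));
        }

        // Try to resolve the host from the stack trace on this node;
        // this throws java.net.UnknownHostException on the nodes that fail.
        String host = args.length > 0 ? args[0] : "edhpen1262";
        System.out.println(host + " -> " + InetAddress.getByName(host).getHostAddress());
    }
}
```

Running this on every node that hosts Spark executors would show whether only some nodes fail to resolve the KMS host, which could explain why the error appears randomly.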
Labels: Apache Spark