02-01-2022
10:55 PM
Hello,
I'm facing an "ERROR transport.TSaslTransport: SASL negotiation failure javax.security.sasl.SaslException: GSS initiate failed" error while connecting to Kerberized Hive (CDH 6.3.4) using Beeline installed on a remote machine.
I'm able to perform 'kinit' on the remote machine.
Beeline version on the server side: Hive 2.1.1-cdh6.3.4
Beeline version on the remote machine: Hive 2.1.1-cdh6.3.4
I'm able to connect to a non-Kerberized Hive using the same Beeline.
Error message:
[root@localhost /]# beeline -u "jdbc:hive2://test-cdh.test.com:10000/;principal=hive/test-cdh.test.com@test.COM"
which: no hbase in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hive/lib/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://test-cdh.test.com:10000/;principal=hive/test-cdh.test.com@test.COM
[main]: ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) ~[?:1.8.0_322]
    at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94) ~[hive-exec-2.1.1-cdh6.3.4.jar:2.1.1-cdh6.3.4]
    at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) [hive-exec-2.1.1-cdh6.3.4.jar:2.1.1-cdh6.3.4]
    at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) [hive-exec-2.1.1-cdh6.3.4.jar:2.1.1-cdh6.3.4]
    at
Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
    at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:162) ~[?:1.8.0_322]
[main]: WARN jdbc.HiveConnection: Failed to connect to test-cdh.test.com:10000
Unknown HS2 problem when communicating with Thrift server.
Error: Could not open client transport with JDBC Uri: jdbc:hive2://test-cdh.test.com:10000/;principal=hive/test-cdh.test.com@test.COM: GSS initiate failed (state=08S01,code=0)
Beeline version 2.1.1-cdh6.3.4 by Apache Hive
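The "Failed to find any Kerberos tgt" cause in the trace above usually means the JVM running Beeline cannot see a valid TGT in the credential cache. A minimal sketch of that check follows; the `klist` output is embedded as sample text for illustration (principal and realm are placeholders), and on a real machine you would run `klist` directly.

```shell
# Sample klist output (illustrative only - run `klist` on the real host):
sample_klist='Ticket cache: FILE:/tmp/krb5cc_0
Default principal: user@test.COM

Valid starting     Expires            Service principal
02/01/22 22:00:00  02/02/22 22:00:00  krbtgt/test.COM@test.COM'

# A usable ticket cache must contain a krbtgt entry for the realm:
if printf '%s\n' "$sample_klist" | grep -q 'krbtgt/'; then
  echo "TGT present"
else
  echo "no TGT - obtain one with: kinit user@test.COM"
fi
```

If a ticket is present but the error persists, Kerberos debug output can usually be enabled for Beeline with `export HADOOP_OPTS="-Dsun.security.krb5.debug=true"` before reconnecting, which shows which credential cache the JVM is actually reading.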
09-20-2021
05:58 AM
@dmharshit There are two things going on here.
1. The link https://archive.cloudera.com/cm7/ is not valid for this purpose because it points to Cloudera Manager, not CDH (the Cloudera Runtime version).
2. What you need is to download the CDH binary for the CDP 7.1.6 version, which can be found here: https://archive.cloudera.com/cdh7/latest/parcels/
Or, for the Spark Standalone parcels: https://archive.cloudera.com/p/spark3/3.1.7270.0/
07-12-2021
01:59 AM
@tarekabouzeid91 wrote: "I assume you are using the Capacity Scheduler, not the Fair Scheduler; that's why queues won't take available resources from other queues. You can read more about that here: Comparison of Fair Scheduler with Capacity Scheduler | CDP Public Cloud (cloudera.com)."

Yes, I am using the Capacity Scheduler:
yarn.resourcemanager.scheduler.class = org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
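The scheduler in use can be confirmed from the property quoted above. A small sketch of that check follows; the XML snippet is a stand-in for the real yarn-site.xml on the cluster (the property name and value are taken from the post).

```shell
# Illustrative yarn-site.xml fragment; on a real cluster, inspect the actual file
# or the ResourceManager configuration in the management UI.
sample_yarn_site='<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>'

# Extract which scheduler class is configured:
printf '%s\n' "$sample_yarn_site" | grep -oE 'CapacityScheduler|FairScheduler' | head -n 1
```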
06-17-2021
11:38 PM
@dmharshit, when you run this query, does a YARN application ID get generated, or does the query fail before triggering a YARN application? If a YARN application is triggered, please collect the logs of that particular application and check for errors:

yarn logs -applicationId your_application_id > your_application_id.log 2>&1

Check whether this log file contains any detailed errors, and share them.

Thanks,
Megh
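The collect-then-scan step above can be sketched as follows. The application ID is a placeholder, the `yarn logs` call is left commented since it needs a live cluster, and a sample log file is written so the grep step can be shown end to end.

```shell
APP_ID="application_1623000000000_0001"   # placeholder - use your real application ID

# On the cluster, collect the aggregated container logs:
#   yarn logs -applicationId "$APP_ID" > "$APP_ID.log" 2>&1

# Sample log content so the scan below is concrete:
printf 'INFO starting container\nERROR Container exited with a non-zero exit code\nINFO stopping\n' > "$APP_ID.log"

# Scan for the first failure with its line number:
grep -n -E 'ERROR|Exception' "$APP_ID.log"
```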
06-14-2021
10:14 PM
If I edit /etc/oozie/conf/oozie-site.xml and check the path set for oozie.service.SparkConfigurationService.spark.configurations, it is /usr/hdp/current/spark-client/conf. The actual path I see on my master server is /usr/hdp/current/spark2-client/conf. If I try to edit the path and restart the service, my change is replaced with the old value. As seen in the listing, the soft links for Spark show in black because their targets don't exist.
Regards,
Amey.
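One possible workaround for the broken-link situation described above is to bridge the missing spark-client path to the existing spark2-client directory with a symlink. This is a sketch only: the /usr/hdp paths come from the post and should be verified on the real node before applying; the demonstration below uses a temporary directory instead.

```shell
# Demonstrate the symlink bridge in a temp dir (stand-in for /usr/hdp/current):
tmp=$(mktemp -d)
mkdir -p "$tmp/spark2-client/conf"       # the directory that actually exists
ln -s "$tmp/spark2-client" "$tmp/spark-client"   # the name Oozie expects

# The expected path now resolves through the link:
test -d "$tmp/spark-client/conf" && echo "conf reachable through symlink"
```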
06-13-2021
09:23 PM
As a workaround, I copied the existing directory structure onto the server running the HiveServer service, and that helped. The query now executes and the output lands on the Hive server, from where I copy it manually to the edge node for the developers. I'm not sure whether there is a technique to copy the processed HDFS data directly to the local filesystem of the edge node.
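One way to avoid the manual copy, assuming the edge node has a configured HDFS client, is to write the query result to HDFS and pull it from the edge node. All paths and the table name below are placeholders, not taken from the original post.

```shell
# In beeline (writes the result set to an HDFS directory):
#   INSERT OVERWRITE DIRECTORY '/tmp/query_out' SELECT * FROM some_table;
#
# Then, on the edge node (requires an HDFS client and matching cluster config):
#   hdfs dfs -get /tmp/query_out /home/developer/query_out
echo "edge-node pull: hdfs dfs -get /tmp/query_out /home/developer/query_out"
```

This sidesteps the HiveServer-local write entirely, because `INSERT OVERWRITE DIRECTORY` (without LOCAL) targets HDFS rather than the server's local filesystem.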
06-09-2021
04:24 AM
Problem is still there.

21/06/09 16:50:22 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
21/06/09 16:50:22 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
21/06/09 16:50:22 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
21/06/09 16:50:22 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
21/06/09 16:50:22 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-1127.19.1.el7.x86_64
21/06/09 16:50:22 INFO zookeeper.ZooKeeper: Client environment:user.name=eagledev
21/06/09 16:50:22 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/user1
21/06/09 16:50:22 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/user1
21/06/09 16:50:22 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=hdp-slave1.mydomain.com:2181,hdp-slave2.mydomain.com:2181,hdp-master.mydomain.com:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@5ace1ed4
21/06/09 16:50:22 INFO zookeeper.ClientCnxn: Opening socket connection to server hdp-slave1.mydomain.com/10.200.104.188:2181. Will not attempt to authenticate using SASL (unknown error)
21/06/09 16:50:22 INFO zookeeper.ClientCnxn: Socket connection established to hdp-slave1.mydomain.com/10.200.104.188:2181, initiating session
21/06/09 16:50:22 INFO imps.CuratorFrameworkImpl: backgroundOperationsLoop exiting
21/06/09 16:50:22 INFO zookeeper.ClientCnxn: Session establishment complete on server hdp-slave1.mydomain.com/10.200.104.188:2181, sessionid = 0x279ef5fd2c3006b, negotiated timeout = 60000
21/06/09 16:50:22 INFO zookeeper.ZooKeeper: Session: 0x279ef5fd2c3006b closed
21/06/09 16:50:22 INFO zookeeper.ClientCnxn: EventThread shut down
org.apache.curator.CuratorZookeeperClient.startAdvancedTracer(Ljava/lang/String;)Lorg/apache/curator/drivers/OperationTrace;
Beeline version 3.1.0.3.1.4.0-315 by Apache Hive
0: jdbc:hive2://hdp-slave1.mydomain.com:2 (closed)>
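The bare `org.apache.curator.CuratorZookeeperClient.startAdvancedTracer(...)` line in the output above has the shape of a NoSuchMethodError, which typically points at two incompatible Apache Curator jars on Beeline's classpath. A sketch of how one might confirm that follows; the `find` paths are typical HDP locations (an assumption, not confirmed by the post), and the version comparison is shown on sample output.

```shell
# On an HDP node, candidate jars might be listed with:
#   find /usr/hdp/current -name 'curator-*.jar' 2>/dev/null
#
# Sample output for illustration - two curator-client versions is the conflict signal:
sample_jars='curator-client-2.12.0.jar
curator-framework-2.12.0.jar
curator-client-4.2.0.jar'

# Extract the distinct curator-client versions present:
printf '%s\n' "$sample_jars" | sed -n 's/^curator-client-\(.*\)\.jar$/\1/p' | sort -u
```

Seeing more than one version here would suggest cleaning up the classpath so that only one Curator version is visible to Beeline.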
06-07-2021
10:02 PM
I ensured that the path exists on the local system, i.e. home/user1/user1_Run_Jar/AB_HIVE_1805 2021/neo4j-graph-preparation/data-files/user_stats, and that it has 777 permissions recursively. But I still get the same error.
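One detail worth checking: the path quoted above contains a space ("AB_HIVE_1805 2021"). An unquoted space splits the path into two arguments, which can produce "not found" errors even when the directory exists with correct permissions. A small illustration with a placeholder path:

```shell
# Placeholder path containing a space, mirroring the one in the post:
dir='/tmp/AB_HIVE_1805 2021/user_stats'

mkdir -p "$dir"                       # quoted: creates one directory tree
test -d "$dir" && echo "quoted path resolves"
# Unquoted, the same string would be passed as two separate arguments.
```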
06-03-2021
11:54 PM
Hello,
You can take a look at the Cloudera Manager Agent logs to get details about the issue.
Reference: https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/managing-clusters/topics/cm-manage-agent-logs.html
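As a sketch of that suggestion: the agent log on a CDP Private Cloud Base host normally lives under /var/log/cloudera-scm-agent/ (verify on the affected host). The `tail` command is left commented since it needs a real host; the filtering step is shown on sample text so the pipeline is concrete.

```shell
# On the affected host:
#   tail -n 200 /var/log/cloudera-scm-agent/cloudera-scm-agent.log
#
# Filtering sample log text for problems:
printf 'INFO heartbeat ok\nERROR Failed to connect to previous supervisor\n' | grep -i 'error\|warn'
```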
06-02-2021
01:50 PM
@dmharshit Please have a look at my other post on keytabs: https://community.cloudera.com/t5/Support-Questions/Headless-Keytab-Vs-User-Keytab-Vs-Service-Keytab/m-p/175277/highlight/true#M137536

Having said that, you have switched to the hive user and are attempting to use the hdfs headless keytab. That's not possible. As the root user, run the following steps:

# su - hdfs
[hdfs@server-hdp ~]$ kinit -kt /etc/security/keytabs/hdfs.headless.keytab

Now you should have a valid ticket:

[hdfs@server-hdp ~]$ klist

Happy hadooping !!!
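A hedged addition to the steps above: before running kinit, it can help to confirm which principal the keytab actually contains, since kinit must use that exact name. `klist -kt` is standard MIT Kerberos; the keytab path is from the post, and the principal shown is hypothetical.

```shell
# Inspect the keytab's entries (run on the cluster host):
#   klist -kt /etc/security/keytabs/hdfs.headless.keytab
#
# Then pass the listed principal explicitly, e.g. (principal name is hypothetical):
#   kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-mycluster@EXAMPLE.COM
echo "inspect keytab: klist -kt /etc/security/keytabs/hdfs.headless.keytab"
```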