
Unable to create roles in either beeline or Hue.


New Contributor

Hi Everyone,

 

I am using CDH 5.14.2 and the cluster is Kerberized (integrated with LDAP). After enabling Sentry, I am unable to create roles using beeline or the Hue web UI.
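For context, this is the kind of statement that fails; the HiveServer2 host in the JDBC URL below is a placeholder for my actual host, while the principal and realm match what appears in the logs:

# Kerberized beeline session (kinit as the Sentry admin user first; host is a placeholder):
kinit myadminuser
beeline -u "jdbc:hive2://hs2-host.example.com:10000/default;principal=clouderahiveservice/_HOST@INHOUSE.OPERS.ORG"
# Any Sentry role DDL then fails with the error shown below:
0: jdbc:hive2://hs2-host.example.com:10000> CREATE ROLE test_role;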

 

For Hadoop groups, shell-based group mapping is used. I am getting the error below in the Sentry server log:

 

2018-11-30 20:29:57,410 DEBUG org.apache.thrift.transport.TSaslServerTransport: transport map does not contain key
2018-11-30 20:29:57,410 DEBUG org.apache.thrift.transport.TSaslTransport: opening transport org.apache.thrift.transport.TSaslServerTransport@47626899
2018-11-30 20:29:57,410 DEBUG org.apache.thrift.transport.TSaslTransport: SERVER: Received message with status START and payload length 6
2018-11-30 20:29:57,410 DEBUG org.apache.thrift.transport.TSaslServerTransport: Received start message with status START
2018-11-30 20:29:57,410 DEBUG org.apache.thrift.transport.TSaslServerTransport: Received mechanism name 'GSSAPI'
2018-11-30 20:29:57,411 DEBUG org.apache.thrift.transport.TSaslTransport: SERVER: Start message handled
2018-11-30 20:29:57,411 DEBUG org.apache.thrift.transport.TSaslTransport: SERVER: Received message with status OK and payload length 1810
2018-11-30 20:29:57,412 DEBUG org.apache.thrift.transport.TSaslTransport: SERVER: Writing message with status OK and payload length 104
2018-11-30 20:29:57,412 DEBUG org.apache.thrift.transport.TSaslTransport: SERVER: Received message with status OK and payload length 0
2018-11-30 20:29:57,412 DEBUG org.apache.thrift.transport.TSaslTransport: SERVER: Writing message with status OK and payload length 50
2018-11-30 20:29:57,413 DEBUG org.apache.thrift.transport.TSaslTransport: SERVER: Received message with status COMPLETE and payload length 50
2018-11-30 20:29:57,413 ERROR org.apache.thrift.transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: Problem with callback handler [Caused by org.apache.sentry.service.thrift.ConnectionDeniedException: Connection to sentry service denied due to lack of client credentials]
        at com.sun.security.sasl.gsskerb.GssKrb5Server.doHandshake2(GssKrb5Server.java:333)
        at com.sun.security.sasl.gsskerb.GssKrb5Server.evaluateResponse(GssKrb5Server.java:161)
        at org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java:539)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:283)
        at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.sentry.service.thrift.ConnectionDeniedException: Connection to sentry service denied due to lack of client credentials
        at org.apache.sentry.service.thrift.GSSCallback.handle(GSSCallback.java:103)
        at com.sun.security.sasl.gsskerb.GssKrb5Server.doHandshake2(GssKrb5Server.java:317)
        ... 9 more
2018-11-30 20:29:57,413 DEBUG org.apache.thrift.transport.TSaslTransport: SERVER: Writing message with status BAD and payload length 29
2018-11-30 20:29:57,414 DEBUG org.apache.thrift.transport.TSaslServerTransport: failed to open server transport
org.apache.thrift.transport.TTransportException: Problem with callback handler
        at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
        at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
2018-11-30 20:29:57,414 ERROR org.apache.thrift.server.TThreadPoolServer: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Problem with callback handler
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.thrift.transport.TTransportException: Problem with callback handler
        at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
        at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
        ... 4 more
2018-11-30 20:29:57,529 DEBUG org.apache.sentry.service.thrift.SentryStateBank: HMSFollower entered state STARTED
2018-11-30 20:29:57,529 DEBUG DataNucleus.Persistence: ExecutionContext "org.datanucleus.ExecutionContextThreadedImpl@2eee3069" opened for datastore "org.datanucleus.store.rdbms.RDBMSStoreManager@2da59753" with txn="org.datanucleus.TransactionImpl@21c4ce6a"
2018-11-30 20:29:57,529 DEBUG DataNucleus.Transaction: Transaction created [DataNucleus Transaction, ID=Xid=^@^@^\�, enlisted resources=[]]
2018-11-30 20:29:57,529 DEBUG DataNucleus.Transaction: Transaction begun for ExecutionContext org.datanucleus.ExecutionContextThreadedImpl@2eee3069 (optimistic=false)
2018-11-30 20:29:57,530 DEBUG DataNucleus.Connection: Connection "com.jolbox.bonecp.ConnectionHandle@76e16c62" opened with isolation level "repeatable-read" and auto-commit=false
2018-11-30 20:29:57,530 DEBUG DataNucleus.Transaction: Running enlist operation on resource: org.datanucleus.store.rdbms.ConnectionFactoryImpl$EmulatedXAResource@50086e7b, error code TMNOFLAGS and transaction: [DataNucleus Transaction, ID=Xid=^@^@^\�, enlisted resources=[]]
2018-11-30 20:29:57,530 DEBUG DataNucleus.Connection: Managed connection org.datanucleus.store.rdbms.ConnectionFactoryImpl$EmulatedXAResource@50086e7b is starting for transaction Xid=^@^@^\� with flags 0
2018-11-30 20:29:57,530 DEBUG DataNucleus.Connection: Connection added to the pool : org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl@794cbcd1 [conn=com.jolbox.bonecp.ConnectionHandle@76e16c62, commitOnRelease=false, closeOnRelease=false, closeOnTxnEnd=true] for key=org.datanucleus.ExecutionContextThreadedImpl@2eee3069 in factory=ConnectionFactory:tx[org.datanucleus.store.rdbms.ConnectionFactoryImpl@346a361]
2018-11-30 20:29:57,530 DEBUG DataNucleus.Datastore: Closing PreparedStatement "com.jolbox.bonecp.PreparedStatementHandle@222d3b9d"
2018-11-30 20:29:57,530 DEBUG DataNucleus.Datastore.Native: SELECT MAX("A0"."NOTIFICATION_ID") FROM "SENTRY_HMS_NOTIFICATION_ID" "A0"
2018-11-30 20:29:57,530 DEBUG DataNucleus.Datastore.Retrieve: Execution Time = 0 ms
2018-11-30 20:29:57,530 DEBUG DataNucleus.Transaction: Transaction committing for ExecutionContext org.datanucleus.ExecutionContextThreadedImpl@2eee3069
2018-11-30 20:29:57,530 DEBUG DataNucleus.Persistence: ExecutionContext.internalFlush() process started using ordered flush - 0 enlisted objects
2018-11-30 20:29:57,530 DEBUG DataNucleus.Persistence: ExecutionContext.internalFlush() process finished
2018-11-30 20:29:57,530 DEBUG DataNucleus.Persistence: Performing check of objects for "persistence-by-reachability" (commit) ...
2018-11-30 20:29:57,530 DEBUG DataNucleus.Persistence: Completed check of objects for "persistence-by-reachability" (commit).
2018-11-30 20:29:57,530 DEBUG DataNucleus.Transaction: Committing [DataNucleus Transaction, ID=Xid=^@^@^\�, enlisted resources=[org.datanucleus.store.rdbms.ConnectionFactoryImpl$EmulatedXAResource@50086e7b]]
2018-11-30 20:29:57,530 DEBUG DataNucleus.Connection: Managed connection org.datanucleus.store.rdbms.ConnectionFactoryImpl$EmulatedXAResource@50086e7b is committing for transaction Xid=^@^@^\� with onePhase=true
2018-11-30 20:29:57,531 DEBUG DataNucleus.Connection: Managed connection org.datanucleus.store.rdbms.ConnectionFactoryImpl$EmulatedXAResource@50086e7b committed connection for transaction Xid=^@^@^\� with onePhase=true
2018-11-30 20:29:57,531 DEBUG DataNucleus.Connection: Connection "com.jolbox.bonecp.ConnectionHandle@76e16c62" closed
2018-11-30 20:29:57,531 DEBUG DataNucleus.Connection: Connection removed from the pool : org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl@794cbcd1 [conn=com.jolbox.bonecp.ConnectionHandle@76e16c62, commitOnRelease=false, closeOnRelease=false, closeOnTxnEnd=true] for key=org.datanucleus.ExecutionContextThreadedImpl@2eee3069 in factory=ConnectionFactory:tx[org.datanucleus.store.rdbms.ConnectionFactoryImpl@346a361]
2018-11-30 20:29:57,531 DEBUG DataNucleus.Transaction: Transaction committed in 1 ms
2018-11-30 20:29:57,531 DEBUG DataNucleus.Cache: Level 1 Cache cleared
2018-11-30 20:29:57,531 DEBUG DataNucleus.Persistence: ExecutionContext "org.datanucleus.ExecutionContextThreadedImpl@2eee3069" closed
2018-11-30 20:29:57,531 DEBUG org.apache.sentry.provider.db.service.persistent.SentryStore: Retrieving Last Processed Notification ID 396
2018-11-30 20:29:57,531 DEBUG org.apache.sentry.service.thrift.HMSFollower: wakeUpWaitingClientsForSync: eventId = 396, hmsImageId = 0
2018-11-30 20:29:57,531 DEBUG DataNucleus.Persistence: ExecutionContext "org.datanucleus.ExecutionContextThreadedImpl@2eee3069" opened for datastore "org.datanucleus.store.rdbms.RDBMSStoreManager@2da59753" with txn="org.datanucleus.TransactionImpl@31c5f215"
2018-11-30 20:29:57,531 DEBUG DataNucleus.Transaction: Transaction created [DataNucleus Transaction, ID=Xid=^@^@^\�, enlisted resources=[]]
2018-11-30 20:29:57,531 DEBUG DataNucleus.Transaction: Transaction begun for ExecutionContext org.datanucleus.ExecutionContextThreadedImpl@2eee3069 (optimistic=false)
2018-11-30 20:29:57,531 DEBUG DataNucleus.Connection: Connection "com.jolbox.bonecp.ConnectionHandle@554e1b48" opened with isolation level "repeatable-read" and auto-commit=false
2018-11-30 20:29:57,531 DEBUG DataNucleus.Transaction: Running enlist operation on resource: org.datanucleus.store.rdbms.ConnectionFactoryImpl$EmulatedXAResource@730832a3, error code TMNOFLAGS and transaction: [DataNucleus Transaction, ID=Xid=^@^@^\�, enlisted resources=[]]
2018-11-30 20:29:57,531 DEBUG DataNucleus.Connection: Managed connection org.datanucleus.store.rdbms.ConnectionFactoryImpl$EmulatedXAResource@730832a3 is starting for transaction Xid=^@^@^\� with flags 0
2018-11-30 20:29:57,531 DEBUG DataNucleus.Connection: Connection added to the pool : org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl@6d2bab56 [conn=com.jolbox.bonecp.ConnectionHandle@554e1b48, commitOnRelease=false, closeOnRelease=false, closeOnTxnEnd=true] for key=org.datanucleus.ExecutionContextThreadedImpl@2eee3069 in factory=ConnectionFactory:tx[org.datanucleus.store.rdbms.ConnectionFactoryImpl@346a361]
 
======
 
Here is the log for HiveServer2:
 
2018-11-30 20:21:24,207 DEBUG org.apache.hive.service.server.HiveServer2: [main]: Setting hive.aux.jars.path=file:///opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/hive/auxlib/hive-exec-1.1.0-cdh5.14.2-c...-core.jar;
 
2018-11-30 20:21:24,207 INFO  org.apache.hive.service.server.HiveServer2: [main]: Starting HiveServer2
2018-11-30 20:21:24,307 WARN  org.apache.hadoop.hive.conf.HiveConf: [main]: HiveConf of name hive.server2.idle.session.timeout_check_operation does not exist
2018-11-30 20:21:24,307 WARN  org.apache.hadoop.hive.conf.HiveConf: [main]: HiveConf of name hive.sentry.conf.url does not exist
2018-11-30 20:21:24,307 WARN  org.apache.hadoop.hive.conf.HiveConf: [main]: HiveConf of name hive.entity.capture.input.URI does not exist
2018-11-30 20:21:24,986 DEBUG org.apache.hadoop.security.UserGroupInformation: [main]: hadoop login
2018-11-30 20:21:24,988 DEBUG org.apache.hadoop.security.UserGroupInformation: [main]: hadoop login commit
2018-11-30 20:21:24,988 DEBUG org.apache.hadoop.security.UserGroupInformation: [main]: using kerberos user:clouderahiveservice/xxx
2018-11-30 20:21:24,988 DEBUG org.apache.hadoop.security.UserGroupInformation: [main]: Using user: "clouderahiveservice/xxx" with name clouderahiveservice/xxx
2018-11-30 20:21:24,988 DEBUG org.apache.hadoop.security.UserGroupInformation: [main]: User entry: "clouderahiveservice/xxx"
2018-11-30 20:21:24,989 INFO  org.apache.hadoop.security.UserGroupInformation: [main]: Login successful for user clouderahiveservice/xxx using keytab file hive.keytab
2018-11-30 20:21:24,992 INFO  org.apache.hive.service.cli.CLIService: [main]: SPNego httpUGI not created, spNegoPrincipal: , ketabFile:
2018-11-30 20:21:25,004 DEBUG org.apache.hadoop.util.NativeCodeLoader: [Timer-0]: Trying to load the custom-built native-hadoop library...
2018-11-30 20:21:25,005 DEBUG org.apache.hadoop.util.NativeCodeLoader: [Timer-0]: Loaded the native-hadoop library
2018-11-30 20:21:25,048 DEBUG org.apache.hadoop.io.nativeio.NativeIO: [Timer-0]: Initialized cache for IDs to User/Group mapping with a  cache timeout of 14400 seconds.
2018-11-30 20:21:25,198 DEBUG org.apache.hadoop.hdfs.BlockReaderLocal: [main]: dfs.client.use.legacy.blockreader.local = false
2018-11-30 20:21:25,198 DEBUG org.apache.hadoop.hdfs.BlockReaderLocal: [main]: dfs.client.read.shortcircuit = false
2018-11-30 20:21:25,198 DEBUG org.apache.hadoop.hdfs.BlockReaderLocal: [main]: dfs.client.domain.socket.data.traffic = false
2018-11-30 20:21:25,198 DEBUG org.apache.hadoop.hdfs.BlockReaderLocal: [main]: dfs.domain.socket.path = /var/run/hdfs-sockets/dn
2018-11-30 20:21:25,214 DEBUG org.apache.hadoop.hdfs.DFSClient: [main]: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
2018-11-30 20:21:25,243 DEBUG org.apache.hadoop.io.retry.RetryUtils: [main]: multipleLinearRandomRetry = null
2018-11-30 20:21:25,275 DEBUG org.apache.hadoop.ipc.Server: [main]: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@39e67516
2018-11-30 20:21:25,281 DEBUG org.apache.hadoop.ipc.Client: [main]: getting client out of cache: org.apache.hadoop.ipc.Client@26f7cdf8
2018-11-30 20:21:25,633 DEBUG org.apache.hadoop.net.unix.DomainSocketWatcher: [Thread-7]: org.apache.hadoop.net.unix.DomainSocketWatcher$2@53e1893f: starting with interruptCheckPeriodMs = 60000
2018-11-30 20:21:25,649 DEBUG org.apache.hadoop.util.PerformanceAdvisory: [main]: Both short-circuit local reads and UNIX domain socket are disabled.
2018-11-30 20:21:25,657 DEBUG org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil: [main]: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
2018-11-30 20:21:25,679 DEBUG org.apache.hadoop.ipc.Client: [main]: The ping interval is 60000 ms.
2018-11-30 20:21:25,679 DEBUG org.apache.hadoop.ipc.Client: [main]: Connecting to xxx
2018-11-30 20:21:25,693 DEBUG org.apache.hadoop.security.UserGroupInformation: [main]: PrivilegedAction as:clouderahiveservice/xxx (auth:KERBEROS) from:org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:756)
2018-11-30 20:21:25,760 DEBUG org.apache.hadoop.security.SaslRpcClient: [main]: Sending sasl message state: NEGOTIATE
 
2018-11-30 20:21:25,770 DEBUG org.apache.hadoop.security.SaslRpcClient: [main]: Get token info proto:interface org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB info:@org.apache.hadoop.security.token.TokenInfo(value=class org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSelector)
2018-11-30 20:21:25,777 DEBUG org.apache.hadoop.security.SaslRpcClient: [main]: Get kerberos info proto:interface org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB info:@org.apache.hadoop.security.KerberosInfo(clientPrincipal=, serverPrincipal=dfs.namenode.kerberos.principal)
2018-11-30 20:21:25,778 DEBUG org.apache.hadoop.security.SaslRpcClient: [main]: RPC Server's Kerberos principal name for protocol=org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB is clouderahdfsservice/xxx
2018-11-30 20:21:25,778 DEBUG org.apache.hadoop.security.SaslRpcClient: [main]: Creating SASL GSSAPI(KERBEROS)  client to authenticate to service at xxx
2018-11-30 20:21:25,783 DEBUG org.apache.hadoop.security.SaslRpcClient: [main]: Use KERBEROS authentication for protocol ClientNamenodeProtocolPB
2018-11-30 20:21:25,852 DEBUG org.apache.hadoop.security.SaslRpcClient: [main]: Sending sasl message state: INITIATE
token: "`\202\a\017\006\t*\206H\206\367\022\001\002\002\001\000n\202\006\3760\202\006\372\240\003\002\001\005\241\003\002\001\016\242\a\003\005\000 \000\000\000\243\202\005\322a\202\005\3160\202\005\312\240\003\002\001\005\241\023\033\021INHOUSE.OPERS.ORG\242]0[\240\003\002\001\000\241T0R\033\023clouderahdfsservice\033;xxx\243\202\005M0\202\005I\240\003\002\001\027\241\003\002\001\001\242\202\005;\004\202\0057q\226/f\021^`\"s\033<\277\002\037\353Us\301\262S|\240\n+x(n~\271\227\302\226\305{\343\215\273\0008x\265\023J\343\026\302\237\246\'MJ\273\253\027a\353\37346I\326\304\244T\2313\315h\366\341H\023\261X\250\272\364\n\323\236\252\037\026F9\347k\275\351\346\302f\377\'\347\356?uI~\345\3231u\240\216\364X\220#Y6\242sG\245\220\230\357\261\3767B!\tC\n\205\205\311\246}#Ch\323/\212\227j\277?\200\234C\030\216\034y|\024I$\037Md\376\276t\334>Z@\325\232\202\323\345\01693\001p\310\0027\302\035\265\305F\266\252EM\253\375\326\305|-\317\303\310EO\241\374\306\250\302v\221\214\212\227H\005\3704\003\351\022\230\026\351\303\375\"l\026\032>\312!\211\311\344\232\352P\027\363\n,?\357\275\340<^?\306n8\273S\254\f\a4f\302\t\342M\001K\330k\250\361\021l\210#_\266Y\322\022\031r\370\b\025F\201q\036qr%\207:\204\346\210\f\257\027\372\365(^?2H\0242\022;q\016\335\354\236\02140\020U7\\;\306\201&!\375\311\234\301\210:\220\342\305\234\005j\3330\200\220\232\270\252T^\027\273\317hw\235\272\216\3755)4\324j\032E\345\305L\232XW\303\b\237\343sV\334\303\221J\202JQ\031\343\211\343\3352%\236\'\314\n\216\023\004\250^?\003\265\210M\304d\216,\361\373\212W03\025\370\302\203\020)\211@\361P\020\336\342\"\2422\325k4=?tp\225\270Gg6ch\306t\320\b\031J\357s\276\002\017\201\206\2029\346\272c\365S\250]\232\347\343\241\361q\304:\327\342\267\256\006P!k\255}\320\025cV\031Q\352D\204r\v\220\347\245d\215\330&|q\267 0\026\221\251\350hJ\310\361\347\237*\002\253\312\232\004\351\v\261q\307\341G%?\000\256\375<jQa\217$\331v(\213N*\304\314;\264\317\\`t\316\204!06\\\340\033\342\222\362.\\\224\r\222YZn\305\313U\204\340\023\307\034[\234,g:\2522\244\342/\331\271S\362\0270 \346g.\254\314J\351%\005t\a\023W\366/|\0067\237\255~\226(\022\'\321\236b\306c\204{[\366\277\030y2\336\264mK\251\025h\020F\377\f\254\256\00529\362E>r\203\313\034N\300<\313\205\325\323\022\300\217\312Tx\337\f\276\016\302\035\\\022\222\227\023+?\346=\340p\305\006\221^?R\363deM\337\020\356\363~\375\b\362\342\277\000<-^\207\267\"\033Nt\307\244tCU\021K\310q\272Z\222\355\327\334+\367\037\0050\254$\276\311&)\211h\244\304F\274\202^?@n\360^?N\343\377 KSu\372t\026\276\304x\204W%\372?\037\234\200p^?\270\027\354\221\230O\351\305\335a09P\333\233\205\017\023\257\326\271\032\330\210\254\207\204:\031\006\023\264+\326\261T\271\252l\345\'Y\314F\325\024\323c\221\2045l\312\206wp\353\336\027Q\241\341\274\214>^?R\316\0275\313\225Y\031\004\003\377\371\336v`fD\022\020\225\b\326F\253\301Z\357\v\246\262\0203\311\314>\363\301\364\330\314\323H^?\"\024?\265k\244\252\2774^?\205\245;@D;9\333\303\246L\000\346\216\271\0306\265\b\226\211\371\033\033\r\350rs\344\0031T2\254P\216\332\225\034\325\253; 
;W{\312\245\314\227G\373\233Q\355\f\272\370\245\357\215\257\227\265\001P\277L\330\304\034\237^?\fN\316\333gT\322\224@T\305\224\255,\253\272\024\314\211S\353\025\006\312]r\3623\020\372\345\003\306\027\330\273\031+\341\370\031\305\"\215\332\\\365\260|\334Sv0t\224\252}\353BQA\022\262=\207sd\244\021\336B\275\274\026\250\234A0A\0173\341\262\323\375\236\005\316&4\356\b\312\377:\n\335\312\222Y\301\020\005\366\256\334\277\024\203Q\223\244\250*\356}\222lV\243\0358c\370)*\327\274\222}\276\bK\'\300\021\255\250\337^?\350#\231S\361\220\306\262u#\025h\336\374\017O\256\353\315\346\344\006\n(\212V`\026\371\231\323%H\361\362\230\325\355~U\226\004\376_\344m\344\024GS-O#IzZ\265^?,<\212\244\3423(\317\v\300\022\024\211\\\354\344\334\bK\021\303\304\314\341k\372=\\\\qit(0\027\031\206-\000\250\036\000\362\227\322\202\f\305u\360!*\275\374\212S\271nF\225\002)\200\241\024E_\376E\364\201\016\344o\n\357\022\222\"\323\003\257\315\361\275\300\340z\305\305\345\r\024\277\'\347\2407\354\343\357\305!F\277\213\nP\2515\025\277s\v\000O>.\205a\266\220\023\367|k/:\315\001\022\3736\211\311\235\233\037\321\264\323\237\267\000\"}\024\f\374(\004<\022\301\370\000\f\250k\342\327\235\212\021\332\ah4`\005\252\322\361\v\003-\244\202\001\r0\202\001\t\240\003\002\001\027\242\202\001\000\004\201\375!@\351\215\333\206\033\376~g\213X\312\224\355\221\251\020.\370\360\005\371\326H\270\311\325\305\223\v\006\251\002\212\022F\302\001\246\257\320\254\262\300\3760\335\024\336\017\t\273r\aB\313\220\341@\340~\033@Z\305\271\345\024\326Z\001E/\226\236\320\335\322\3353\252\356\337\364x\302{XI\324\rjX\201\004\342C.\357y\203^?\2752\367o;\200\366\307\233l;n\265\316\344\362\322V-\347\264[\357\226I\236\362(\312\360\241W\230#\037\277\210\367\210\226O\334\221\245\316\340\332&\324\246}\003/a=*\027\314\t\3518k!\360\n\v\335\333\346\375\211g\3663u\361v\371\321\273U\217\334g\303\346\202Y\033`\235L#\355\364[Q[\254\\J\357\354i\266w\306\317Q\335j\353\272\216\270\336\364\362t\200\3063\331\310\031\314Q\a8\207\211\340\221\027\253\317\350>*\300\336\025?\206\216\237\026\352D\322"
auths {
  method: "KERBEROS"
  mechanism: "GSSAPI"
  protocol: "clouderahdfsservice"
  serverId: "xxx"
}
 
2018-11-30 20:21:25,856 DEBUG org.apache.hadoop.security.SaslRpcClient: [main]: Sending sasl message state: RESPONSE
token: ""
 
2018-11-30 20:21:25,858 DEBUG org.apache.hadoop.security.SaslRpcClient: [main]: Sending sasl message state: RESPONSE
token: "`0\006\t*\206H\206\367\022\001\002\002\002\001\021\000\377\377\377\377\221y\002%\20054\027\350\255t\317\aL\202\216\323\244\255+J\363\204%\001\001\000\000\001"
 
2018-11-30 20:21:25,858 DEBUG org.apache.hadoop.ipc.Client: [main]: Negotiated QOP is :auth
2018-11-30 20:21:25,864 DEBUG org.apache.hadoop.ipc.Client: [IPC Client (1045397707) connection to xxx from clouderahiveservice/xxx]: IPC Client (1045397707) connection to xxx from clouderahiveservice/xxx: starting, having connections 1
2018-11-30 20:21:25,867 DEBUG org.apache.hadoop.ipc.Client: [IPC Parameter Sending Thread #0]: IPC Client (1045397707) connection to xxx from clouderahiveservice/xxx sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
:
 
2018-11-30 20:21:25,932 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: [main]: Call: getFileInfo took 2ms
2018-11-30 20:21:25,932 DEBUG org.apache.hadoop.hdfs.DFSClient: [main]: /tmp/hive/clouderahiveservice/11560190-1a19-4c18-bb97-e0525e8987fa/_tmp_space.db: masked={ masked: rwx------, unmasked: rwx------ }
2018-11-30 20:21:25,932 DEBUG org.apache.hadoop.ipc.Client: [IPC Parameter Sending Thread #0]: IPC Client (1045397707) connection to xxx from clouderahiveservice/xxx sending #7 org.apache.hadoop.hdfs.protocol.ClientProtocol.mkdirs
2018-11-30 20:21:25,938 DEBUG org.apache.hadoop.ipc.Client: [IPC Client (1045397707) connection to xxx from clouderahiveservice/xxx]: IPC Client (1045397707) connection to xxx from clouderahiveservice/xxx got value #7
2018-11-30 20:21:25,939 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: [main]: Call: mkdirs took 7ms
 
2018-11-30 20:21:28,627 WARN  hive.metastore: [main]: Failed to connect to the MetaStore Server...
org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
        at org.apache.thrift.transport.TSocket.open(TSocket.java:226)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:266)
        at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:464)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:244)
        at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1560)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:67)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:82)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3411)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3430)
        at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3655)
        at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:231)
        at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:215)
        at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:338)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:299)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:274)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:256)
        at org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider.init(DefaultHiveAuthorizationProvider.java:29)
        at org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProviderBase.setConf(HiveAuthorizationProviderBase.java:112)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
        at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:388)
        at org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:817)
        at org.apache.hadoop.hive.ql.session.SessionState.getAuthorizationMode(SessionState.java:1686)
        at org.apache.hadoop.hive.ql.session.SessionState.isAuthorizationModeV2(SessionState.java:1697)
        at org.apache.hadoop.hive.ql.session.SessionState.applyAuthorizationPolicy(SessionState.java:1745)
        at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:125)
        at org.apache.hive.service.cli.CLIService.init(CLIService.java:111)
        at org.apache.hive.service.CompositeService.init(CompositeService.java:59)
        at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:125)
        at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:542)
        at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:89)
        at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:793)
        at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:666)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at org.apache.thrift.transport.TSocket.open(TSocket.java:221)
        ... 50 more
2018-11-30 20:21:28,630 INFO  hive.metastore: [main]: Waiting 1 seconds before next connection attempt.
2018-11-30 20:21:29,634 WARN  hive.ql.metadata.Hive: [main]: Failed to register all functions.
java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1562)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:67)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:82)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3411)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3430)
        at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3655)
        at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:231)
        at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:215)
        at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:338)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:299)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:274)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:256)
        at org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider.init(DefaultHiveAuthorizationProvider.java:29)
        at org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProviderBase.setConf(HiveAuthorizationProviderBase.java:112)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
        at org.apache.hadoop.hive.ql.metadata.HiveUtils.getAuthorizeProviderManager(HiveUtils.java:388)
        at org.apache.hadoop.hive.ql.session.SessionState.setupAuth(SessionState.java:817)
        at org.apache.hadoop.hive.ql.session.SessionState.getAuthorizationMode(SessionState.java:1686)
        at org.apache.hadoop.hive.ql.session.SessionState.isAuthorizationModeV2(SessionState.java:1697)
        at org.apache.hadoop.hive.ql.session.SessionState.applyAuthorizationPolicy(SessionState.java:1745)
        at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:125)
        at org.apache.hive.service.cli.CLIService.init(CLIService.java:111)
        at org.apache.hive.service.CompositeService.init(CompositeService.java:59)
        at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:125)
        at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:542)
        at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:89)
        at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:793)
        at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:666)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
 
 
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
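As an aside, the "Connection refused" above suggests HiveServer2 could not reach the Hive Metastore during startup. A quick port check I can run (host is a placeholder; 9083 is the default metastore port):

# Check that the Hive Metastore is listening (9083 is the CDH default port):
nc -vz metastore-host.example.com 9083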
 
The user I am using to create roles is a Sentry admin user and is also defined in "sentry.connection.allowed.users".
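Since shell-based group mapping is used, my understanding is that Sentry resolves groups from the OS on the Sentry server host, so the admin user's groups there should include a Sentry admin group. A quick way to compare the two resolutions (username below is a placeholder):

# Groups as the OS on the Sentry server host resolves them (shell-based mapping):
id -Gn myadminuser
# Groups as Hadoop's configured mapping resolves them, for comparison:
hdfs groups myadminuser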
 
Can someone point me in the right direction? What could be the problem here? Any help is highly appreciated.

Re: Unable to create roles in either beeline or Hue.

Community Manager

Please ensure that the hive, impala, hue, solr, kafka, and hbase groups have not been removed from sentry.service.admin.group, and that the hive, impala, hue, hdfs, solr, kafka, and hbase users have not been removed from sentry.service.allow.connect. This is often the cause of the "Connection to sentry service denied due to lack of client credentials" exception. Please see the following documentation:

 

https://www.cloudera.com/documentation/enterprise/latest/topics/hue_sec_sentry_auth.html#hue_sec_sen...
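To double-check the effective values on the Sentry server host, something like the following can help; the process directory path is illustrative of a Cloudera Manager-managed deployment, and the values in the comments mirror the lists above:

# Expected: sentry.service.admin.group   = hive,impala,hue,solr,kafka,hbase
#           sentry.service.allow.connect = hive,impala,hue,hdfs,solr,kafka,hbase
grep -A1 -e 'sentry.service.admin.group' -e 'sentry.service.allow.connect' \
    /var/run/cloudera-scm-agent/process/*-sentry-SENTRY_SERVER/sentry-site.xml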

 


Robert Justice, Technical Resolution Manager

