Support Questions


BEELINE: can CREATE DB/TABLE but cannot INSERT/SELECT

New Contributor

Hello there,
It is possible to create and drop databases and tables via the Hive shell, but not to read or write table contents.

I have a small (pre-production) cluster and I'm completely stuck on this. I can run INSERT operations through PyHive, but I cannot even verify the content, because a SELECT shows me only the table's schema (at best!).

And I cannot manipulate the database manually from Beeline.

How to replicate the error:
On de-fra-hadmaster01(02) (my master nodes):
- switch to user hive
- get a Kerberos ticket using the hive service keytab:

$ kinit -kt /etc/security/keytabs/hive.service.keytab hive/de-fra-hadmaster01.MYDOMAIN.net@MYDOMAIN.NET
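Before connecting, it is worth confirming that the keytab and the resulting ticket are actually valid. A minimal sanity-check sketch (keytab path and realm as in my setup; adjust for yours):

```shell
# List the principals stored in the hive service keytab.
klist -kt /etc/security/keytabs/hive.service.keytab

# Show the current credential cache; a valid, non-expired
# krbtgt/MYDOMAIN.NET ticket should appear after kinit.
klist
```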


- CONNECT TO THE HIVE DATABASE (OK):

$ beeline -u "jdbc:hive2://de-fra-hadmaster02.mydomain.net:2181,de-fra-hadmaster01.mydomain.net:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/de-fra-hadmaster01.mydomain.net@mydomain.NET;auth-kerberos" -n hive@mydomain.NET

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.0.0-1634/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.0.0-1634/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://de-fra-hadmaster02.mydomain.net:2181,de-fra-hadmaster01.mydomain.net:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/de-fra-hadmaster01.mydomain.net@mydomain.NET;auth-kerberos
18/12/13 10:25:58 [main]: INFO jdbc.HiveConnection: Connected to de-fra-hadmaster01.mydomain.net:10000
Connected to: Apache Hive (version 3.1.0.3.0.0.0-1634)
Driver: Hive JDBC (version 3.1.0.3.0.0.0-1634)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 3.1.0.3.0.0.0-1634 by Apache Hive
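As a debugging variant (a sketch, not from the original post), connecting straight to one HiveServer2 instance on port 10000 bypasses ZooKeeper service discovery and helps rule the discovery layer in or out:

```shell
# Connect directly to a single HiveServer2, skipping ZooKeeper discovery.
# Hostname and principal are from this thread; adjust for your cluster.
beeline -u "jdbc:hive2://de-fra-hadmaster01.mydomain.net:10000/;principal=hive/de-fra-hadmaster01.mydomain.net@MYDOMAIN.NET"
```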

- CREATE A NEW DATABASE (OK):

0: jdbc:hive2://de-fra-hadmaster01.mydoma> CREATE DATABASE test;

INFO : Compiling command(queryId=hive_20181209200957_42aa6da5-da1f-4d25-b6bb-b2daff147d9a): CREATE DATABASE test
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Completed compiling command(queryId=hive_20181209200957_42aa6da5-da1f-4d25-b6bb-b2daff147d9a); Time taken: 0.021 seconds
INFO : Executing command(queryId=hive_20181209200957_42aa6da5-da1f-4d25-b6bb-b2daff147d9a): CREATE DATABASE test
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Completed executing command(queryId=hive_20181209200957_42aa6da5-da1f-4d25-b6bb-b2daff147d9a); Time taken: 2.093 seconds
INFO : OK
No rows affected (2.697 seconds)


- CREATE A NEW TABLE test1 IN THE test DB (OK):

CREATE TABLE test.test1 (id int, name string);

INFO : Compiling command(queryId=hive_20181209201114_bddfa871-10ea-4ebf-8abd-d73db9ea5f18): CREATE TABLE test.test1 (id int, name string)
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Completed compiling command(queryId=hive_20181209201114_bddfa871-10ea-4ebf-8abd-d73db9ea5f18); Time taken: 0.039 seconds
INFO : Executing command(queryId=hive_20181209201114_bddfa871-10ea-4ebf-8abd-d73db9ea5f18): CREATE TABLE test.test1 (id int, name string)
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Completed executing command(queryId=hive_20181209201114_bddfa871-10ea-4ebf-8abd-d73db9ea5f18); Time taken: 2.17 seconds
INFO : OK
No rows affected (2.603 seconds)

- VERIFY TABLE CREATION (OK):

SELECT * FROM test.test1;

INFO : Compiling command(queryId=hive_20181209203116_860e9354-a70d-4726-a6df-27ae58f4d07f): SELECT * FROM test.test1
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:test1.id, type:int, comment:null), FieldSchema(name:test1.name, type:string, comment:null)], properties:null)
INFO : Completed compiling command(queryId=hive_20181209203116_860e9354-a70d-4726-a6df-27ae58f4d07f); Time taken: 0.256 seconds
INFO : Executing command(queryId=hive_20181209203116_860e9354-a70d-4726-a6df-27ae58f4d07f): SELECT * FROM test.test1
INFO : Completed executing command(queryId=hive_20181209203116_860e9354-a70d-4726-a6df-27ae58f4d07f); Time taken: 0.001 seconds
INFO : OK
+-----------+-------------+
| test1.id | test1.name |
+-----------+-------------+
+-----------+-------------+
No rows selected (0.397 seconds)


- POPULATING TABLE test1 IN THE test DB FAILS WITH A TEZ ERROR (just like any SELECT on a table that was populated through the Python script):

INSERT INTO test.test1 VALUES (1, 'sigmund');
INFO : Compiling command(queryId=hive_20181213104023_9ab64bdc-dbad-4d1c-b0b7-e82985c4ba10): INSERT INTO test.test1 VALUES (1, 'sigmund')

INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:col1, type:int, comment:null), FieldSchema(name:col2, type:string, comment:null)], properties:null)
INFO : Completed compiling command(queryId=hive_20181213104023_9ab64bdc-dbad-4d1c-b0b7-e82985c4ba10); Time taken: 0.353 seconds
INFO : Executing command(queryId=hive_20181213104023_9ab64bdc-dbad-4d1c-b0b7-e82985c4ba10): INSERT INTO test.test1 VALUES (1, 'sigmund')
INFO : Query ID = hive_20181213104023_9ab64bdc-dbad-4d1c-b0b7-e82985c4ba10
INFO : Total jobs = 1
INFO : Launching Job 1 out of 1
INFO : Starting task [Stage-1:MAPRED] in serial mode
WARN : The session: sessionId=73797b43-a07e-4ec5-b2c8-9eda005124fd, queueName=null, user=hive, doAs=true, isOpen=false, isDefault=false has not been opened
INFO : Subscribed to counters: [] for queryId: hive_20181213104023_9ab64bdc-dbad-4d1c-b0b7-e82985c4ba10
INFO : Tez session hasn't been created yet. Opening session
ERROR : Failed to execute tez graph.
java.io.FileNotFoundException: DestHost:destPort de-fra-hadmaster01.mydomain.net:9292 , LocalHost:localPort null:0. Failed on local exception: java.io.FileNotFoundException: Error while authenticating with endpoint: http://de-fra-hadmaster01.mydomain.net:9292/kms/v1/?op=GETDELEGATIONTOKEN&doAs=hive&renewer=rm%2Fde-...
at sun.reflect.GeneratedConstructorAccessor52.newInstance(Unknown Source) ~[?:?]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_112]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_112]
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:149) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348) ~[hadoop-auth-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:321) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:193) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:384) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.crypto.key.kms.KMSClientProvider$4.run(KMSClientProvider.java:1043) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.crypto.key.kms.KMSClientProvider$4.run(KMSClientProvider.java:1037) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:1037) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$1.call(LoadBalancingKMSClientProvider.java:193) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$1.call(LoadBalancingKMSClientProvider.java:190) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:123) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.addDelegationTokens(LoadBalancingKMSClientProvider.java:190) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:110) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.hdfs.HdfsKMSUtil.addDelegationTokensForKeyProvider(HdfsKMSUtil.java:84) ~[hadoop-hdfs-client-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2821) ~[hadoop-hdfs-client-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.tez.common.security.TokenCache.obtainTokensForFileSystemsInternal(TokenCache.java:121) ~[tez-api-0.9.1.3.0.0.0-1634.jar:0.9.1.3.0.0.0-1634]
at org.apache.tez.common.security.TokenCache.obtainTokensForFileSystemsInternal(TokenCache.java:100) ~[tez-api-0.9.1.3.0.0.0-1634.jar:0.9.1.3.0.0.0-1634]
at org.apache.tez.common.security.TokenCache.obtainTokensForFileSystems(TokenCache.java:76) ~[tez-api-0.9.1.3.0.0.0-1634.jar:0.9.1.3.0.0.0-1634]
at org.apache.tez.client.TezClientUtils.addLocalResources(TezClientUtils.java:305) ~[tez-api-0.9.1.3.0.0.0-1634.jar:0.9.1.3.0.0.0-1634]
at org.apache.tez.client.TezClientUtils.setupTezJarsLocalResources(TezClientUtils.java:184) ~[tez-api-0.9.1.3.0.0.0-1634.jar:0.9.1.3.0.0.0-1634]
at org.apache.tez.client.TezClient.getTezJarResources(TezClient.java:1156) ~[tez-api-0.9.1.3.0.0.0-1634.jar:0.9.1.3.0.0.0-1634]
at org.apache.tez.client.TezClient.setupApplicationContext(TezClient.java:473) ~[tez-api-0.9.1.3.0.0.0-1634.jar:0.9.1.3.0.0.0-1634]
at org.apache.tez.client.TezClient.start(TezClient.java:401) ~[tez-api-0.9.1.3.0.0.0-1634.jar:0.9.1.3.0.0.0-1634]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.startSessionAndContainers(TezSessionState.java:516) ~[hive-exec-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:451) ~[hive-exec-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolSession.openInternal(TezSessionPoolSession.java:124) ~[hive-exec-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:373) ~[hive-exec-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
at org.apache.hadoop.hive.ql.exec.tez.TezTask.ensureSessionHasResources(TezTask.java:368) ~[hive-exec-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:195) ~[hive-exec-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) ~[hive-exec-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) ~[hive-exec-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2668) ~[hive-exec-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2339) ~[hive-exec-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2015) ~[hive-exec-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1713) ~[hive-exec-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1707) ~[hive-exec-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) ~[hive-exec-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:224) ~[hive-service-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87) ~[hive-service-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:316) ~[hive-service-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112]
at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:329) ~[hive-service-3.1.0.3.0.0.0-1634.jar:3.1.0.3.0.0.0-1634]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_112]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_112]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
Caused by: java.io.FileNotFoundException: Error while authenticating with endpoint: http://de-fra-hadmaster01.mydomain.net:9292/kms/v1/?op=GETDELEGATIONTOKEN&doAs=hive&renewer=rm%2Fde-...
at sun.reflect.GeneratedConstructorAccessor52.newInstance(Unknown Source) ~[?:?]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_112]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_112]
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.wrapExceptionWithMessage(KerberosAuthenticator.java:232) ~[hadoop-auth-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:216) ~[hadoop-auth-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:147) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
... 53 more
Caused by: java.io.FileNotFoundException: DestHost:destPort de-fra-hadmaster01.mydomain.net:9292 , LocalHost:localPort null:0. Failed on local exception: java.io.FileNotFoundException: http://de-fra-hadmaster01.mydomain.net:9292/kms/v1/?op=GETDELEGATIONTOKEN&doAs=hive&renewer=rm%2Fde-...
at sun.reflect.GeneratedConstructorAccessor52.newInstance(Unknown Source) ~[?:?]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_112]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_112]
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:149) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:213) ~[hadoop-auth-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:147) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
... 53 more
Caused by: java.io.FileNotFoundException: http://de-fra-hadmaster01.mydomain.net:9292/kms/v1/?op=GETDELEGATIONTOKEN&doAs=hive&renewer=rm%2Fde-...
at org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:394) ~[hadoop-auth-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:74) ~[hadoop-auth-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:147) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:213) ~[hadoop-auth-3.1.0.3.0.0.0-1634.jar:?]
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:147) ~[hadoop-common-3.1.0.3.0.0.0-1634.jar:?]
... 53 more
ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
INFO : Completed executing command(queryId=hive_20181213104023_9ab64bdc-dbad-4d1c-b0b7-e82985c4ba10); Time taken: 0.194 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask (state=08S01,code=1)


... which is the same error seen during any such SELECT, and which is in fact a Kerberos token-delegation error:
Caused by: java.io.FileNotFoundException: Error while authenticating with endpoint: http://de-fra-hadmaster01.MYDOMAIN.net:9292/kms/v1/?op=GETDELEGATIONTOKEN&renewer=rm%2Fde-fra-hadmas...
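One way to probe that KMS endpoint directly is with SPNEGO-authenticated curl (a sketch; the hostname is from this thread, and `curl --negotiate` needs a valid Kerberos ticket in the cache):

```shell
# Hit the Ranger KMS REST API with Kerberos/SPNEGO auth.
# A healthy setup should return HTTP 200 and a JSON list of key names;
# an auth error here points at the KMS/Kerberos side rather than Hive.
curl -i --negotiate -u : "http://de-fra-hadmaster01.mydomain.net:9292/kms/v1/keys/names"
```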

This may be related to Ranger KMS, which (as far as I understand) should take care of the Kerberos token delegation, and in which every proxy host/user/group has been set to "*". The rules in Ranger are all permissive.
I've spent several hours without being able to figure out the root cause of this issue.
Things I have already tried:

- Ranger plugin enabled -> disabled

- various JDBC URIs

- launching beeline without parameters, letting it pick up /etc/hive/conf/beeline-site.xml
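Another way to isolate whether the problem is KMS rather than Hive/Tez is the stock `hadoop key` CLI (a sketch; the provider URI matches the one in my configuration, adjust for your cluster):

```shell
# Ask the KMS for its key list as the same (kinit'ed) hive user.
# If this fails with the same authentication error as the INSERT,
# the fault lies between the client and Ranger KMS, not in Hive or Tez.
hadoop key list -provider kms://http@de-fra-hadmaster01.mydomain.net:9292/kms
```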

Any help would be HIGHLY appreciated.
Thank you.

2 REPLIES

Expert Contributor

Hi @Sigmund Broele,

Could you please check whether the properties below are set properly?

In "Advanced core-site":
hadoop.security.key.provider.path

In "Advanced hdfs-site":
dfs.encryption.key.provider.uri

In "Ranger KMS Service > Custom kms-site":
hadoop.kms.proxyuser.hive.hosts=*
hadoop.kms.proxyuser.hive.groups=*
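To double-check what the KMS process actually sees on disk (a sketch; this config path is the usual HDP default and may differ on your install):

```shell
# Show the effective proxyuser entries in the Ranger KMS config file.
grep -A1 "hadoop.kms.proxyuser" /etc/ranger/kms/conf/kms-site.xml
```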

New Contributor

Hi Sampath, and thanks for your time.

I already have the following set:

hadoop.security.key.provider.path=kms://http@de-fra-hadmaster01.mydomain.net:9292/kms

dfs.encryption.key.provider.uri=kms://http@de-fra-hadmaster01.mydomain.net:9292/kms

hadoop.kms.proxyuser.hive.hosts=*

hadoop.kms.proxyuser.hive.users=*

so in fact I only added hadoop.kms.proxyuser.hive.groups and set it to * as you suggested.

I restarted Ranger KMS and all Hive services, but nothing changed. 😞
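In case anyone hits the same wall: the KMS side of the handshake is logged by Ranger KMS itself, so tailing its logs while re-running the failing INSERT may show why the GETDELEGATIONTOKEN call is rejected (a sketch; the log directory is the usual HDP default and may differ):

```shell
# Watch the Ranger KMS logs while reproducing the failing INSERT
# from another terminal; authentication rejections show up here.
tail -f /var/log/ranger/kms/*.log
```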