New Contributor
Posts: 3
Registered: ‎10-26-2015
Accepted Solution

Exercise 3: Could not open client transport with JDBC

I have the original_access_logs in the correct directory:

 

[cloudera@quickstart ~]$ hadoop fs -ls /user/hive/warehouse
Found 7 items
drwxr-xr-x   - cloudera supergroup          0 2015-10-26 10:12 /user/hive/warehouse/categories
drwxr-xr-x   - cloudera supergroup          0 2015-10-26 10:12 /user/hive/warehouse/customers
drwxr-xr-x   - cloudera supergroup          0 2015-10-26 10:13 /user/hive/warehouse/departments
drwxr-xr-x   - cloudera supergroup          0 2015-10-26 10:13 /user/hive/warehouse/order_items
drwxr-xr-x   - cloudera supergroup          0 2015-10-26 10:13 /user/hive/warehouse/orders
drwxr-xr-x   - hdfs     supergroup          0 2015-10-26 12:36 /user/hive/warehouse/original_access_logs
drwxr-xr-x   - cloudera supergroup          0 2015-10-26 10:14 /user/hive/warehouse/products

But when I run this command:

 

[cloudera@quickstart ~]$ beeline -u jdbc:hive2://quickstart:10000/default -n admin -d org.apache.hive.jdbc.HiveDriver

I get the error:

 

Connecting to jdbc:hive2://quickstart:10000/default
Error: Could not open client transport with JDBC Uri: jdbc:hive2://quickstart:10000/default: java.net.ConnectException: Connection refused (state=08S01,code=0)
Beeline version 1.1.0-cdh5.4.2 by Apache Hive
0: jdbc:hive2://quickstart:10000/default (closed)> 

Could someone please help me?

Cloudera Employee
Posts: 435
Registered: ‎07-12-2013

Re: Exercise 3: Could not open client transport with JDBC

Check that Hive Server 2 is running: 'sudo service hive-server2 status'. If
it's not, restart it with 'sudo service hive-server2 restart'. If you
continue having issues, have a look at the Hive Server 2 logs in /var/log/hive
for any errors.
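
The checks above can be sketched as a short shell session. These are the CDH 5 quickstart defaults (service name, port 10000, log path); they may differ on other setups:

```shell
# Is HiveServer2 running? (CDH 5 init-script name; may differ elsewhere)
sudo service hive-server2 status

# If not, restart it
sudo service hive-server2 restart

# Confirm something is now listening on the Thrift port (10000 by default);
# "Connection refused" from Beeline usually means nothing is
sudo netstat -tlnp | grep ':10000'

# If Beeline still cannot connect, check the server log for startup errors
sudo tail -n 100 /var/log/hive/hive-server2.log
```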

New Contributor
Posts: 3
Registered: ‎10-26-2015

Re: Exercise 3: Could not open client transport with JDBC

Hi Sean, it works! Thanks a lot!

New Contributor
Posts: 1
Registered: ‎01-30-2016

Re: Exercise 3: Could not open client transport with JDBC

Hi Sean,

 

I tried your solution but I'm still having the issue:

 

[cloudera@quickstart ~]$ sudo service hive-server2 status
Hive Server2 is dead and pid file exists                   [FAILED]
[cloudera@quickstart ~]$ sudo service hive-server2 restart
Stopped Hive Server2:                                      [  OK  ]
Started Hive Server2 (hive-server2):                       [  OK  ]
[cloudera@quickstart ~]$ beeline -u jdbc:hive2://quickstart:10000/default -n admin -d org.apache.hive.jdbc.HiveDriver
Connecting to jdbc:hive2://quickstart:10000/default
Error: Could not open client transport with JDBC Uri: jdbc:hive2://quickstart:10000/default: java.net.ConnectException: Connection refused (state=08S01,code=0)
Beeline version 1.1.0-cdh5.4.2 by Apache Hive
0: jdbc:hive2://quickstart:10000/default (closed)>

 

I have pasted the hive-server2.log from the /var/log/hive folder for your review.

2016-01-30 17:39:12,265 INFO  [main]: session.SessionState (SessionState.java:createPath(586)) - Created local directory: /tmp/152e254c-d379-4892-b45e-343f5da5b2c6_resources
2016-01-30 17:39:12,326 WARN  [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(339)) - Error starting HiveServer2 on attempt 1, will retry in 60 seconds
java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/hive/152e254c-d379-4892-b45e-343f5da5b2c6. Name node is in safe mode.
The reported blocks 430 needs additional 2 blocks to reach the threshold 0.9990 of total blocks 432.
The number of live datanodes 1 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1413)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4302)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4277)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:852)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:321)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:601)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)

	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
	at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:124)
	at org.apache.hive.service.cli.CLIService.init(CLIService.java:111)
	at org.apache.hive.service.CompositeService.init(CompositeService.java:59)
	at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:92)
	at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:309)
	at org.apache.hive.service.server.HiveServer2.access$400(HiveServer2.java:68)
	at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:523)
	at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:396)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/hive/152e254c-d379-4892-b45e-343f5da5b2c6. Name node is in safe mode.
The reported blocks 430 needs additional 2 blocks to reach the threshold 0.9990 of total blocks 432.
The number of live datanodes 1 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1413)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4302)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4277)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:852)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:321)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:601)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)

	at org.apache.hadoop.ipc.Client.call(Client.java:1468)
	at org.apache.hadoop.ipc.Client.call(Client.java:1399)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
	at com.sun.proxy.$Proxy17.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy18.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2760)
	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2731)
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
	at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
	at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
	... 14 more
2016-01-30 17:40:12,477 INFO  [main]: session.SessionState (SessionState.java:createPath(586)) - Created local directory: /tmp/8c30b4a1-7572-4fac-ab83-57759f295b38_resources
2016-01-30 17:40:12,481 WARN  [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(339)) - Error starting HiveServer2 on attempt 2, will retry in 60 seconds
java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/hive/8c30b4a1-7572-4fac-ab83-57759f295b38. Name node is in safe mode.
The reported blocks 430 needs additional 2 blocks to reach the threshold 0.9990 of total blocks 432.
The number of live datanodes 1 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1413)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4302)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4277)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:852)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:321)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:601)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)

	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
	at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:124)
	at org.apache.hive.service.cli.CLIService.init(CLIService.java:111)
	at org.apache.hive.service.CompositeService.init(CompositeService.java:59)
	at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:92)
	at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:309)
	at org.apache.hive.service.server.HiveServer2.access$400(HiveServer2.java:68)
	at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:523)
	at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:396)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/hive/8c30b4a1-7572-4fac-ab83-57759f295b38. Name node is in safe mode.
The reported blocks 430 needs additional 2 blocks to reach the threshold 0.9990 of total blocks 432.
The number of live datanodes 1 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1413)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4302)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4277)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:852)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:321)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:601)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)

	at org.apache.hadoop.ipc.Client.call(Client.java:1468)
	at org.apache.hadoop.ipc.Client.call(Client.java:1399)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
	at com.sun.proxy.$Proxy17.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy18.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2760)
	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2731)
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
	at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
	at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
	... 14 more


Thank you in advance,

Mayur J

Cloudera Employee
Posts: 435
Registered: ‎07-12-2013

Re: Exercise 3: Could not open client transport with JDBC

In your case it looks like you need to restart hadoop-hdfs-datanode.

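The log above points at the underlying cause: the NameNode is stuck in safe mode because blocks from a DataNode have not been reported. A sketch of how you might confirm and clear that, assuming a CDH-style single-node setup:

```shell
# Check whether the NameNode is still in safe mode
sudo -u hdfs hdfs dfsadmin -safemode get

# Restart the DataNode so the missing blocks get reported
sudo service hadoop-hdfs-datanode restart

# On a single-node sandbox you can also leave safe mode manually
# (only do this if you understand why the block threshold was not met)
sudo -u hdfs hdfs dfsadmin -safemode leave

# Then restart HiveServer2 and retry Beeline
sudo service hive-server2 restart
```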
New Contributor
Posts: 1
Registered: ‎03-19-2016

Re: Exercise 3: Could not open client transport with JDBC

I had the same problem and tried to check the status of the hive-server2 using the command you mentioned. It gave me an error saying that hive-server2 is an unrecognized service. Could you please help me solve this problem?
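
One possible explanation for "unrecognized service" (an assumption, not confirmed in this thread): the hive-server2 init script is simply not installed on that machine, often because the cluster is managed by Cloudera Manager rather than plain init scripts. One way to check:

```shell
# List the Hive-related init scripts installed on this machine
ls /etc/init.d/ | grep -i hive

# If nothing is listed, the services are likely managed by Cloudera Manager;
# start/stop HiveServer2 from the CM web UI instead of 'service hive-server2 ...'
```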

Explorer
Posts: 13
Registered: ‎12-07-2018

Re: Exercise 3: Could not open client transport with JDBC

The same problem happened to me:

[root@ukfhbda1-db01 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: oracle@GDC.LOCAL

Valid starting Expires Service principal
12/13/18 12:39:27 12/14/18 12:39:27 krbtgt/GDC.LOCAL@GDC.LOCAL
renew until 12/20/18 12:39:27
[root@ukfhbda1-db01 ~]#
[root@ukfhbda1-db01 ~]# beeline
Beeline version 1.1.0-cdh5.14.2 by Apache Hive
beeline> !connect 'jdbc:hive2://ukfhbda1-db04.gdc.local:10000/default;principal=hive/_HOST@GDC.LOCAL'
scan complete in 2ms
Connecting to jdbc:hive2://ukfhbda1-db04.gdc.local:10000/default;principal=hive/_HOST@GDC.LOCAL
Connected to: Apache Hive (version 1.1.0-cdh5.14.2)
Driver: Hive JDBC (version 1.1.0-cdh5.14.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://ukfhbda1-db04.gdc.local:10000> oracle
. . . . . . . . . . . . . . . . . . . . . . .> Experian123
. . . . . . . . . . . . . . . . . . . . . . .> create role admin_role;
Error: Error while compiling statement: FAILED: ParseException line 1:0 cannot recognize input near 'oracle' 'Experian123' 'create' (state=42000,code=40000)
0: jdbc:hive2://ukfhbda1-db04.gdc.local:10000>
0: jdbc:hive2://ukfhbda1-db04.gdc.local:10000>
0: jdbc:hive2://ukfhbda1-db04.gdc.local:10000> Closing: 0: jdbc:hive2://ukfhbda1-db04.gdc.local:10000/default;principal=hive/_HOST@GDC.LOCAL
[root@ukfhbda1-db01 ~]# su - oracle
[oracle@ukfhbda1-db01 ~]$ beeline
Beeline version 1.1.0-cdh5.14.2 by Apache Hive
beeline> !connect 'jdbc:hive2://ukfhbda1-db04.gdc.local:10000/default;principal=hive/_HOST@GDC.LOCAL'
scan complete in 2ms
Connecting to jdbc:hive2://ukfhbda1-db04.gdc.local:10000/default;principal=hive/_HOST@GDC.LOCAL
Connected to: Apache Hive (version 1.1.0-cdh5.14.2)
Driver: Hive JDBC (version 1.1.0-cdh5.14.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://ukfhbda1-db04.gdc.local:10000> create role admin_role;
INFO : Compiling command(queryId=hive_20181213140707_3023a4fb-b861-469a-b271-f69482c8dd34): create role admin_role
INFO : Semantic Analysis Completed
INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Completed compiling command(queryId=hive_20181213140707_3023a4fb-b861-469a-b271-f69482c8dd34); Time taken: 0.115 seconds
INFO : Executing command(queryId=hive_20181213140707_3023a4fb-b861-469a-b271-f69482c8dd34): create role admin_role
INFO : Starting task [Stage-0:DDL] in serial mode
ERROR : Error processing Sentry command: java.net.ConnectException: Connection refused (Connection refused).
ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.SentryGrantRevokeTask. SentryUserException: java.net.ConnectException: Connection refused (Connection refused)
INFO : Completed executing command(queryId=hive_20181213140707_3023a4fb-b861-469a-b271-f69482c8dd34); Time taken: 15.015 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.SentryGrantRevokeTask. SentryUserException: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=1)
0: jdbc:hive2://ukfhbda1-db04.gdc.local:10000> grant role admin_role to group hive;
INFO : Compiling command(queryId=hive_20181213141212_24f592d7-adcf-4a91-8d15-aa46a7220138): grant role admin_role to group hive
INFO : Semantic Analysis Completed
INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Completed compiling command(queryId=hive_20181213141212_24f592d7-adcf-4a91-8d15-aa46a7220138); Time taken: 0.172 seconds
INFO : Executing command(queryId=hive_20181213141212_24f592d7-adcf-4a91-8d15-aa46a7220138): grant role admin_role to group hive
INFO : Starting task [Stage-0:DDL] in serial mode
ERROR : Error processing Sentry command: java.net.ConnectException: Connection refused (Connection refused).
ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.SentryGrantRevokeTask. SentryUserException: java.net.ConnectException: Connection refused (Connection refused)
INFO : Completed executing command(queryId=hive_20181213141212_24f592d7-adcf-4a91-8d15-aa46a7220138); Time taken: 15.014 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.SentryGrantRevokeTask. SentryUserException: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=1)
0: jdbc:hive2://ukfhbda1-db04.gdc.local:10000> grant all on server server1 to role admin_role;
INFO : Compiling command(queryId=hive_20181213141212_08a3e86b-4c85-4ed5-ae99-9c22ca937130): grant all on server server1 to role admin_role
INFO : Semantic Analysis Completed
INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Completed compiling command(queryId=hive_20181213141212_08a3e86b-4c85-4ed5-ae99-9c22ca937130); Time taken: 0.079 seconds
INFO : Executing command(queryId=hive_20181213141212_08a3e86b-4c85-4ed5-ae99-9c22ca937130): grant all on server server1 to role admin_role
INFO : Starting task [Stage-0:DDL] in serial mode
ERROR : Error processing Sentry command: java.net.ConnectException: Connection refused (Connection refused).
ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.SentryGrantRevokeTask. SentryUserException: java.net.ConnectException: Connection refused (Connection refused)
INFO : Completed executing command(queryId=hive_20181213141212_08a3e86b-4c85-4ed5-ae99-9c22ca937130); Time taken: 15.014 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.SentryGrantRevokeTask. SentryUserException: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=1)

 

 
