<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Exercise 3: Could not open client transport with JDBC in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exercise-3-Could-not-open-client-transport-with-JDBC/m-p/33440#M9404</link>
    <description>Check that Hive Server 2 is running: 'sudo service hive-server2 status'. If&lt;BR /&gt;it's not, restart it with 'sudo service hive-server2 restart'. If you&lt;BR /&gt;continue having issues, have a look at the Hive Server 2 logs in /var/log/hive&lt;BR /&gt;for any errors.&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Mon, 26 Oct 2015 21:03:56 GMT</pubDate>
    <dc:creator>Sean</dc:creator>
    <dc:date>2015-10-26T21:03:56Z</dc:date>
    <item>
      <title>Exercise 3: Could not open client transport with JDBC</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exercise-3-Could-not-open-client-transport-with-JDBC/m-p/33438#M9403</link>
      <description>&lt;P&gt;I have the original_access_logs in the correct directory:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;[cloudera@quickstart ~]$ hadoop fs -ls /user/hive/warehouse
Found 7 items
drwxr-xr-x   - cloudera supergroup          0 2015-10-26 10:12 /user/hive/warehouse/categories
drwxr-xr-x   - cloudera supergroup          0 2015-10-26 10:12 /user/hive/warehouse/customers
drwxr-xr-x   - cloudera supergroup          0 2015-10-26 10:13 /user/hive/warehouse/departments
drwxr-xr-x   - cloudera supergroup          0 2015-10-26 10:13 /user/hive/warehouse/order_items
drwxr-xr-x   - cloudera supergroup          0 2015-10-26 10:13 /user/hive/warehouse/orders
drwxr-xr-x   - hdfs     supergroup          0 2015-10-26 12:36 /user/hive/warehouse/original_access_logs
drwxr-xr-x   - cloudera supergroup          0 2015-10-26 10:14 /user/hive/warehouse/products&lt;/PRE&gt;&lt;P&gt;But when I run the command:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;[cloudera@quickstart ~]$ beeline -u jdbc:hive2://quickstart:10000/default -n admin -d org.apache.hive.jdbc.HiveDriver&lt;/PRE&gt;&lt;P&gt;I get the error:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;Connecting to jdbc:hive2://quickstart:10000/default
Error: Could not open client transport with JDBC Uri: jdbc:hive2://quickstart:10000/default: java.net.ConnectException: Connection refused (state=08S01,code=0)
Beeline version 1.1.0-cdh5.4.2 by Apache Hive
0: jdbc:hive2://quickstart:10000/default (closed)&amp;gt; &lt;/PRE&gt;&lt;P&gt;Could someone please help me?&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 09:46:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exercise-3-Could-not-open-client-transport-with-JDBC/m-p/33438#M9403</guid>
      <dc:creator>jefa</dc:creator>
      <dc:date>2022-09-16T09:46:04Z</dc:date>
    </item>
    <item>
      <title>Re: Exercise 3: Could not open client transport with JDBC</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exercise-3-Could-not-open-client-transport-with-JDBC/m-p/33440#M9404</link>
      <description>Check that Hive Server 2 is running: 'sudo service hive-server2 status'. If&lt;BR /&gt;it's not, restart it with 'sudo service hive-server2 restart'. If you&lt;BR /&gt;continue having issues, have a look at the Hive Server 2 logs in /var/log/hive&lt;BR /&gt;for any errors.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 26 Oct 2015 21:03:56 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exercise-3-Could-not-open-client-transport-with-JDBC/m-p/33440#M9404</guid>
      <dc:creator>Sean</dc:creator>
      <dc:date>2015-10-26T21:03:56Z</dc:date>
    </item>
    <item>
      <title>Re: Exercise 3: Could not open client transport with JDBC</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exercise-3-Could-not-open-client-transport-with-JDBC/m-p/33619#M9405</link>
      <description>&lt;P&gt;Hi Sean, it works! Thanks a lot!&lt;/P&gt;</description>
      <pubDate>Fri, 30 Oct 2015 15:05:39 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exercise-3-Could-not-open-client-transport-with-JDBC/m-p/33619#M9405</guid>
      <dc:creator>jefa</dc:creator>
      <dc:date>2015-10-30T15:05:39Z</dc:date>
    </item>
    <item>
      <title>Re: Exercise 3: Could not open client transport with JDBC</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exercise-3-Could-not-open-client-transport-with-JDBC/m-p/36871#M9406</link>
      <description>&lt;P&gt;Hi Sean,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I tried your solution but still having the issue:&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[cloudera@quickstart ~]$ sudo service hive-server2 status&lt;BR /&gt;Hive Server2 is dead and pid file exists&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; [FAILED]&lt;BR /&gt;[cloudera@quickstart ~]$ sudo service hive-server2 restart&lt;BR /&gt;Stopped Hive Server2:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; [&amp;nbsp; OK&amp;nbsp; ]&lt;BR /&gt;Started Hive Server2 (hive-server2):&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; [&amp;nbsp; OK&amp;nbsp; ]&lt;BR /&gt;[cloudera@quickstart ~]$ beeline -u jdbc:hive2://quickstart:10000/default -n admin -d org.apache.hive.jdbc.HiveDriver&lt;BR /&gt;Connecting to jdbc:hive2://quickstart:10000/default&lt;BR /&gt;Error: Could not open client transport with JDBC Uri: jdbc:hive2://quickstart:10000/default: java.net.ConnectException: Connection refused (state=08S01,code=0)&lt;BR /&gt;Beeline version 1.1.0-cdh5.4.2 by Apache Hive&lt;BR /&gt;0: jdbc:hive2://quickstart:10000/default (closed)&amp;gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have pasted the hive-server2.log from the /var/log/hive&amp;nbsp; folder for your review.&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;2016-01-30 17:39:12,265 INFO  [main]: 
session.SessionState (SessionState.java:createPath(586)) - Created local directory: /tmp/152e254c-d379-4892-b45e-343f5da5b2c6_resources
2016-01-30 17:39:12,326 WARN  [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(339)) - Error starting HiveServer2 on attempt 1, will retry in 60 seconds
java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/hive/152e254c-d379-4892-b45e-343f5da5b2c6. Name node is in safe mode.
The reported blocks 430 needs additional 2 blocks to reach the threshold 0.9990 of total blocks 432.
The number of live datanodes 1 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1413)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4302)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4277)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:852)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:321)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:601)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)

	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
	at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:124)
	at org.apache.hive.service.cli.CLIService.init(CLIService.java:111)
	at org.apache.hive.service.CompositeService.init(CompositeService.java:59)
	at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:92)
	at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:309)
	at org.apache.hive.service.server.HiveServer2.access$400(HiveServer2.java:68)
	at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:523)
	at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:396)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/hive/152e254c-d379-4892-b45e-343f5da5b2c6. Name node is in safe mode.
The reported blocks 430 needs additional 2 blocks to reach the threshold 0.9990 of total blocks 432.
The number of live datanodes 1 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1413)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4302)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4277)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:852)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:321)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:601)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)

	at org.apache.hadoop.ipc.Client.call(Client.java:1468)
	at org.apache.hadoop.ipc.Client.call(Client.java:1399)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
	at com.sun.proxy.$Proxy17.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy18.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2760)
	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2731)
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
	at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
	at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
	... 14 more
2016-01-30 17:40:12,477 INFO  [main]: session.SessionState (SessionState.java:createPath(586)) - Created local directory: /tmp/8c30b4a1-7572-4fac-ab83-57759f295b38_resources
2016-01-30 17:40:12,481 WARN  [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(339)) - Error starting HiveServer2 on attempt 2, will retry in 60 seconds
java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/hive/8c30b4a1-7572-4fac-ab83-57759f295b38. Name node is in safe mode.
The reported blocks 430 needs additional 2 blocks to reach the threshold 0.9990 of total blocks 432.
The number of live datanodes 1 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1413)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4302)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4277)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:852)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:321)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:601)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)

	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
	at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:124)
	at org.apache.hive.service.cli.CLIService.init(CLIService.java:111)
	at org.apache.hive.service.CompositeService.init(CompositeService.java:59)
	at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:92)
	at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:309)
	at org.apache.hive.service.server.HiveServer2.access$400(HiveServer2.java:68)
	at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:523)
	at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:396)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/hive/8c30b4a1-7572-4fac-ab83-57759f295b38. Name node is in safe mode.
The reported blocks 430 needs additional 2 blocks to reach the threshold 0.9990 of total blocks 432.
The number of live datanodes 1 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1413)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4302)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4277)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:852)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:321)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:601)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)

	at org.apache.hadoop.ipc.Client.call(Client.java:1468)
	at org.apache.hadoop.ipc.Client.call(Client.java:1399)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
	at com.sun.proxy.$Proxy17.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy18.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2760)
	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2731)
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
	at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
	at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
	... 14 more&lt;/PRE&gt;&lt;P&gt;&lt;BR /&gt;Thank you in advance,&lt;/P&gt;&lt;P&gt;Mayur J&lt;/P&gt;</description>
      <pubDate>Sun, 31 Jan 2016 01:42:48 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exercise-3-Could-not-open-client-transport-with-JDBC/m-p/36871#M9406</guid>
      <dc:creator>mjainpmp</dc:creator>
      <dc:date>2016-01-31T01:42:48Z</dc:date>
    </item>
    <item>
      <title>Re: Exercise 3: Could not open client transport with JDBC</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exercise-3-Could-not-open-client-transport-with-JDBC/m-p/36877#M9407</link>
      <description>In your case it looks like you need to restart hadoop-hdfs-datanode.&lt;BR /&gt;</description>
      <pubDate>Sun, 31 Jan 2016 14:16:52 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exercise-3-Could-not-open-client-transport-with-JDBC/m-p/36877#M9407</guid>
      <dc:creator>Sean</dc:creator>
      <dc:date>2016-01-31T14:16:52Z</dc:date>
    </item>
    <item>
      <title>Re: Exercise 3: Could not open client transport with JDBC</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exercise-3-Could-not-open-client-transport-with-JDBC/m-p/38854#M9408</link>
      <description>&lt;P&gt;I had the same problem and tried to check the status of the hive-server2 using the command you mentioned. It gave me an error saying that hive-server2 is an unrecognized service. Could you please help me solve this problem?&lt;/P&gt;</description>
      <pubDate>Sat, 19 Mar 2016 19:37:58 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exercise-3-Could-not-open-client-transport-with-JDBC/m-p/38854#M9408</guid>
      <dc:creator>4everhemanth</dc:creator>
      <dc:date>2016-03-19T19:37:58Z</dc:date>
    </item>
    <item>
      <title>Re: Exercise 3: Could not open client transport with JDBC</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exercise-3-Could-not-open-client-transport-with-JDBC/m-p/83850#M9409</link>
      <description>&lt;P&gt;Same problem happened with me&lt;/P&gt;&lt;P&gt;[root@ukfhbda1-db01 ~]# klist&lt;BR /&gt;Ticket cache: FILE:/tmp/krb5cc_0&lt;BR /&gt;Default principal: oracle@GDC.LOCAL&lt;/P&gt;&lt;P&gt;Valid starting Expires Service principal&lt;BR /&gt;12/13/18 12:39:27 12/14/18 12:39:27 krbtgt/GDC.LOCAL@GDC.LOCAL&lt;BR /&gt;renew until 12/20/18 12:39:27&lt;BR /&gt;[root@ukfhbda1-db01 ~]#&lt;BR /&gt;[root@ukfhbda1-db01 ~]# beeline&lt;BR /&gt;Beeline version 1.1.0-cdh5.14.2 by Apache Hive&lt;BR /&gt;beeline&amp;gt; !connect 'jdbc:hive2://ukfhbda1-db04.gdc.local:10000/default;principal=hive/_HOST@GDC.LOCAL'&lt;BR /&gt;scan complete in 2ms&lt;BR /&gt;Connecting to jdbc:hive2://ukfhbda1-db04.gdc.local:10000/default;principal=hive/_HOST@GDC.LOCAL&lt;BR /&gt;Connected to: Apache Hive (version 1.1.0-cdh5.14.2)&lt;BR /&gt;Driver: Hive JDBC (version 1.1.0-cdh5.14.2)&lt;BR /&gt;Transaction isolation: TRANSACTION_REPEATABLE_READ&lt;BR /&gt;0: jdbc:hive2://ukfhbda1-db04.gdc.local:10000&amp;gt; oracle&lt;BR /&gt;. . . . . . . . . . . . . . . . . . . . . . .&amp;gt; Experian123&lt;BR /&gt;. . . . . . . . . . . . . . . . . . . . . . 
.&amp;gt; create role admin_role;&lt;BR /&gt;Error: Error while compiling statement: FAILED: ParseException line 1:0 cannot recognize input near 'oracle' 'Experian123' 'create' (state=42000,code=40000)&lt;BR /&gt;0: jdbc:hive2://ukfhbda1-db04.gdc.local:10000&amp;gt;&lt;BR /&gt;0: jdbc:hive2://ukfhbda1-db04.gdc.local:10000&amp;gt;&lt;BR /&gt;0: jdbc:hive2://ukfhbda1-db04.gdc.local:10000&amp;gt; Closing: 0: jdbc:hive2://ukfhbda1-db04.gdc.local:10000/default;principal=hive/_HOST@GDC.LOCAL&lt;BR /&gt;[root@ukfhbda1-db01 ~]# su - oracle&lt;BR /&gt;[oracle@ukfhbda1-db01 ~]$ beeline&lt;BR /&gt;Beeline version 1.1.0-cdh5.14.2 by Apache Hive&lt;BR /&gt;beeline&amp;gt; !connect 'jdbc:hive2://ukfhbda1-db04.gdc.local:10000/default;principal=hive/_HOST@GDC.LOCAL'&lt;BR /&gt;scan complete in 2ms&lt;BR /&gt;Connecting to jdbc:hive2://ukfhbda1-db04.gdc.local:10000/default;principal=hive/_HOST@GDC.LOCAL&lt;BR /&gt;Connected to: Apache Hive (version 1.1.0-cdh5.14.2)&lt;BR /&gt;Driver: Hive JDBC (version 1.1.0-cdh5.14.2)&lt;BR /&gt;Transaction isolation: TRANSACTION_REPEATABLE_READ&lt;BR /&gt;0: jdbc:hive2://ukfhbda1-db04.gdc.local:10000&amp;gt; create role admin_role;&lt;BR /&gt;INFO : Compiling command(queryId=hive_20181213140707_3023a4fb-b861-469a-b271-f69482c8dd34): create role admin_role&lt;BR /&gt;INFO : Semantic Analysis Completed&lt;BR /&gt;INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)&lt;BR /&gt;INFO : Completed compiling command(queryId=hive_20181213140707_3023a4fb-b861-469a-b271-f69482c8dd34); Time taken: 0.115 seconds&lt;BR /&gt;INFO : Executing command(queryId=hive_20181213140707_3023a4fb-b861-469a-b271-f69482c8dd34): create role admin_role&lt;BR /&gt;INFO : Starting task [Stage-0:DDL] in serial mode&lt;BR /&gt;ERROR : Error processing Sentry command: java.net.ConnectException: Connection refused (Connection refused).&lt;BR /&gt;ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.SentryGrantRevokeTask. 
SentryUserException: java.net.ConnectException: Connection refused (Connection refused)&lt;BR /&gt;INFO : Completed executing command(queryId=hive_20181213140707_3023a4fb-b861-469a-b271-f69482c8dd34); Time taken: 15.015 seconds&lt;BR /&gt;Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.SentryGrantRevokeTask. SentryUserException: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=1)&lt;BR /&gt;0: jdbc:hive2://ukfhbda1-db04.gdc.local:10000&amp;gt; grant role admin_role to group hive;&lt;BR /&gt;INFO : Compiling command(queryId=hive_20181213141212_24f592d7-adcf-4a91-8d15-aa46a7220138): grant role admin_role to group hive&lt;BR /&gt;INFO : Semantic Analysis Completed&lt;BR /&gt;INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)&lt;BR /&gt;INFO : Completed compiling command(queryId=hive_20181213141212_24f592d7-adcf-4a91-8d15-aa46a7220138); Time taken: 0.172 seconds&lt;BR /&gt;INFO : Executing command(queryId=hive_20181213141212_24f592d7-adcf-4a91-8d15-aa46a7220138): grant role admin_role to group hive&lt;BR /&gt;INFO : Starting task [Stage-0:DDL] in serial mode&lt;BR /&gt;ERROR : Error processing Sentry command: java.net.ConnectException: Connection refused (Connection refused).&lt;BR /&gt;ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.SentryGrantRevokeTask. SentryUserException: java.net.ConnectException: Connection refused (Connection refused)&lt;BR /&gt;INFO : Completed executing command(queryId=hive_20181213141212_24f592d7-adcf-4a91-8d15-aa46a7220138); Time taken: 15.014 seconds&lt;BR /&gt;Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.SentryGrantRevokeTask. 
SentryUserException: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=1)&lt;BR /&gt;0: jdbc:hive2://ukfhbda1-db04.gdc.local:10000&amp;gt; grant all on server server1 to role admin_role;&lt;BR /&gt;INFO : Compiling command(queryId=hive_20181213141212_08a3e86b-4c85-4ed5-ae99-9c22ca937130): grant all on server server1 to role admin_role&lt;BR /&gt;INFO : Semantic Analysis Completed&lt;BR /&gt;INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)&lt;BR /&gt;INFO : Completed compiling command(queryId=hive_20181213141212_08a3e86b-4c85-4ed5-ae99-9c22ca937130); Time taken: 0.079 seconds&lt;BR /&gt;INFO : Executing command(queryId=hive_20181213141212_08a3e86b-4c85-4ed5-ae99-9c22ca937130): grant all on server server1 to role admin_role&lt;BR /&gt;INFO : Starting task [Stage-0:DDL] in serial mode&lt;BR /&gt;ERROR : Error processing Sentry command: java.net.ConnectException: Connection refused (Connection refused).&lt;BR /&gt;ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.SentryGrantRevokeTask. SentryUserException: java.net.ConnectException: Connection refused (Connection refused)&lt;BR /&gt;INFO : Completed executing command(queryId=hive_20181213141212_08a3e86b-4c85-4ed5-ae99-9c22ca937130); Time taken: 15.014 seconds&lt;BR /&gt;Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.SentryGrantRevokeTask. SentryUserException: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=1)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 13 Dec 2018 14:25:45 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Exercise-3-Could-not-open-client-transport-with-JDBC/m-p/83850#M9409</guid>
      <dc:creator>pra_big</dc:creator>
      <dc:date>2018-12-13T14:25:45Z</dc:date>
    </item>
  </channel>
</rss>