Member since: 02-02-2021
Posts: 116
Kudos Received: 2
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 747 | 08-13-2021 09:44 AM |
| | 3710 | 04-27-2021 04:23 PM |
| | 1389 | 04-26-2021 10:47 AM |
| | 924 | 03-29-2021 06:01 PM |
| | 2758 | 03-17-2021 04:53 PM |
05-31-2021
09:05 PM
Yeah, we currently have 2 HS2 instances. For some reason production works fine with SQuirreL, but dev times out after running even simple queries such as "show databases". Beeline works fine on our dev cluster. The only difference I can think of is that our dev cluster uses an external MySQL server, whereas on the production cluster the MySQL server is installed on one of the nodes. Am I missing some SQuirreL drivers or something? I'm wondering why only SQuirreL has issues running queries against our dev HiveServer2. Any help is much appreciated. Thanks,
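Since beeline works against the same dev HS2, one way to isolate SQuirreL is to make sure both clients use an identical JDBC URL and the same Hive JDBC driver jars taken from the dev cluster. A minimal sketch (hostnames are placeholders; the discovery parameters are the standard ones for a ZooKeeper-registered HS2):

```bash
# If this URL works in beeline, paste exactly the same URL into SQuirreL's
# alias and register the same hive-jdbc jars from the dev cluster as the driver.
beeline -n myuser \
  -u "jdbc:hive2://dev-zk1:2181,dev-zk2:2181,dev-zk3:2181/default;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2" \
  -e "show databases;"
```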
05-28-2021
06:44 PM
Most of the time I get this timeout, even after restarting Hive. It happens on our dev cluster; I am able to connect to Hive there with beeline, and our production cluster does not seem to have this issue. From the SQuirreL logs I see this:

```
2021-05-28 20:38:10,136 [Thread-1] WARN net.sourceforge.squirrel_sql.fw.sql.SQLDatabaseMetaData - DatabaseMetaData.getTables(...) threw an error when called with tableNamePattern = null. Trying tableNamePattern %. The error was: java.sql.SQLException: java.net.SocketTimeoutException: Read timed out
2021-05-28 20:38:40,145 [Thread-1] ERROR net.sourceforge.squirrel_sql.client.session.schemainfo.SchemaInfo - failed to load table names
java.sql.SQLException: java.net.SocketTimeoutException: Read timed out
    at org.apache.hive.jdbc.HiveDatabaseMetaData.getTables(HiveDatabaseMetaData.java:656)
    at net.sourceforge.squirrel_sql.fw.sql.SQLDatabaseMetaData.getTables(SQLDatabaseMetaData.java:1008)
    at net.sourceforge.squirrel_sql.client.session.schemainfo.SchemaInfo.privateLoadTables(SchemaInfo.java:1212)
    at net.sourceforge.squirrel_sql.client.session.schemainfo.SchemaInfo.loadTables(SchemaInfo.java:412)
    at net.sourceforge.squirrel_sql.client.session.schemainfo.SchemaInfo.privateLoadAll(SchemaInfo.java:303)
    at net.sourceforge.squirrel_sql.client.session.schemainfo.SchemaInfo.initialLoad(SchemaInfo.java:179)
    at net.sourceforge.squirrel_sql.client.session.Session$1.run(Session.java:261)
    at net.sourceforge.squirrel_sql.fw.util.TaskExecuter.run(TaskExecuter.java:82)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
    at org.apache.thrift.transport.TSaslTransport.readLength(TSaslTransport.java:376)
    at org.apache.thrift.transport.TSaslTransport.readFrame(TSaslTransport.java:453)
    at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:435)
    at org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:37)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
    at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
    at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
    at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_GetTables(TCLIService.java:321)
    at org.apache.hive.service.cli.thrift.TCLIService$Client.GetTables(TCLIService.java:308)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1363)
    at com.sun.proxy.$Proxy5.GetTables(Unknown Source)
    at org.apache.hive.jdbc.HiveDatabaseMetaData.getTables(HiveDatabaseMetaData.java:654)
    ... 8 more
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.base/java.net.SocketInputStream.socketRead0(Native Method)
    at java.base/java.net.SocketInputStream.socketRead(SocketInputStream.java:115)
    at java.base/java.net.SocketInputStream.read(SocketInputStream.java:168)
    at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
    at java.base/java.io.BufferedInputStream.fill(BufferedInputStream.java:252)
    at java.base/java.io.BufferedInputStream.read1(BufferedInputStream.java:292)
    at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:351)
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
    ... 27 more
```
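The client gives up roughly 30 seconds after the WARN line above, so it is worth checking whether HS2 on dev ever finished the GetTables call or stalled talking to its (external MySQL) metastore. A sketch, assuming the default HDP log location; the path and the grep patterns are assumptions:

```bash
# Look for the metadata call and for common causes of a stalled HS2 around
# the time of the client timeout (20:38 in the log above).
grep -n -E 'GetTables|OutOfMemoryError|metastore' \
  /var/log/hive/hiveserver2.log | tail -40
```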
05-28-2021
04:25 PM
Even after I increased the HiveServer2 heap and restarted Hive, I don't see any connections, yet SQuirreL still has issues connecting to the dev cluster. Our prod cluster seems to work fine. Any help is much appreciated. Thanks,
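One quick sanity check after a heap change, as a sketch assuming a standard process layout: confirm the running HS2 JVM actually picked up the new setting after the restart.

```bash
# Show the -Xmx flag of the live HiveServer2 process (the grep pattern is an
# assumption; adjust it if your process name differs).
ps -ef | grep '[H]iveServer2' | grep -o -- '-Xmx[0-9]*[mMgG]'
```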
05-28-2021
02:55 PM
So I just counted our connections, and it looks like we have around 40 open at the same time. How should we decide how many Hive instances we need?
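For rough sizing, a hedged rule of thumb rather than an official one: 40 concurrent sessions is well below HiveServer2's default cap of 500 Thrift worker threads, so the instance count is unlikely to be the limit by itself. A small sketch to sample the connection count over time on the HS2 host (10000 is the default binary Thrift port; adjust if yours differs):

```bash
# Print a timestamped count of established client connections every minute.
while sleep 60; do
  echo "$(date '+%F %T') $(netstat -tan | grep ':10000' | grep -c ESTABLISHED)"
done
```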
05-28-2021
02:05 PM
@LeetKrew93 We currently have 2 instances each of HMS and HS2. If the JDBC string includes all the ZooKeeper nodes, doesn't ZooKeeper help load-balance Hive? Also, I believe our dev cluster has lots of connections. Is there a way to increase the maximum number of connections, or should I create an extra Hive instance?
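On the ZooKeeper question: with serviceDiscoveryMode=zooKeeper, the client picks one of the HS2 instances registered under the ZooKeeper namespace, which is what spreads sessions across instances. A quick way to see what is actually registered; the client path and hostname are assumptions, and hiveserver2 is the default namespace:

```bash
# List the HS2 instances currently registered for discovery; each child znode
# is one live HiveServer2 that clients can be routed to.
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server dev-zk1:2181 ls /hiveserver2
```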
05-28-2021
01:36 PM
SQuirreL against the production cluster behaves a bit better, but I noticed there are fewer existing connections there. I will do more research on what the others suggested in the comments. Thanks for the response.
05-27-2021
02:04 PM
Hi experts, Many of our devs are getting the following error when trying to connect with SQuirreL:

```
Error: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
SQLState: 08S01
ErrorCode: 0
```

I believe the issue may be that there are too many concurrent Hive connections. Can someone suggest how to configure Hive to allow more connections? I also find it odd that I can connect through beeline from that cluster's edge node while SQuirreL gives an error. Any help is much appreciated. Thanks,
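If the bottleneck really is the number of concurrent clients, HiveServer2's Thrift worker-thread pool is the relevant knob. A minimal sketch, assuming an Ambari-managed HDP-style layout (the config path is an assumption):

```bash
# HiveServer2 caps concurrent Thrift clients with these hive-site properties
# (defaults are min=5, max=500; change them via Ambari, then restart HS2):
#   hive.server2.thrift.min.worker.threads
#   hive.server2.thrift.max.worker.threads
# Inspect the values currently deployed on the HS2 host:
grep -B1 -A2 'hive.server2.thrift.*worker.threads' /etc/hive/conf/hive-site.xml
```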
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Hive
05-26-2021
03:26 PM
When running sqlline.py, I get the error below. Note that the file "/usr/lib/phoenix/phoenix-server.jar" exists on every HBase master and regionserver in the cluster.

```
[root@test01 bin]# ./sqlline.py
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix: none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/phoenix/phoenix-4.7.0.2.6.1.0-129-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
21/05/26 17:18:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
21/05/26 17:18:53 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
Error: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
    at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:2051)
    at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1897)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1799)
    at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:488)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291) (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
    at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:2051)
    at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1897)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1799)
    at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:488)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
    at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1135)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1427)
    at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2190)
    at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:872)
    at org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:194)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:343)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:329)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1421)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2390)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2339)
    at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2339)
    at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:237)
    at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
    at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:205)
    at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
    at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
    at sqlline.Commands.connect(Commands.java:1064)
    at sqlline.Commands.connect(Commands.java:996)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
    at sqlline.SqlLine.dispatch(SqlLine.java:804)
    at sqlline.SqlLine.initArgs(SqlLine.java:588)
    at sqlline.SqlLine.begin(SqlLine.java:656)
    at sqlline.SqlLine.start(SqlLine.java:398)
    at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
    at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:2051)
    at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1897)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1799)
    at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:488)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:226)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:240)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:140)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4403)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:748)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:669)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1067)
    ... 30 more
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
    at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:2051)
    at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1897)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1799)
    at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:488)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1225)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.createTable(MasterProtos.java:62907)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.createTable(ConnectionManager.java:1832)
    at org.apache.hadoop.hbase.client.HBaseAdmin$5.call(HBaseAdmin.java:757)
    at org.apache.hadoop.hbase.client.HBaseAdmin$5.call(HBaseAdmin.java:749)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    ... 34 more
sqlline version 1.1.8
0: jdbc:phoenix:>
```

Any help is much appreciated. Thanks,
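"Class org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded" usually means the HBase master and regionserver JVMs cannot see the Phoenix server jar on their classpath; the file merely existing under /usr/lib/phoenix is not enough. A sketch of the usual fix, with paths that are assumptions for this layout (run on every master and regionserver, then restart HBase):

```bash
# Put the Phoenix server jar where HBase loads its libraries from.
ln -sf /usr/lib/phoenix/phoenix-server.jar /usr/lib/hbase/lib/phoenix-server.jar
# Confirm the coprocessor class is really inside that jar.
unzip -l /usr/lib/hbase/lib/phoenix-server.jar | grep MetaDataEndpointImpl
```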
05-14-2021
03:02 PM
Hi experts, So I recently installed HBase via Ambari. I am able to start the HBase master and regionserver. However, when I start the Phoenix Query Server, Ambari does not throw any errors, yet within a couple of seconds the Query Server stops. This is the error I found:

```
[root@test02 ~]# cat /var/log/hbase/phoenix-hbase-queryserver.out
Error: Could not find or load main class org.apache.phoenix.queryserver.server.QueryServer
starting Query Server, logging to /var/log/hbase/phoenix-hbase-queryserver.log
2021-05-14 16:39:23.784483 launching /usr/jdk64/jdk1.8.0_112/bin/java -cp /etc/hbase/conf:/etc/hadoop/conf:/usr/lib/phoenix/bin/../phoenix-4.15.0-HBase-1.5-client.jar:::/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/.//*:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/.//*::mysql-connector-java.jar:/usr/lib/hadoop-mapreduce/*:/usr/lib/tez/*:/usr/lib/tez/lib/*:/etc/tez/conf -Dproc_phoenixserver -Dlog4j.configuration=file:/usr/lib/phoenix/bin/log4j.properties -Dpsql.root.logger=INFO,DRFA -Dpsql.log.dir=/var/log/hbase -Dpsql.log.file=phoenix-hbase-queryserver.log org.apache.phoenix.queryserver.server.QueryServer
close failed in file object destructor:
IOError: [Errno 9] Bad file descriptor
```

Any help is much appreciated. Thanks,
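The first line of that output is the real failure: the JVM cannot find org.apache.phoenix.queryserver.server.QueryServer on the classpath it was launched with. In Phoenix releases around 4.15 the Query Server moved into separate phoenix-queryserver artifacts, so the client jar alone may not contain it. A sketch to find which jar, if any, ships the class (paths are assumptions):

```bash
# Scan the Phoenix jars for the Query Server main class.
for j in /usr/lib/phoenix/*.jar; do
  unzip -l "$j" 2>/dev/null | grep -q 'queryserver/server/QueryServer.class' \
    && echo "found in: $j"
done
```

If no jar matches, installing the separate phoenix-queryserver package, or pointing the launcher at a jar that actually contains the class, would be the next thing to try.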
Labels:
- Apache Ambari
- Apache HBase
- Apache Phoenix