Member since
02-02-2021
116 Posts
2 Kudos Received
5 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1309 | 08-13-2021 09:44 AM |
| | 5982 | 04-27-2021 04:23 PM |
| | 2333 | 04-26-2021 10:47 AM |
| | 1527 | 03-29-2021 06:01 PM |
| | 4200 | 03-17-2021 04:53 PM |
05-28-2021
02:55 PM
So I just counted our connections, and it seems we have around 40 going on at the same time. How should we determine how many Hive instances we need?
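For anyone counting the same way: one (hedged) approach is to count ESTABLISHED sockets on the HiveServer2 Thrift port, which is 10000 by default (an assumption; check `hive.server2.thrift.port` on your cluster). A sketch over sample output:

```shell
# Sample `netstat -an` lines standing in for the real command run on the
# HiveServer2 host; 10000 is the default HS2 Thrift port (assumed here).
sample='tcp 0 0 10.0.0.1:10000 10.0.0.7:51044 ESTABLISHED
tcp 0 0 10.0.0.1:10000 10.0.0.8:51045 ESTABLISHED
tcp 0 0 10.0.0.1:22    10.0.0.9:51046 ESTABLISHED'

# Count ESTABLISHED connections to port 10000.
echo "$sample" | grep ':10000 ' | grep -c ESTABLISHED   # → 2
```

On the real host, replace the sample with live `netstat -an` (or `ss -tan`) output.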
05-28-2021
02:05 PM
@LeetKrew93 We currently have 2 instances each of HMS and HS2. If the JDBC string includes all the ZooKeepers, doesn't ZooKeeper help load balance Hive? Also, our dev cluster has lots of connections. Is there a way to increase the max number of connections, or should I create an extra Hive instance?
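On the ZooKeeper point: with a discovery-style URL such as `jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2`, each client is directed to one of the registered HS2 instances at connect time, so load is spread across instances rather than balanced per query. On the max-connections side, HS2's Thrift worker pool is bounded by `hive.server2.thrift.max.worker.threads` (default 500 in most distributions; verify the exact default for your version). A hedged hive-site.xml sketch:

```xml
<!-- Sketch only; confirm the property and a sensible value for your Hive
     version before applying, then restart HiveServer2. -->
<property>
  <name>hive.server2.thrift.max.worker.threads</name>
  <value>1000</value>
</property>
```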
05-28-2021
01:36 PM
SQuirreL connecting to the production cluster works a bit better, though I noticed there are fewer existing connections there. I will do more research on what the others suggested in the comments. Thanks for the response.
05-27-2021
02:04 PM
Hi experts,

Many of the devs are getting the following error when trying to connect with SQuirreL:

```
Error: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
SQLState: 08S01
ErrorCode: 0
```

I believe the issue may be that there are too many concurrent Hive connections. Can someone suggest how to configure this to allow more Hive connections? I also find it odd that I can connect through Beeline from that cluster's edge node, while SQuirreL gives an error.

Thanks, any help is much appreciated.
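Since Beeline from the edge node works while SQuirreL times out, it may be worth ruling out basic network reachability from the client machine first. A hedged sketch (host and port are placeholders for whatever HS2 endpoint SQuirreL is pointed at):

```shell
check_port() {
  # Attempt a TCP connect to $1:$2, capped at 5 seconds; prints "open"
  # or "closed/filtered". Uses bash's /dev/tcp, so bash is required.
  if timeout 5 bash -c ">/dev/tcp/$1/$2" 2>/dev/null; then
    echo "open"
  else
    echo "closed/filtered"
  fi
}

check_port hs2.example.com 10000   # placeholder HS2 host/port
```

If the port is unreachable from the workstation but open from the edge node, the problem is a firewall or routing issue rather than Hive's connection limit.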
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Hive
05-26-2021
03:26 PM
When running sqlline.py, I get the following error. Note that the file "/usr/lib/phoenix/phoenix-server.jar" exists on all the HBase masters and regionservers in the cluster.

```
[root@test01 bin]# ./sqlline.py
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix: none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/phoenix/phoenix-4.7.0.2.6.1.0-129-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
21/05/26 17:18:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
21/05/26 17:18:53 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
Error: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
    at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:2051)
    at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1897)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1799)
    at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:488)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291) (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
    at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:2051)
    at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1897)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1799)
    at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:488)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
    at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1135)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1427)
    at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2190)
    at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:872)
    at org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:194)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:343)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:329)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1421)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2390)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2339)
    at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2339)
    at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:237)
    at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
    at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:205)
    at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
    at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
    at sqlline.Commands.connect(Commands.java:1064)
    at sqlline.Commands.connect(Commands.java:996)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
    at sqlline.SqlLine.dispatch(SqlLine.java:804)
    at sqlline.SqlLine.initArgs(SqlLine.java:588)
    at sqlline.SqlLine.begin(SqlLine.java:656)
    at sqlline.SqlLine.start(SqlLine.java:398)
    at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
    at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:2051)
    at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1897)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1799)
    at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:488)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:226)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:240)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:140)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4403)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsyncV2(HBaseAdmin.java:748)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:669)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1067)
    ... 30 more
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataEndpointImpl cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
    at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:2051)
    at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1897)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1799)
    at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:488)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:311)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:291)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1225)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.createTable(MasterProtos.java:62907)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.createTable(ConnectionManager.java:1832)
    at org.apache.hadoop.hbase.client.HBaseAdmin$5.call(HBaseAdmin.java:757)
    at org.apache.hadoop.hbase.client.HBaseAdmin$5.call(HBaseAdmin.java:749)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    ... 34 more
sqlline version 1.1.8
0: jdbc:phoenix:>
```

Any help is much appreciated. Thanks,
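For what it's worth, this error typically means the HBase master/regionserver JVMs cannot load the Phoenix coprocessor class, even if the jar exists somewhere on disk; HBase normally picks it up from its own lib directory. The paths below are common HDP-style locations and are assumptions to adapt:

```shell
check_jar() {
  # Report whether a jar exists at the given path; HBase only loads jars
  # that are actually on its classpath (normally hbase/lib).
  if [ -e "$1" ]; then echo "present: $1"; else echo "missing: $1"; fi
}

check_jar /usr/lib/phoenix/phoenix-server.jar    # where the post says it lives
check_jar /usr/lib/hbase/lib/phoenix-server.jar  # where HBase would load it from
# If the second is missing, copy or symlink the jar into hbase/lib on every
# master and regionserver, then restart HBase.
```

Also note the log shows a 4.7.0 client jar; a Phoenix client/server version mismatch can produce the same coprocessor load failure, so it may be worth confirming the server jar version matches.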
05-14-2021
03:02 PM
Hi experts,

I recently installed HBase via Ambari. I am able to start the HBase master and regionserver. However, when I start the Phoenix Query Server, Ambari does not throw any errors, yet within a couple of seconds the Query Server stops. This is the error I found:

```
[root@test02 ~]# cat /var/log/hbase/phoenix-hbase-queryserver.out
Error: Could not find or load main class org.apache.phoenix.queryserver.server.QueryServer
starting Query Server, logging to /var/log/hbase/phoenix-hbase-queryserver.log
2021-05-14 16:39:23.784483 launching /usr/jdk64/jdk1.8.0_112/bin/java -cp /etc/hbase/conf:/etc/hadoop/conf:/usr/lib/phoenix/bin/../phoenix-4.15.0-HBase-1.5-client.jar:::/etc/hadoop/conf:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/.//*:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/.//*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/.//*::mysql-connector-java.jar:/usr/lib/hadoop-mapreduce/*:/usr/lib/tez/*:/usr/lib/tez/lib/*:/etc/tez/conf -Dproc_phoenixserver -Dlog4j.configuration=file:/usr/lib/phoenix/bin/log4j.properties -Dpsql.root.logger=INFO,DRFA -Dpsql.log.dir=/var/log/hbase -Dpsql.log.file=phoenix-hbase-queryserver.log org.apache.phoenix.queryserver.server.QueryServer
close failed in file object destructor:
IOError: [Errno 9] Bad file descriptor
```

Any help is much appreciated. Thanks,
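"Could not find or load main class" suggests the QueryServer class is not inside the jar the launcher put on the classpath (the log shows phoenix-4.15.0-HBase-1.5-client.jar). Around Phoenix 4.15 the Query Server was split out into its own phoenix-queryserver artifact, so the class may simply not ship in the client jar anymore; treat that as a hypothesis to verify. A sketch of checking a jar listing for the class, using a sample listing in place of real `unzip -l` output:

```shell
# Stand-in for: unzip -l /usr/lib/phoenix/phoenix-4.15.0-HBase-1.5-client.jar
listing='org/apache/phoenix/jdbc/PhoenixDriver.class
org/apache/phoenix/util/ServerUtil.class'

if echo "$listing" | grep -q 'phoenix/queryserver/server/QueryServer'; then
  echo "QueryServer class found"
else
  echo "QueryServer class missing"
fi
```

If the class is missing from every jar on that `-cp` string, adding the separate phoenix-queryserver jar (and its thin client) would be the thing to try.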
Labels:
- Apache Ambari
- Apache HBase
- Apache Phoenix
04-27-2021
04:23 PM
1 Kudo
Here is an update. I finally was able to use Ambari to get Hive installed and the services started using the Apache Bigtop repo. I was also able to connect via the Hive CLI as well as Beeline (HiveServer2) and ran a simple "show databases;", which succeeded. After symlinking the following directories, HiveServer2 finally started successfully:

```
[root@test ~]# ll /usr/bgtp/current/
total 32
lrwxrwxrwx 1 root root 13 Apr 23 19:38 hive-client -> /usr/lib/hive
lrwxrwxrwx 1 root root 13 Apr 23 19:37 hive-metastore -> /usr/lib/hive
lrwxrwxrwx 1 root root 13 Apr 27 16:28 hive-server2 -> /usr/lib/hive
lrwxrwxrwx 1 root root 22 Apr 27 16:29 hive-webhcat -> /usr/lib/hive-hcatalog
```

I did not find any documentation on how to install Hive from the Apache Bigtop packages using Ambari; there was only documentation for installing it from the command line. If anyone finds documentation on how to install the different Apache Bigtop components, please let me know. Thanks,
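For anyone hitting the same thing, the symlinks above can be scripted. A hedged sketch; the /usr/bgtp/current layout and /usr/lib targets are from my setup, so adjust for yours and run as root on the Ambari-managed node:

```shell
BGTP_CURRENT=${BGTP_CURRENT:-/usr/bgtp/current}  # layout root (assumed)
mkdir -p "$BGTP_CURRENT"

# Point the component dirs Ambari expects at the Bigtop Hive install.
for comp in hive-client hive-metastore hive-server2; do
  ln -sfn /usr/lib/hive "$BGTP_CURRENT/$comp"
done
ln -sfn /usr/lib/hive-hcatalog "$BGTP_CURRENT/hive-webhcat"
```

`ln -sfn` replaces any existing link in place, so the script is safe to re-run after upgrades.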
04-27-2021
02:23 PM
UPDATE: I tried to reinstall Hive again and commented out those packages as I mentioned earlier in this thread. I then noticed this error at the bottom in Ambari:

```
resource_management.core.exceptions.Fail: Configuration parameter 'hiveserver2-site' was not found in configurations dictionary!
```

So I ran the command

```
cp /var/lib/ambari-server/resources/stacks/BigInsights/4.2.5/services/HIVE/configuration/hiveserver2-site.xml /var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/configuration
```

restarted Ambari, and now I don't get any errors in Ambari anymore. Per Ambari, all the Hive components start successfully; however, HiveServer2 seems to crash right after Ambari starts it. All the other Hive components remain running.
04-27-2021
11:22 AM
When modifying the file "/var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/metainfo.xml" to bypass the above errors, I am able to successfully install the different Hive components (metastore, HiveServer2, and WebHCat), and I can successfully start the metastore. I commented out the following lines in that file to bypass the error:

```xml
<osSpecifics>
  <osSpecific>
    <osFamily>any</osFamily>
    <packages>
      <package>
        <name>hive</name>
      </package>
      <package>
        <name>hive-hcatalog</name>
      </package>
      <!--
      <package>
        <name>webhcat-tar-hive</name>
      </package>
      <package>
        <name>webhcat-tar-pig</name>
      </package>
      <package>
        <name>mysql-connector-java</name>
        <skipUpgrade>true</skipUpgrade>
        <condition>should_install_mysql_connector</condition>
      </package>
      -->
    </packages>
  </osSpecific>
  <osSpecific>
    <osFamily>amazon2015,redhat6,suse11,suse12</osFamily>
    <packages>
      <package>
        <name>mysql</name>
        <skipUpgrade>true</skipUpgrade>
      </package>
    </packages>
  </osSpecific>
</osSpecifics>
```

However, I get the following errors when trying to start HiveServer2 and WebHCat:

```
2021-04-27 13:18:33,875 - WARNING. Cannot copy pig tarball because file does not exist: /usr/bgtp/1.0/pig/pig.tar.gz . It is possible that this component is not installed on this host.
2021-04-27 13:18:33,877 - WARNING. Cannot copy hive tarball because file does not exist: /usr/bgtp/1.0/hive/hive.tar.gz . It is possible that this component is not installed on this host.
2021-04-27 13:18:33,878 - WARNING. Cannot copy sqoop tarball because file does not exist: /usr/bgtp/1.0/sqoop/sqoop.tar.gz . It is possible that this component is not installed on this host.
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server.py", line 161, in <module>
    HiveServer().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 375, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server.py", line 77, in start
    self.configure(env) # FOR SECURITY
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 120, in locking_configure
    original_configure(obj, *args, **kw)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server.py", line 51, in configure
    hive(name='hiveserver2')
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 269, in hive
    mode=0600)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/xml_config.py", line 66, in action_create
    encoding = self.resource.encoding
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 123, in action_create
    content = self._get_content()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 160, in _get_content
    return content()
  File "/usr/lib/python2.6/site-packages/resource_management/core/source.py", line 52, in __call__
    return self.get_content()
  File "/usr/lib/python2.6/site-packages/resource_management/core/source.py", line 144, in get_content
    rendered = self.template.render(self.context)
  File "/usr/lib/python2.6/site-packages/ambari_jinja2/environment.py", line 891, in render
    return self.environment.handle_exception(exc_info, True)
  File "<template>", line 2, in top-level template code
  File "/usr/lib/python2.6/site-packages/ambari_jinja2/filters.py", line 176, in do_dictsort
    return sorted(value.items(), key=sort_func)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 73, in __getattr__
    raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'hiveserver2-site' was not found in configurations dictionary!
```
04-27-2021
10:05 AM
@vidanimegh This is where I got the repos: https://bigtop.apache.org/download.html#releases I have not found any documentation on how to deploy a Hadoop cluster using Ambari from the Apache Bigtop package. All I found was that the newest version added an mpack, which makes it easier to deploy through Ambari. I have installed the mpack, but it only covers basic components such as HDFS, YARN, MapReduce, and ZooKeeper. I was able to add other components such as Tez and Sqoop successfully, but I am having issues with Hive.
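For reference, the mpack route I mentioned is registered with the standard Ambari CLI. A sketch; the tarball path is a placeholder, and the guard just avoids erroring on hosts where Ambari isn't installed:

```shell
install_mpack() {
  # Register a management pack with ambari-server, then restart the
  # server so the new stack definition appears. $1 is the mpack tarball.
  if command -v ambari-server >/dev/null 2>&1; then
    ambari-server install-mpack --mpack="$1" && ambari-server restart
  else
    echo "ambari-server not found on this host"
  fi
}

install_mpack /tmp/bigtop-ambari-mpack.tar.gz   # placeholder path
```

After the restart, the stack the mpack provides should be selectable in the Ambari cluster-install wizard.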