Hive having trouble with metastore server

Contributor

Can anyone shed any light on this? I'm starting up a cluster using Cloudbreak/Ambari with a blueprint and an external metastore DB, but when I try to run very simple operations via beeline (e.g. "show schemas;") I get an error every second try:

0: jdbc:hive2://x.x.x.x:10000> show schemas;
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. com/mysql/jdbc/SQLError (state=08S01,code=1)
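To make the "every second try" behaviour explicit, this is roughly what a session looks like (the host below is a placeholder, not our real address):

# first run of the statement succeeds
beeline -u "jdbc:hive2://<hiveserver2-host>:10000" -e "show schemas;"
# running the exact same statement again fails with the DDLTask / com/mysql/jdbc/SQLError error above
beeline -u "jdbc:hive2://<hiveserver2-host>:10000" -e "show schemas;"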

See the HiveServer2 logs below. If I restart HiveServer2 the error just goes away, but I can reproduce it reliably. Any ideas?

2016-05-13 20:36:51,782 INFO  [HiveServer2-Background-Pool: Thread-268]: metastore.ObjectStore (ObjectStore.java:initialize(294)) - ObjectStore, initialize called
2016-05-13 20:36:51,786 ERROR [HiveServer2-Background-Pool: Thread-268]: metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(159)) - java.lang.NoClassDefFoundError: com/mysql/jdbc/SQLError
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3575)
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3529)
        at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1990)
        at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2151)
        at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2619)
        at com.mysql.jdbc.StatementImpl.executeSimpleNonQuery(StatementImpl.java:1606)
        at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1503)
        at com.mysql.jdbc.ConnectionImpl.getTransactionIsolation(ConnectionImpl.java:3173)
        at com.jolbox.bonecp.ConnectionHandle.getTransactionIsolation(ConnectionHandle.java:825)
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:444)
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:378)
        at org.datanucleus.store.connection.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:328)
        at org.datanucleus.store.connection.AbstractConnectionFactory.getConnection(AbstractConnectionFactory.java:94)
        at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:430)
        at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:396)
        at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:621)
        at org.datanucleus.store.query.Query.executeQuery(Query.java:1786)
        at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
        at org.datanucleus.store.query.Query.execute(Query.java:1654)
        at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:221)
        at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.ensureDbInit(MetaStoreDirectSql.java:192)
        at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:138)
        at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:300)
        at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:263)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
        at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
        at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:603)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:581)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_all_databases(HiveMetaStore.java:1188)
        at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
        at com.sun.proxy.$Proxy11.get_all_databases(Unknown Source)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllDatabases(HiveMetaStoreClient.java:1037)
        at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
        at com.sun.proxy.$Proxy12.getAllDatabases(Unknown Source)
        at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1237)
        at org.apache.hadoop.hive.ql.exec.DDLTask.showDatabases(DDLTask.java:2262)
        at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:390)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1728)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1485)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1262)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1126)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1121)
        at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:154)
        at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71)
        at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
        at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


2016-05-13 20:36:51,787 ERROR [HiveServer2-Background-Pool: Thread-268]: exec.DDLTask (DDLTask.java:failed(525)) - java.lang.NoClassDefFoundError: com/mysql/jdbc/SQLError
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3575)
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3529)
        at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1990)
        at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2151)
        at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2619)
        at com.mysql.jdbc.StatementImpl.executeSimpleNonQuery(StatementImpl.java:1606)
        at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1503)
        at com.mysql.jdbc.ConnectionImpl.getTransactionIsolation(ConnectionImpl.java:3173)
        at com.jolbox.bonecp.ConnectionHandle.getTransactionIsolation(ConnectionHandle.java:825)
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:444)
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:378)
        at org.datanucleus.store.connection.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:328)
        at org.datanucleus.store.connection.AbstractConnectionFactory.getConnection(AbstractConnectionFactory.java:94)
        at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:430)
        at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:396)
        at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:621)
        at org.datanucleus.store.query.Query.executeQuery(Query.java:1786)
        at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
        at org.datanucleus.store.query.Query.execute(Query.java:1654)
        at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:221)
        at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.ensureDbInit(MetaStoreDirectSql.java:192)
        at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:138)
        at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:300)
        at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:263)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
        at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
        at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:603)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:581)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_all_databases(HiveMetaStore.java:1188)
        at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
        at com.sun.proxy.$Proxy11.get_all_databases(Unknown Source)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllDatabases(HiveMetaStoreClient.java:1037)
        at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
        at com.sun.proxy.$Proxy12.getAllDatabases(Unknown Source)
        at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1237)
        at org.apache.hadoop.hive.ql.exec.DDLTask.showDatabases(DDLTask.java:2262)
        at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:390)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1728)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1485)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1262)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1126)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1121)
        at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:154)
        at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71)
        at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
        at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


2016-05-13 20:36:51,787 INFO  [HiveServer2-Background-Pool: Thread-268]: hooks.ATSHook (ATSHook.java:<init>(90)) - Created ATS Hook
2016-05-13 20:36:51,787 INFO  [HiveServer2-Background-Pool: Thread-268]: log.PerfLogger (PerfLogger.java:PerfLogBegin(135)) - <PERFLOG method=FailureHook.org.apache.hadoop.hive.ql.hooks.ATSHook from=org.apache.hadoop.hive.ql.Driver>
2016-05-13 20:36:51,787 INFO  [HiveServer2-Background-Pool: Thread-268]: log.PerfLogger (PerfLogger.java:PerfLogEnd(162)) - </PERFLOG method=FailureHook.org.apache.hadoop.hive.ql.hooks.ATSHook start=1463171811787 end=1463171811787 duration=0 from=org.apache.hadoop.hive.ql.Driver>
2016-05-13 20:36:51,787 ERROR [HiveServer2-Background-Pool: Thread-268]: ql.Driver (SessionState.java:printError(962)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. com/mysql/jdbc/SQLError
2016-05-13 20:36:51,788 INFO  [HiveServer2-Background-Pool: Thread-268]: ql.Driver (Driver.java:execute(1629)) - Resetting the caller context to 
1 ACCEPTED SOLUTION

Rising Star

Hi guys. This is due to an old MySQL JDBC driver - as stated above, you can either change the driver yourself or wait until the next release, where the driver will be updated. Sorry for the inconvenience.


10 REPLIES

Guru

There seems to be an underlying error, but your JDBC driver is preventing the exact error from being shown. Check whether you can update mysql-connector-java.jar in /usr/hdp/current/hive-server2 with the correct one for your version of the MySQL server.
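A quick way to see which Connector/J version is actually bundled (the path assumes the standard HDP layout; adjust if your jar lives elsewhere):

# print the driver version recorded in the jar's manifest
unzip -p /usr/hdp/current/hive-server2/lib/mysql-connector-java.jar META-INF/MANIFEST.MF | grep -i version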

Contributor

Is there a "supported" way to do that with Cloudbreak? I think it's baked into the Docker images.

Guru

My suggestion is to first do it outside Cloudbreak, to see whether updating the MySQL JAR fixes the issue.

Contributor

I can't recreate the issue outside of Cloudbreak.

Super Guru
@Liam MacInnes

It is probably a JAR conflict, or a JAR file is missing - maybe you have two different versions of the mysql-connector-java.jar file. Please search inside the /usr/hdp/current/hive* directories for the MySQL connector JAR.
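For example, something along these lines should reveal any duplicate or stray copies and their versions (illustrative only):

# list every MySQL connector jar under the Hive directories (follow the /usr/hdp/current symlinks)
find -L /usr/hdp/current/hive* -name 'mysql-connector*.jar' 2>/dev/null
# and show the version bundled in each copy
for j in $(find -L /usr/hdp/current/hive* -name 'mysql-connector*.jar' 2>/dev/null); do echo "$j"; unzip -p "$j" META-INF/MANIFEST.MF | grep Bundle-Version; done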

Rising Star

@Liam MacInnes Are you using RDS on AWS? If so, can you try it now - the drivers are updated on the hosted version. If you are using CBD then you should change this in your profile: export DOCKER_TAG_CLOUDBREAK=1.2.6-rc.3

And then restart cbd with:

cbd kill && cbd regenerate && cbd start
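Put together, assuming the standard CBD deployment directory with its Profile file, that would look roughly like:

# add the image tag to the CBD Profile and recreate the containers
echo 'export DOCKER_TAG_CLOUDBREAK=1.2.6-rc.3' >> Profile
cbd kill && cbd regenerate && cbd start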

Contributor

I am using CBD.

I just tried changing to "export DOCKER_TAG_CLOUDBREAK=1.2.6-rc.3" in my profile, and it appears to still be using the exact same 5.1.17 driver:

cat ./META-INF/MANIFEST.MF
Manifest-Version: 1.0
Ant-Version: Apache Ant 1.7.1
Created-By: 4.4.6 20120305 (Red Hat 4.4.6-4) (Free Software Foundation
 , Inc.)
Built-By: mockbuild
Bundle-Vendor: Sun Microsystems Inc.
Bundle-Classpath: .
Bundle-Version: 5.1.17
Bundle-Name: Sun Microsystems' JDBC Driver for MySQL
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.mysql.jdbc
Export-Package: com.mysql.jdbc;version="5.1.17";uses:="com.mysql.jdbc.
 log,javax.naming,javax.net.ssl,javax.xml.transform,org.xml.sax",com.m
 ysql.jdbc.jdbc2.optional;version="5.1.17";uses:="com.mysql.jdbc,com.m
 ysql.jdbc.log,javax.naming,javax.sql,javax.transaction.xa",com.mysql.
 jdbc.log;version="5.1.17",com.mysql.jdbc.profiler;version="5.1.17";us
 es:="com.mysql.jdbc",com.mysql.jdbc.util;version="5.1.17";uses:="com.
 mysql.jdbc.log",com.mysql.jdbc.exceptions;version="5.1.17",com.mysql.
 jdbc.exceptions.jdbc4;version="5.1.17";uses:="com.mysql.jdbc",com.mys
 ql.jdbc.interceptors;version="5.1.17";uses:="com.mysql.jdbc",com.mysq
 l.jdbc.integration.c3p0;version="5.1.17",com.mysql.jdbc.integration.j
 boss;version="5.1.17",com.mysql.jdbc.configs;version="5.1.17",org.gjt
 .mm.mysql;version="5.1.17"
Import-Package: javax.net,javax.net.ssl;version="[1.0.1, 2.0.0)";resol
 ution:=optional,javax.xml.parsers, javax.xml.stream,javax.xml.transfo
 rm,javax.xml.transform.dom,javax.xml.transform.sax,javax.xml.transfor
 m.stax,javax.xml.transform.stream,org.w3c.dom,org.xml.sax,org.xml.sax
 .helpers;resolution:=optional,javax.naming,javax.naming.spi,javax.sql
 ,javax.transaction.xa;version="[1.0.1, 2.0.0)";resolution:=optional,c
 om.mchange.v2.c3p0;version="[0.9.1.2, 1.0.0)";resolution:=optional,or
 g.jboss.resource.adapter.jdbc;resolution:=optional,org.jboss.resource
 .adapter.jdbc.vendor;resolution:=optional


Name: common
Specification-Title: JDBC
Specification-Version: 4.0
Specification-Vendor: Sun Microsystems Inc.
Implementation-Title: MySQL Connector/J
Implementation-Version: 5.1.17-SNAPSHOT
Implementation-Vendor-Id: com.mysql
Implementation-Vendor: Oracle



Contributor

Confirmed this does not resolve the issue.

New Contributor

We had a very similar issue with a similar deployment architecture. We run HDP clusters on both Azure and AWS with Cloudbreak. We use an external RDS MySQL 5.6.27 DB on our AWS HDP clusters, and had issues with dropping tables in Hive. We did try export DOCKER_TAG_CLOUDBREAK=1.2.6-rc.3 and it didn't help.

Our fix to this issue (as of 6/10/16):

1. SSH into the host (in our case, the Docker container within the host) that runs the Hive Metastore - this is shown in Ambari on the Hive tab.

2. While on the host, cd to this path: /usr/hdp/current/hive-server2/lib

3. If you're on the right host, in the right Docker container, you should find a jar there as follows (as of HDP 3.4.2 at least):

-rw-r--r-- 1 root root 819803 May 31 15:08 mysql-connector-java.jar

If you check the manifest on that jar, you'll notice it is the 5.1.17 MySQL Driver.

4. We renamed that jar to _old.

5. Download MySQL driver version 5.1.35 (we tried the most recent driver and it didn't work, but this one does).

6. Take the jar from that download, drop it into the directory /usr/hdp/current/hive-server2/lib, and rename it to the name Hive expects (mysql-connector-java.jar) - see the shell sketch after this list.

7. Bounce all Hive components in Ambari and then everything worked well for us.

8. Figure out a way to deploy this custom jar as part of the Cloudbreak deployment mechanism (will post back here when we get this figured out, if I remember).
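Condensed into a shell sketch of steps 2-7 (the download URL follows the usual MySQL Connector/J archive pattern but is written from memory - verify it and the paths against your own environment before running anything):

# 2-4: back up the bundled 5.1.17 driver on the Hive Metastore / HiveServer2 host
cd /usr/hdp/current/hive-server2/lib
mv mysql-connector-java.jar mysql-connector-java.jar_old

# 5: fetch Connector/J 5.1.35 (assumed URL - check dev.mysql.com for the actual archive link)
wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.35.tar.gz
tar xzf mysql-connector-java-5.1.35.tar.gz

# 6: put the new jar in place under the name Hive expects
cp mysql-connector-java-5.1.35/mysql-connector-java-5.1.35-bin.jar mysql-connector-java.jar

# 7: restart all Hive components from Ambari, then retest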

For anyone searching, this is how the error manifested for us:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: org.apache.thrift.transport.TTransportException java.net.SocketTimeoutException: Read timed out)

and

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDODataStoreException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1