06-10-2016
12:24 PM
We had a very similar issue with a similar deployment architecture. We run HDP clusters on both Azure and AWS with Cloudbreak, and we use an external RDS MySQL 5.6.27 database on our AWS HDP clusters. We had issues with dropping tables in Hive. We tried export DOCKER_TAG_CLOUDBREAK=1.2.6-rc.3 and it didn't help.

Our fix to this issue (as of 6/10/16):

1. SSH into the host (in our case, the Docker container within the host) that runs the Hive Metastore - this is shown in Ambari on the Hive tab.

2. While on the host, cd to this path: /usr/hdp/current/hive-server2/lib

3. If you're on the right host, in the right Docker container, you should find a jar there as follows (as of HDP 2.4.2 at least):

   -rw-r--r-- 1 root root 819803 May 31 15:08 mysql-connector-java.jar

   If you check the manifest of that jar, you'll notice it is the 5.1.17 MySQL driver.

4. Rename that jar (we appended _old to the name).

5. Download MySQL Connector/J version 5.1.35 (we tried the most recent driver and it didn't work, but this version does).

6. Take the jar from that download, place it in /usr/hdp/current/hive-server2/lib, and rename it to the expected name (mysql-connector-java.jar).

7. Bounce all Hive components in Ambari; after that, everything worked well for us.

8. Figure out a way to deploy this custom jar as part of the Cloudbreak deployment mechanism (will post back here when we get this figured out, if I remember).

For search purposes, this is how the error manifested for us:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: org.apache.thrift.transport.TTransportException java.net.SocketTimeoutException: Read timed out)

and

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDODataStoreException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1
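The jar-swap portion of the fix (steps 2 through 6) can be sketched roughly as the shell session below. This is a sketch under assumptions, not a definitive runbook: the download URL and the layout of the Connector/J 5.1.35 archive are assumptions on my part and may have moved - verify both against your own environment before running anything.

```shell
# On the Hive Metastore host (for Cloudbreak deployments, inside the
# Docker container running the metastore - find the host on the Hive
# tab in Ambari).
cd /usr/hdp/current/hive-server2/lib

# Confirm which driver version is currently bundled by reading the
# jar's manifest - on our clusters this reported 5.1.17.
unzip -p mysql-connector-java.jar META-INF/MANIFEST.MF | grep -i version

# Set the bundled driver aside rather than deleting it.
mv mysql-connector-java.jar mysql-connector-java.jar_old

# Fetch Connector/J 5.1.35 and install it under the name Hive expects.
# NOTE: this URL and the -bin.jar path inside the tarball are
# assumptions; MySQL's archive links change over time.
curl -LO https://downloads.mysql.com/archives/get/p/3/file/mysql-connector-java-5.1.35.tar.gz
tar -xzf mysql-connector-java-5.1.35.tar.gz
cp mysql-connector-java-5.1.35/mysql-connector-java-5.1.35-bin.jar mysql-connector-java.jar

# Finally, restart all Hive components from Ambari (step 7).
```

Nothing here takes effect until the Hive components are bounced, since the metastore only loads the JDBC driver at startup.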