Member since: 04-03-2019 | Posts: 962 | Kudos Received: 1743 | Solutions: 146
12-11-2016 10:04 AM (2 Kudos)
My environment details: HDP 2.4.2.0, Ambari 2.2.2.0, MySQL 5.1.34.

The Hive shell was taking a long time to load, and when I checked the Hive Metastore logs I found the exception below:

com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1

Complete stack trace:

NestedThrowablesStackTrace:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.Util.getInstance(Util.java:386)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1052)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3597)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3529)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1990)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2151)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2619)
at com.mysql.jdbc.ConnectionImpl.unsetMaxRows(ConnectionImpl.java:5421)
at com.mysql.jdbc.StatementImpl.realClose(StatementImpl.java:2441)
at com.mysql.jdbc.PreparedStatement.realClose(PreparedStatement.java:3079)
at com.mysql.jdbc.PreparedStatement.close(PreparedStatement.java:1156)
at com.jolbox.bonecp.StatementHandle.close(StatementHandle.java:138)
at org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.close(ParamLoggingPreparedStatement.java:318)
at org.datanucleus.store.rdbms.SQLController.closeStatement(SQLController.java:568)
at org.datanucleus.store.rdbms.query.SQLQuery.performExecute(SQLQuery.java:357)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1786)
at org.datanucleus.store.query.AbstractSQLQuery.executeWithArray(AbstractSQLQuery.java:339)
at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:312)
at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.executeWithArray(MetaStoreDirectSql.java:1628)
at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPartitionsViaSqlFilterInternal(MetaStoreDirectSql.java:466)
at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPartitions(MetaStoreDirectSql.java:393)
at org.apache.hadoop.hive.metastore.ObjectStore$2.getSqlResult(ObjectStore.java:1738)
at org.apache.hadoop.hive.metastore.ObjectStore$2.getSqlResult(ObjectStore.java:1734)
at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2394)
at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsInternal(ObjectStore.java:1734)
at org.apache.hadoop.hive.metastore.ObjectStore.getPartitions(ObjectStore.java:1728)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114)
at com.sun.proxy.$Proxy10.getPartitions(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.dropPartitionsAndGetLocations(HiveMetaStore.java:1700)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1539)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1744)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
at com.sun.proxy.$Proxy11.drop_table_with_environment_context(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.drop_table_with_environment_context(HiveMetaStoreClient.java:2062)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.drop_table_with_environment_context(SessionHiveMetaStoreClient.java:118)

Root cause:
1. The MySQL server was patched (upgraded to 5.1.38), but the mysql-connector-java JAR was not upgraded.
2. Hive was configured to use MySQL as its Metastore DB.
3. More details: http://bugs.mysql.com/bug.php?id=66659

Resolution:
The release notes on the MySQL community site show that this bug was fixed in mysql-connector-java 5.1.22; see the second-to-last point at https://dev.mysql.com/doc/relnotes/connector-j/5.1/en/news-5-1-22.html. To fix this, I had to upgrade mysql-connector-java from 5.1.17 to 5.1.22 (the connector JAR can be downloaded from the MySQL site).

Where to place the upgraded connector?

On the Ambari server (as the root user):
mkdir ~/backups
mv /var/lib/ambari-server/resources/mysql* ~/backups/
cp $some_location/mysql-connector-java-5.1.22.jar /var/lib/ambari-server/resources/
ln -s /var/lib/ambari-server/resources/mysql-connector-java-5.1.22.jar /var/lib/ambari-server/resources/mysql-jdbc-driver.jar
mv /usr/share/java/mysql* ~/backups/
cp /var/lib/ambari-server/resources/mysql-connector-java-5.1.22.jar /usr/share/java/
ln -s /usr/share/java/mysql-connector-java-5.1.22.jar /usr/share/java/mysql-connector-java.jar
# Run the next command carefully (don't blindly copy and paste!)
rm -rf /var/lib/ambari-agent/tmp/mysql-*

On the Hive Metastore host (as the root user):
mkdir ~/backups
mv /usr/share/java/mysql* ~/backups/
cp $local_path/mysql-connector-java-5.1.22.jar /usr/share/java/
ln -s /usr/share/java/mysql-connector-java-5.1.22.jar /usr/share/java/mysql-connector-java.jar
# Run the next command carefully (don't blindly copy and paste!)
rm -rf /var/lib/ambari-agent/tmp/mysql-*

On the HiveServer2 host (as the root user):
mkdir ~/backups
mv /usr/share/java/mysql* ~/backups/
cp $local_path/mysql-connector-java-5.1.22.jar /usr/share/java/
ln -s /usr/share/java/mysql-connector-java-5.1.22.jar /usr/share/java/mysql-connector-java.jar
# Run the next command carefully (don't blindly copy and paste!)
rm -rf /var/lib/ambari-agent/tmp/mysql-*

Restart Hive Metastore and HiveServer2 via Ambari and this should fix your issue! 🙂 (A quick way to double-check the connector version is sketched below.) Please comment if you have any question or feedback. Happy Hadooping!! 🙂
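A minimal sanity check before the restart (not part of the original steps; it assumes the default /usr/share/java paths used above) to confirm which Connector/J version the symlink now resolves to:
# Confirm the symlink points at the new connector JAR
ls -l /usr/share/java/mysql-connector-java.jar
# Print the version recorded in the JAR manifest (requires unzip)
unzip -p /usr/share/java/mysql-connector-java.jar META-INF/MANIFEST.MF | grep -i version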
12-10-2016 08:55 AM (5 Kudos)
This has been observed on Ambari 1.7. I know Ambari 1.7 is very old now, but if you are still running it and hit the same issue, this post can save you a lot of time! 🙂

Ambari 1.7 uses Ganglia for reporting metrics. Our issue was that we were unable to get service metrics for the YARN service. The exact scenario:
1. In the Ganglia UI, I was able to see graphs for the YARN metrics.
2. In the Ambari UI, I was able to see metrics for other services such as HDFS.
3. The issue was observed on one of our customers' clusters.
4. I set up the same cluster on my local system, but I was not able to reproduce the issue.
5. The only difference between the customer's cluster and mine was that the customer had ResourceManager HA, while mine was a single-node instance.
6. There was no error while fetching the metrics; the graphs simply showed no data points.

How to troubleshoot:
1. Click on any of the metrics, say 'NodeManagers' --> this opens the graph in a bigger window.
2. Open the developer tools in Chrome/Firefox and inspect the network activity.
3. Note the REST call from which Ambari is trying to fetch the metrics.
4. Now fail over the ResourceManager and repeat steps 1-3.
5. Same REST call? No difference.
6. If you flip the RMs back, the graphs start populating data.

Root cause:
If you look at the hadoop-metrics2.properties file, it has only one RM host (the initial rm1) hardcoded for resourcemanager.sink.ganglia.servers:
resourcemanager.sink.ganglia.servers=rm1.crazyadmins.com:8664

Workaround:
Make the RM host that appears in the REST call from troubleshooting step 4 the active RM.

Permanent fix:
Edit /etc/hadoop/conf/hadoop-metrics2.properties and add the second RM host, e.g.:
resourcemanager.sink.ganglia.servers=rm1.crazyadmins.com:8664,rm2.crazyadmins.com:8664
Note - this file is not managed by Ambari 1.7, so feel free to modify it on both RM hosts and restart the ResourceManagers via Ambari after the change. (A small command-level sketch is included below.)

Hope you enjoyed this article! Please comment if you have any questions. Happy Hadooping!! 🙂
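A minimal command-level sketch of the checks and the permanent fix (the rm1/rm2 HA service IDs and the crazyadmins.com hostnames are the examples from above; adjust them for your cluster):
# Check which ResourceManager is currently active
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
# On both RM hosts, make sure both RMs are listed as Ganglia sinks
grep resourcemanager.sink.ganglia.servers /etc/hadoop/conf/hadoop-metrics2.properties
# expected: resourcemanager.sink.ganglia.servers=rm1.crazyadmins.com:8664,rm2.crazyadmins.com:8664
# Then restart both ResourceManagers via Ambari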
12-09-2016 10:44 PM (4 Kudos)
@Jonas Bissmark Whenever you start a service in a Kerberos-enabled cluster, say the NameNode, Ambari first initiates the Kerberos ticket; once the service is started, the service itself has logic to re-login and obtain a fresh ticket. @Chris Nauroth explains on Stack Overflow how Hadoop implements this automatic re-login mechanism directly inside the RPC client layer (please read his excellent answer at http://stackoverflow.com/questions/34616676/should-i-call-ugi-checktgtandreloginfromkeytab-before-every-action-on-hadoop when you get a chance). The code for this is visible in the RPC Client#handleSaslConnectionFailure method:

// try re-login
if (UserGroupInformation.isLoginKeytabBased()) {
UserGroupInformation.getLoginUser().reloginFromKeytab();
} else if (UserGroupInformation.isLoginTicketBased()) {
UserGroupInformation.getLoginUser().reloginFromTicketCache();
}

This explains your observation that "Strangely enough there are never any service related errors in Ambari."

Regarding "The effect of this is e.g. that I can't list directories in HDFS as the Oozie user (in the shell), it fails with the following error message": this is expected, because the ticket in your shell's ticket cache expires after 24 hours, and the automatic re-login only covers the services themselves. (A kinit example for the shell case is sketched below.) Hope this answers your question! 🙂
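For the shell case, the usual remedy is simply to obtain a fresh ticket before running HDFS commands as the oozie user. A minimal sketch (the keytab path and principal below are typical HDP defaults, not taken from your cluster; check them with klist -kt):
# List the principals stored in the Oozie service keytab
klist -kt /etc/security/keytabs/oozie.service.keytab
# Get a fresh ticket for that principal (replace with the principal shown above)
kinit -kt /etc/security/keytabs/oozie.service.keytab oozie/$(hostname -f)@EXAMPLE.COM
# HDFS commands now work again for this shell session
hdfs dfs -ls /user/oozie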
12-09-2016 09:32 PM (5 Kudos)
@justlearning
You can use the standard Apache Oozie examples and modify them to suit your requirements; this is the easiest way to get started writing Oozie workflows. The bundle contains an example workflow.xml for each supported action type. On an HDP cluster you can find the examples at the location below (provided the Oozie client is installed):
/usr/hdp/current/oozie-client/doc/oozie-examples.tar.gz
A quick way to unpack and browse them is sketched below. Hope this information helps!
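A minimal sketch for unpacking and browsing the examples (the tarball path is the one quoted above; the examples/apps layout is the standard Oozie examples layout):
# Unpack the bundled examples into your home directory
tar -xzf /usr/hdp/current/oozie-client/doc/oozie-examples.tar.gz -C ~/
# Each supported action type has its own app directory with a sample workflow.xml
ls ~/examples/apps/
cat ~/examples/apps/map-reduce/workflow.xml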
12-09-2016 09:27 PM (7 Kudos)
@subash sharma You need to add the below properties to core-site.xml (Custom core-site section in Ambari):
hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*
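After saving the change in Ambari, HDFS needs to pick it up: a restart of HDFS works, or on Hadoop 2.x the proxyuser settings can usually be refreshed without a full restart. A minimal sketch (assumes you run it as the hdfs superuser):
# Refresh superuser/proxyuser settings on the NameNode without restarting HDFS
sudo -u hdfs hdfs dfsadmin -refreshSuperUserGroupsConfiguration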
12-09-2016 08:02 PM (3 Kudos)
@Dmitry Otblesk Log in to Ambari --> click the HDFS service --> in the top-right corner, click Start to start the stopped components. (A REST equivalent is sketched below.)
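If you prefer to script this instead of using the UI, the same thing can be done through Ambari's REST API. A rough sketch (admin:admin, c1, and ambari-host are placeholders, not values from this thread):
# Start all stopped HDFS components via the Ambari REST API
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Start HDFS via REST"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://ambari-host:8080/api/v1/clusters/c1/services/HDFS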
12-09-2016 08:01 PM (3 Kudos)
@ANSARI FAHEEM AHMED I have written a few blog posts on performance tuning. Please have a look at the articles below:
http://crazyadmins.com/tune-hadoop-cluster-to-get-maximum-performance-part-1/
http://crazyadmins.com/tune-hadoop-cluster-to-get-maximum-performance-part-2/
12-09-2016 07:56 PM (3 Kudos)
@Huahua Wei @aengineer has given a very good explanation of why the NameNode needs safemode and why you should not leave safemode forcefully unless it is really necessary. Say you have NameNodes A and B, A is currently active, and you need to restart A for some reason. You can always fail over to B before the restart; once A becomes standby, you can restart it without any downtime. The command to fail over is:
sudo -u hdfs hdfs haadmin -failover nn1 nn2
Note - this fails over from nn1 to nn2 (nn2 becomes active). A quick way to confirm the states is sketched below. Hope this information helps.
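A small sketch for confirming the NameNode HA states before and after the failover (nn1/nn2 are the same HA service IDs used in the command above):
# Check which NameNode is active and which is standby
sudo -u hdfs hdfs haadmin -getServiceState nn1
sudo -u hdfs hdfs haadmin -getServiceState nn2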
12-09-2016 07:43 PM
@Karan Alang - Can you please check whether the Ranger plugin is enabled for Hive? If so, please try granting the ambari-server user access to the default database.
12-09-2016 07:41 PM
@Sanaz Janbakhsh What error do you get in Ambari when you click Start for the NameNode?