Member since: 03-25-2016
Posts: 27
Kudos Received: 7
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1828 | 04-21-2016 08:42 PM |
| | 3153 | 03-28-2016 09:29 PM |
02-09-2017
11:00 PM
Last year I set up an HDP 2.4 cluster with four nodes and installed HDF (NiFi). At that time I simply put NiFi on node1 of the HDP cluster, installing it on the same server as Ambari. Everything worked fine and it was simple. I have now installed HDP 2.5 on a new four-node cluster, and I have read that you can no longer install HDF on the same cluster as HDP: the HDF install has its own instance of Ambari and requires a separate cluster. Here are my questions: (1) Must the HDF instance be on a "cluster", or can I simply have a single server for the HDF instance? I am not certain what has changed from a year ago that would require a full cluster for HDF. (2) With the new server/cluster for HDF, does that instance of NiFi then connect and write data back to the HDP cluster? Sorry if these are obvious questions, but I was unclear and am trying to gain an understanding of exactly what is needed. Thank you,
-Marc
Labels: Apache NiFi
04-21-2016
08:42 PM
1 Kudo
The problem was the version of the MySQL connector. I was using 5.1.38; I tried the dev release 6.0.2; I tried the default for RHEL 6.7 (5.1.17); and I finally ended up trying 5.1.35 specifically. 5.1.35 was the magic version.
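For anyone hitting the same thing, here is a minimal sketch of the swap, assuming an Ambari-managed HDP 2.3 layout where the metastore loads the connector from its lib directory (the paths are assumptions on my part, adjust for your install):

# Check which Connector/J jar the metastore currently loads
# (path assumes an HDP 2.3 layout).
$ ls -l /usr/hdp/current/hive-metastore/lib/mysql-connector-java*.jar
# Drop in the known-good 5.1.35 jar, move the problematic one aside,
# then restart Hive Metastore from Ambari so it gets picked up.
$ cp mysql-connector-java-5.1.35-bin.jar /usr/hdp/current/hive-metastore/lib/
$ mv /usr/hdp/current/hive-metastore/lib/mysql-connector-java-5.1.38-bin.jar /tmp/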
04-21-2016
07:27 PM
Interestingly enough, after posting this I turned to the MySQL connector. However, I was already using mysql-connector-java-5.1.38-bin.jar.
04-21-2016
06:34 PM
I attempted a simple "drop table q_test" and it took nearly 5 minutes to return the error pasted below. Any thoughts or suggestions? I am running HDP-2.3.4.0-3485 with MySQL 5.6. Attempting this from the Hive View hoses the Hive View to the point that it will not show databases and throws errors. Here is the output from the command line:
$ hive -e "drop table quotes_tst"
WARNING: Use "yarn jar" to launch YARN applications.
Logging initialized using configuration in file:/etc/hive/2.3.4.0-3485/0/hive-log4j.properties
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDODataStoreException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:275)
at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:1005)
at org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:937)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114)
at com.sun.proxy.$Proxy2.getTable(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_core(HiveMetaStore.java:1804)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1776)
at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
at com.sun.proxy.$Proxy4.get_table(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table.getResult(ThriftHiveMetastore.java:9330)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table.getResult(ThriftHiveMetastore.java:9314)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
NestedThrowablesStackTrace:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1
at sun.reflect.GeneratedConstructorAccessor32.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
at com.mysql.jdbc.Util.getInstance(Util.java:386)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1052)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3597)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3529)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1990)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2151)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2619)
at com.mysql.jdbc.StatementImpl.executeSimpleNonQuery(StatementImpl.java:1606)
at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1503)
at com.mysql.jdbc.ConnectionImpl.getTransactionIsolation(ConnectionImpl.java:3173)
at com.jolbox.bonecp.ConnectionHandle.getTransactionIsolation(ConnectionHandle.java:825)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:444)
at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:378)
at org.datanucleus.store.connection.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:328)
at org.datanucleus.store.connection.AbstractConnectionFactory.getConnection(AbstractConnectionFactory.java:94)
at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:430)
at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:396)
at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:621)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1786)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:266)
at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:1005)
at org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:937)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114)
at com.sun.proxy.$Proxy2.getTable(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_core(HiveMetaStore.java:1804)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1776)
at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
at com.sun.proxy.$Proxy4.get_table(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table.getResult(ThriftHiveMetastore.java:9330)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table.getResult(ThriftHiveMetastore.java:9314)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
)
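For context on the error above: the statement the driver is issuing (visible in the error text) uses the old "SET OPTION" syntax, which MySQL 5.6 removed, so the failure can be reproduced directly against the database. This is illustrative; the user name is a placeholder:

$ mysql -u hive -p -e "SET OPTION SQL_SELECT_LIMIT=DEFAULT"
ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_SELECT_LIMIT=DEFAULT' at line 1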
Labels: Apache Hive
04-21-2016
05:12 PM
Did you ever find an underlying root cause for this issue? I am having the same problem and have not found a solution yet, other than restarting all services. But the error is hitting my users multiple times per day, to the point that I am restarting services every 30 minutes.
04-07-2016
10:59 PM
"Hive expects a SASL Wrapper from the client." ^^^^ That was the ticket. Once I read into this a bit and configured accordingly, everything worked! Thank you again!
04-07-2016
10:13 PM
I have a default Ambari 2.3 install, meaning I have not changed much from the defaults.
From the Services->Hive page I can tell HiveServer2 is running and on which node it is running. Also, under Services->Hive->Configs->Advanced I can see that my security authorization is none and authentication is none.
When I attempt to connect Tableau to HiveServer2 on the proper node, on port 10000 with no authentication, it does not connect.
I can tell the Tableau workstation is attempting to connect to HiveServer2 on port 10000 via 'netstat -peant'.
When I look in the HiveServer2 logs, as soon as I attempt to connect from Tableau I get this error:
----
2016-04-07 16:02:52,205 ERROR [HiveServer2-Handler-Pool: Thread-142]: server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Invalid status -128
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Invalid status -128
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:184)
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 4 more
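For completeness, the raw TCP path can be ruled out separately from the protocol handshake (the host name is a placeholder for the HiveServer2 node):

# From the Tableau workstation: confirms the port is reachable, matching
# what netstat showed; the failure above is then at the protocol level.
$ nc -vz node1 10000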
Labels: Apache Hive
03-28-2016
09:29 PM
2 Kudos
It appears this was a bug filed with MySQL pertaining to the JDBC driver version I was using. I was originally using mysql-connector-java-5.1.17.jar; I have since upgraded to mysql-connector-java-5.1.38-bin.jar, and the issue has gone away.
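A sketch of registering the upgraded driver so Ambari itself uses it (the ambari-server flags are standard; the jar path is an assumption):

# Point Ambari at the new Connector/J jar and restart.
$ ambari-server setup --jdbc-db=mysql \
      --jdbc-driver=/usr/share/java/mysql-connector-java-5.1.38-bin.jar
$ ambari-server restart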
03-28-2016
07:48 PM
I have done some more reading and found that these queries are the result of the JDBC driver not caching the server configuration (not caching is the default). If I were to add "cacheServerConfiguration=true" to the connection string, these queries would go away. However, I am guessing that connection string is somewhere in the Ambari code, and there might be another fix for this. Currently, asking the driver to connect with its default configuration results in those statements being issued for every connection. I am still uncertain of the solution/fix, but I wanted to share what I have learned as I make progress.
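For illustration only, this is where the Connector/J property would go if the URL were editable; that Ambari reads its JDBC URL from server.jdbc.url in ambari.properties is an assumption on my part:

# Hypothetical: append the property to Ambari's JDBC URL, then restart.
# In /etc/ambari-server/conf/ambari.properties (property name assumed):
#   server.jdbc.url=jdbc:mysql://dbhost:3306/ambari?cacheServerConfiguration=true
$ ambari-server restart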
03-28-2016
06:45 PM
So, I am rather certain the problem somehow pertains to Ambari or my setup/config. To give you an example, within MySQL I turned on SQL logging for 30 seconds to catch a snapshot of what was happening. In those 30 seconds, 35,650 queries were run against the DB from Ambari. Nearly all 35,650 queries look very similar to this:
Connect ambari@hdptsrv1.test.rb.net on ambari
Query /* mysql-connector-java-5.1.17-SNAPSHOT ( Revision: ${bzr.revision-id} ) */SHOW VARIABLES WHERE Variable_name ='language' OR Variable_name = 'net_write_timeout' OR Variable_name = 'interactive_timeout' OR Variable_name = 'wait_timeout' OR Variable_name = 'character_set_client' OR Variable_name = 'character_set_connection' OR Variable_name = 'character_set' OR Variable_name = 'character_set_server' OR Variable_name = 'tx_isolation' OR Variable_name = 'transaction_isolation' OR Variable_name = 'character_set_results' OR Variable_name = 'timezone' OR Variable_name = 'time_zone' OR Variable_name = 'system_time_zone' OR Variable_name = 'lower_case_table_names' OR Variable_name = 'max_allowed_packet' OR Variable_name = 'net_buffer_length' OR Variable_name = 'sql_mode' OR Variable_name = 'query_cache_type' OR Variable_name = 'query_cache_size' OR Variable_name = 'init_connect'
Query /* mysql-connector-java-5.1.17-SNAPSHOT ( Revision: ${bzr.revision-id} ) */SELECT @@session.auto_increment_increment
Query SHOW COLLATION
Query SET NAMES latin1
Query SET character_set_results = NULL
Query SET autocommit=1
Query SET sql_mode='NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES'
Query SELECT @@session.tx_isolation
That works out to roughly 1,200 queries per second being run by Ambari.
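The 30-second snapshot itself is simple to take; here is a sketch, assuming shell access to the MySQL host and a user with the SUPER privilege:

# Point the general log at a scratch file and capture 30 seconds of traffic.
$ mysql -u root -p -e "SET GLOBAL general_log_file='/tmp/mysql-30s.log'; SET GLOBAL general_log='ON';"
$ sleep 30
$ mysql -u root -p -e "SET GLOBAL general_log='OFF';"
# Rough count of statements captured in the window:
$ grep -c 'Query' /tmp/mysql-30s.log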