Member since: 09-14-2015
Posts: 111
Kudos Received: 28
Solutions: 13
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1315 | 07-06-2017 08:16 PM
 | 5580 | 07-05-2017 04:57 PM
 | 3302 | 07-05-2017 04:52 PM
 | 3812 | 12-30-2016 09:29 PM
 | 1602 | 12-30-2016 09:14 PM
12-30-2016 09:29 PM
I figured out what was wrong. I had no user ID/password set up on Linux and Hadoop for the SquirreL SQL service user, and I was not passing a user ID/password when making the connection over JDBC. SquirreL does not complain if you omit the user ID and password; it will still let you connect to Hive with just the JDBC URL. Running "SELECT * FROM <table>" does not need to start a job, which is why SquirreL can run that query over JDBC alone without any issue. But a query like "SELECT * FROM <table> WHERE <condition>" needs to start a map/reduce job, which in turn needs a Linux and Hadoop account on the cluster. I created a service account for the SquirreL client on Linux and Hadoop, used those credentials in the SquirreL connection configuration, and everything worked as expected.
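The fix described above can be sketched in plain JDBC: pass the service account's credentials explicitly when opening the Hive connection, instead of leaving them blank as SquirreL silently allows. This is a minimal sketch, not the exact SquirreL configuration; the host name, database, user name, and table are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcWithCredentials {

    // Builds a HiveServer2 JDBC URL; host, port, and db are placeholders.
    static String hiveUrl(String host, int port, String db) {
        return "jdbc:hive2://" + host + ":" + port + "/" + db;
    }

    public static void main(String[] args) throws Exception {
        if (args.length < 3) {
            // No live cluster available: just show the URL that would be used.
            System.out.println(hiveUrl("hiveserver2.example.com", 10000, "default"));
            return;
        }
        String url = args[0], user = args[1], password = args[2];
        // Passing the service account's user/password is what lets queries
        // that spawn a Tez/MapReduce job (e.g. with a WHERE clause) run.
        try (Connection conn = DriverManager.getConnection(url, user, password);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT * FROM my_table WHERE col_id = 11500 LIMIT 5")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```

A simple "SELECT *" would still appear to work without credentials, because HiveServer2 can stream it directly; only job-spawning queries expose the missing account.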
12-30-2016 08:03 PM
@Sundara Palanki Can you provide more information? Perhaps some screenshots.
12-03-2016 02:21 AM
When I run a simple query with a WHERE clause from the hive CLI, it returns results. But the same query from the SquirreL client throws an error. If I remove the WHERE clause, it works in both SquirreL and Hive. Any idea? On the SquirreL SQL Client:
select * from my_table where col_id=11500 limit 5;
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
SQLState: 08S01
ErrorCode: 1
hive> select * from table where col_id=11500 limit 5;
2013-07-01 01:15:00 2013-07-01 06:15:00 20130701 11500 5864449886 957877905 17.493334 0 17.493334 17.493334 3.936 0 3.936 47 4 4 2013-10-30 16:21:08.93995 NULL NULL NULL
2013-07-02 01:15:00 2013-07-02 06:15:00 20130702 11500 5864449886 957877905 14.364444 0 14.364444 16.517917 3.232 0 3.232 47 4 4 2013-10-30 16:21:36.220502 NULL NULL NULL
2013-07-03 01:15:00 2013-07-03 06:15:00 20130703 11500 5864449886 957877905 13.853334 0 13.853334 17.220324 3.117 0 3.117 47 4 4 2013-10-30 16:22:23.973718 NULL NULL NULL
2013-07-04 01:15:00 2013-07-04 06:15:00 20130704 11500 5864449886 957877905 12.426666 0 12.426666 19.591296 2.796 0 2.796 47 4 4 2013-10-30 16:23:08.96686 NULL NULL NULL
2013-07-05 01:15:00 2013-07-05 06:15:00 20130705 11500 5864449886 957877905 19.328889 0 19.328889 18.618565 4.349 0 4.349 47 4 4 2013-10-30 16:23:57.512115 NULL NULL NULL
Time taken: 31.885 seconds, Fetched: 5 row(s)
Labels:
- Apache Hive
- Apache Tez
12-02-2016 10:34 PM
Thanks @Kashif Khan.
12-02-2016 10:13 PM
@Ned Shawa I followed these steps, but am still unable to connect: https://community.hortonworks.com/articles/3043/connecting-to-hive-thrift-server-on-hortonworks-us.html I did the following:
1. Copied all required jar files from the cluster to the SquirreL lib directory: commons-logging*.jar, hadoop-common-*.jar, hive-exec-*.jar, hive-jdbc-*.jar, httpclient-*.jar, httpcore-*.jar, libthrift-*.jar, ojdbc7.jar, slf4j-api-*.jar, slf4j-log4j12-*.jar
2. Created a new driver using the above jar files.
3. Created a new alias using the new driver with the following URL: jdbc:hive2://<HiveServer2 SERVER>:10000/default UserName: <Hive Oracle User Name> Password: <Hive Oracle Password>
I am still seeing the following error:
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/metastore/api/MetaException
at java.util.concurrent.FutureTask.report(Unknown Source)
at java.util.concurrent.FutureTask.get(Unknown Source)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.awaitConnection(OpenConnectionCommand.java:132)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$100(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$2.run(OpenConnectionCommand.java:115)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.RuntimeException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/metastore/api/MetaException
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.executeConnect(OpenConnectionCommand.java:175)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.access$000(OpenConnectionCommand.java:45)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand$1.run(OpenConnectionCommand.java:104)
... 5 more
Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/metastore/api/MetaException
at org.apache.hive.jdbc.HiveConnection.createBinaryTransport(HiveConnection.java:456)
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:182)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:155)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at net.sourceforge.squirrel_sql.fw.sql.SQLDriverManager.getConnection(SQLDriverManager.java:133)
at net.sourceforge.squirrel_sql.client.mainframe.action.OpenConnectionCommand.executeConnect(OpenConnectionCommand.java:167)
... 7 more
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.metastore.api.MetaException
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
... 13 more
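The root `ClassNotFoundException` above usually means one of the Hive jars is missing from the driver classpath that SquirreL builds (the `MetaException` class does not live in any of the jars listed in the post, so presumably another jar, such as a hive-metastore jar, needs to be added). A small hedged checker like the following can confirm which required classes are actually loadable before configuring the driver; the list of class names here is an assumption based on the stack trace:

```java
import java.util.ArrayList;
import java.util.List;

public class DriverClasspathCheck {

    // Returns the subset of class names that cannot be loaded from the
    // current classpath; an empty result means the driver jars look complete.
    static List<String> missingClasses(String... classNames) {
        List<String> missing = new ArrayList<>();
        for (String name : classNames) {
            try {
                Class.forName(name);
            } catch (ClassNotFoundException | NoClassDefFoundError e) {
                missing.add(name);
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        // The classes the stack trace depends on; run this with the same
        // jars SquirreL was given to see which ones are absent.
        System.out.println(missingClasses(
                "org.apache.hive.jdbc.HiveDriver",
                "org.apache.hadoop.hive.metastore.api.MetaException"));
    }
}
```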
Labels:
- Apache Hive
06-30-2016 08:12 PM
Can you give the complete output of the execution? You can find the log for this job in the Oozie UI.
05-19-2016 07:34 PM
Yes, of course. But it did not help the previous poster, nor me.
05-19-2016 07:33 PM
Here is the ambari-server log:
19 May 2016 14:29:37,414 INFO [qtp-ambari-client-26] StackAdvisorRunner:71 - advisor script stderr:
19 May 2016 14:30:10,741 INFO [qtp-ambari-client-69] AmbariManagementControllerImpl:1355 - Received a updateCluster request, clusterId=2, clusterName=DEVHDPCLST, securityType=null, request={ clusterName=DEVHDPCLST, clusterId=2, provisioningState=null, securityType=null, stackVersion=HDP-2.2, desired_scv=null, hosts=[] }
19 May 2016 14:30:10,758 INFO [qtp-ambari-client-69] AmbariManagementControllerImpl:1474 - Applying configuration with tag 'version1463686209068' to cluster 'DEVHDPCLST' for configuration type hbase-env
19 May 2016 14:30:10,880 INFO [qtp-ambari-client-69] AmbariManagementControllerImpl:1474 - Applying configuration with tag 'version1463686209070' to cluster 'DEVHDPCLST' for configuration type hbase-site
19 May 2016 14:30:11,027 INFO [qtp-ambari-client-69] AmbariManagementControllerImpl:1474 - Applying configuration with tag 'version1463686209072' to cluster 'DEVHDPCLST' for configuration type ranger-hbase-plugin-properties
19 May 2016 14:30:11,392 ERROR [qtp-ambari-client-69] AmbariJpaLocalTxnInterceptor:180 - [DETAILED ERROR] Rollback reason:
Local Exception Stack:
Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.BatchUpdateException: Batch entry 3 INSERT INTO serviceconfigmapping (config_id, service_config_id) VALUES (22, 240) was aborted. Call getNextException to see the cause.
Error Code: 0
Call: INSERT INTO serviceconfigmapping (config_id, service_config_id) VALUES (?, ?)
bind => [2 parameters bound]
at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.processExceptionForCommError(DatabaseAccessor.java:1620)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeJDK12BatchStatement(DatabaseAccessor.java:926)
at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:179)
at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.writesCompleted(DatabaseAccessor.java:1845)
at org.eclipse.persistence.internal.sessions.AbstractSession.writesCompleted(AbstractSession.java:4300)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.writesCompleted(UnitOfWorkImpl.java:5592)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.acquireWriteLocks(UnitOfWorkImpl.java:1646)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitTransactionAfterWriteChanges(UnitOfWorkImpl.java:1614)
at org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork.commitRootUnitOfWork(RepeatableWriteUnitOfWork.java:285)
at org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.commitAndResume(UnitOfWorkImpl.java:1169)
at org.eclipse.persistence.internal.jpa.transaction.EntityTransactionImpl.commit(EntityTransactionImpl.java:134)
at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:153)
at com.google.inject.internal.InterceptorStackCallback$InterceptedMethodInvocation.proceed(InterceptorStackCallback.java:72)
at com.google.inject.internal.InterceptorStackCallback.intercept(InterceptorStackCallback.java:52)
at org.apache.ambari.server.state.cluster.ClusterImpl$$EnhancerByGuice$$90de4d20.applyConfigs(<generated>)
at org.apache.ambari.server.state.cluster.ClusterImpl.addDesiredConfig(ClusterImpl.java:2282)
at org.apache.ambari.server.controller.AmbariManagementControllerImpl.updateCluster(AmbariManagementControllerImpl.java:1488)
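The EclipseLink log above explicitly says "Call getNextException to see the cause": with JDBC batch updates, the reported `BatchUpdateException` hides the database's real error message in a chained exception. A small sketch of walking that chain (the constructed exceptions here are illustrative stand-ins, not the actual Ambari error):

```java
import java.sql.BatchUpdateException;
import java.sql.SQLException;

public class BatchErrorUnwrap {

    // Walks the SQLException chain to the last element; for a
    // BatchUpdateException that is where the underlying database
    // error (e.g. a constraint violation) usually lives.
    static String rootMessage(SQLException e) {
        SQLException cur = e;
        while (cur.getNextException() != null) {
            cur = cur.getNextException();
        }
        return cur.getMessage();
    }

    public static void main(String[] args) {
        // Illustrative stand-in for the aborted batch entry in the log.
        BatchUpdateException batch = new BatchUpdateException(
                "Batch entry 3 INSERT INTO serviceconfigmapping ... was aborted",
                new int[0]);
        batch.setNextException(new SQLException(
                "hypothetical underlying database error"));
        System.out.println(rootMessage(batch));
    }
}
```

In a debugging session against the Ambari database, catching the exception around the failing commit and logging `rootMessage(...)` would reveal why the insert into serviceconfigmapping was aborted.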