Support Questions

Find answers, ask questions, and share your expertise

Hive query with WHERE clause throwing error using SQuirreL SQL client

Super Collaborator

When I run a simple query with a WHERE clause from the Hive CLI, it returns results. But the same query throws an error in the SQuirreL SQL client. If I remove the WHERE clause, it works in both SQuirreL and Hive. Any idea?

On SQuirreL SQL Client:
select * from my_table where col_id=11500 limit 5;

Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
SQLState: 08S01
ErrorCode: 1
hive> select * from table where col_id=11500 limit 5;
2013-07-01 01:15:00  2013-07-01 06:15:00  20130701  11500  5864449886  957877905  17.493334  0  17.493334  17.493334  3.936  0  3.936  47  4  4  2013-10-30 16:21:08.93995  NULL  NULL  NULL
2013-07-02 01:15:00  2013-07-02 06:15:00  20130702  11500  5864449886  957877905  14.364444  0  14.364444  16.517917  3.232  0  3.232  47  4  4  2013-10-30 16:21:36.220502  NULL  NULL  NULL
2013-07-03 01:15:00  2013-07-03 06:15:00  20130703  11500  5864449886  957877905  13.853334  0  13.853334  17.220324  3.117  0  3.117  47  4  4  2013-10-30 16:22:23.973718  NULL  NULL  NULL
2013-07-04 01:15:00  2013-07-04 06:15:00  20130704  11500  5864449886  957877905  12.426666  0  12.426666  19.591296  2.796  0  2.796  47  4  4  2013-10-30 16:23:08.96686  NULL  NULL  NULL
2013-07-05 01:15:00  2013-07-05 06:15:00  20130705  11500  5864449886  957877905  19.328889  0  19.328889  18.618565  4.349  0  4.349  47  4  4  2013-10-30 16:23:57.512115  NULL  NULL  NULL
Time taken: 31.885 seconds, Fetched: 5 row(s)
1 ACCEPTED SOLUTION

Super Collaborator

I figured out what was wrong. The SQuirreL SQL service user had no user ID/password set up on Linux and Hadoop, and I was not passing a user ID/password when making the JDBC connection.

SQuirreL does not complain if you omit the user ID and password; it will still let you connect to Hive via the JDBC URL.

Running "SELECT * FROM <table>" does not need to start a cluster job, which is why SQuirreL can run that query over JDBC without any issue. But a query like "SELECT * FROM <table> WHERE <condition>" needs to launch a map/reduce (Tez) job, which in turn requires a Linux and Hadoop account on the cluster.

I created a service account for the SQuirreL client on Linux and Hadoop, used those credentials in the SQuirreL connection configuration, and everything worked as expected.
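In other words, the fix amounts to supplying explicit credentials on the JDBC connection instead of leaving them blank. A minimal sketch in Java (the host, port, database, and service-account name below are illustrative placeholders, not values from this thread; the actual `DriverManager.getConnection` call needs a running HiveServer2 and the Hive JDBC driver on the classpath, so it is shown as a comment):

```java
// Sketch: supplying explicit credentials for a HiveServer2 JDBC connection.
// Host, port, database, and the service account are placeholders.
public class HiveJdbcCredentials {

    // Build the HiveServer2 JDBC URL that a SQuirreL alias would use.
    static String buildUrl(String host, int port, String db) {
        return "jdbc:hive2://" + host + ":" + port + "/" + db;
    }

    public static void main(String[] args) {
        String url = buildUrl("hiveserver-host", 10000, "default");
        String user = "squirrel_svc"; // hypothetical Linux/Hadoop service account
        String pass = "changeme";     // placeholder

        // With a live HiveServer2, the programmatic equivalent of filling in
        // SQuirreL's user/password fields would be:
        //   Connection conn = DriverManager.getConnection(url, user, pass);
        // Leaving user and password blank is what SQuirreL silently allows;
        // plain fetches still work, but any query that launches a Tez/MR job
        // then fails with "return code 1" as in the error above.
        System.out.println(url + " as " + user + "/" + pass);
    }
}
```

The same credentials go into SQuirreL's connection (alias) settings; no query change is needed.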


2 REPLIES

Master Mentor

I'm wondering whether you loaded all the libraries correctly; check for missing JARs. The error code suggests it is coming from the database side: https://dev.mysql.com/doc/refman/5.5/en/error-messages-server.html
