Member since
09-25-2015
356
Posts
382
Kudos Received
62
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2443 | 11-03-2017 09:16 PM |
| | 1921 | 10-17-2017 09:48 PM |
| | 3836 | 09-18-2017 08:33 PM |
| | 4512 | 08-04-2017 04:14 PM |
| | 3464 | 05-19-2017 06:53 AM |
12-11-2015
02:03 AM
Did the file Batting.csv get copied over to HDFS? Can you do a hadoop fs -ls /user/quest? This could be a file-not-found issue or a permissions issue.
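A quick way to check both possibilities from the shell (the path is taken from the question above; adjust the file name and permissions to your setup):

```shell
# List the target directory; confirms the file exists and shows its owner/permissions
hadoop fs -ls /user/quest

# If Batting.csv is missing, copy it up from the local filesystem
hadoop fs -put Batting.csv /user/quest/

# If it exists but is unreadable to the querying user, loosen the permissions
hadoop fs -chmod 644 /user/quest/Batting.csv
```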
12-11-2015
01:16 AM
Are you using Transparent Data Encryption on HDFS?
12-11-2015
12:57 AM
3 Kudos
Just elaborating on my comment above, which @Emily Sharpe has already verified as the workaround. The issue is in the vectorization code path; see Apache Hive JIRA HIVE-8197. The issue should be fixed in both HDP 2.2.x and HDP 2.3.x. The workaround is to disable vectorization by setting hive.vectorized.execution.enabled = false.
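For reference, the property can be disabled per session at CLI launch, or cluster-wide in hive-site.xml (a sketch; the property name is from the post above, everything else is the usual Hive CLI convention):

```shell
# Per session: start the Hive CLI with vectorization disabled
hive -hiveconf hive.vectorized.execution.enabled=false

# Alternatively, set it inside an existing session before the affected query:
#   set hive.vectorized.execution.enabled=false;
```

To make it permanent, set the same property to false in hive-site.xml (via Ambari if you manage configs there) and restart the Hive services.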
12-10-2015
08:22 PM
1 Kudo
Can you try the same with hive.vectorized.execution.enabled = false?
12-07-2015
07:59 PM
1 Kudo
I can tell you that for our system testing we certify MySQL 5.6, which matches the documents pointed out by @Neeraj Sabharwal.
12-04-2015
05:03 PM
A workaround for this could be launching the Hive CLI in the following manner: hive -hiveconf hive.execution.engine=mr. But this would mean that if you want to run any queries in Tez, you would need to run "set hive.execution.engine=tez;" before running them.
11-25-2015
01:18 AM
What client are you using to run the query? If it's the Hive CLI, then you can run export HADOOP_OPTS="-Xmx2048m" in the shell and then invoke the Hive CLI.
11-24-2015
11:24 PM
3 Kudos
Did you run both table-level and column-level statistics?

analyze table t compute statistics;
analyze table t compute statistics for columns;

The log will be the Hive client log (/tmp/<user>/hive.log) in the case of the Hive CLI; for HiveServer2 it will likely be /var/log/hive/hiveserver2.log (whatever you configured in Ambari).
11-20-2015
11:33 PM
3 Kudos
If it's just a migration, you can make a copy of your old database and then point the HDP 2.3.2 cluster at the copy. Before starting the Hive services, upgrade the Hive database using schemaTool. You can use metatool to update the HDFS locations to the new cluster. Then start the Hive services.
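The upgrade and location-update steps above can be sketched roughly like this (schematool and metatool are standard Hive utilities; the dbType and NameNode URIs are placeholders for your environment):

```shell
# 1. Upgrade the copied metastore schema to the target Hive version
#    (run against the metastore DB the new cluster is pointed at)
schematool -dbType mysql -upgradeSchema

# 2. Rewrite HDFS locations stored in the metastore to the new cluster's NameNode
#    (new URI first, then the old URI being replaced)
hive --service metatool -updateLocation hdfs://new-nn:8020 hdfs://old-nn:8020

# 3. Start the Hive services (via Ambari, or manually on the nodes)
```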