Member since: 02-26-2016
Posts: 13
Kudos Received: 6
Solutions: 1

My Accepted Solutions

Title | Views | Posted
---|---|---
 | 4490 | 02-26-2016 06:38 PM
02-09-2017 06:24 PM
Hive uses the default timezone of the JVM. Currently the only way to change the timezone used by Hive is to change the default timezone of the JVM.
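For illustration, a hedged sketch of what this means in practice; the timezone names below are arbitrary examples, and the hive-env.sh override mentioned in the comment is a commonly suggested approach rather than something verified on your version:

```sql
-- Timestamps are rendered in the default timezone of the JVM running Hive.
-- One commonly suggested (unverified here) way to pin that zone is to pass
-- -Duser.timezone=UTC to the JVM, e.g. via HADOOP_CLIENT_OPTS in hive-env.sh,
-- and then restart HiveServer2.
SELECT current_timestamp;                                              -- shown in the JVM's default zone
SELECT to_utc_timestamp(current_timestamp, 'America/Los_Angeles');     -- per-query conversion; the zone is only an example
SELECT from_utc_timestamp('2017-02-09 18:24:00', 'America/New_York');  -- render a UTC value in a chosen zone
```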
01-18-2017 01:03 AM
- Not sure why dynamic partition pruning is not being chosen; you might want to try running with DEBUG-level logging and run the EXPLAIN statement again, which may give some clues as to what is going on during compilation (a sketch of this follows below).
- As far as I can tell, it tries to preserve the larger of the two tables for dynamic partition pruning.
- hive.explain.user just produces a more "user-friendly" explain, but sometimes the plan is shown a little differently, so it can help to see it without that option.
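A sketch of those two steps; the table and column names are hypothetical, and the logger setting in the comment is just the usual way to get DEBUG output from the Hive CLI:

```sql
-- Start the client with DEBUG logging, for example:
--   hive --hiveconf hive.root.logger=DEBUG,console
-- then re-run the explain and look for a "Dynamic Partitioning Event Operator":
SET hive.explain.user=false;
EXPLAIN
SELECT s.amount
FROM fact_sales s                              -- assumed to be partitioned on date_key
JOIN dim_date d ON s.date_key = d.date_key
WHERE d.year = 2017;
```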
01-17-2017 04:41 AM
1. You are correct that you should see something like "Dynamic partitioning event operator" in the explain plan. From some of the examples in the Hive tests, dynamic partition pruning can be used even in the case of a mapjoin getting selected.
2. It does look like dynamic partition pruning should occur on those tables. What version of Hive are you on? Can you show the explain plan and DESCRIBE TABLE output for the tables involved? Can you try setting hive.explain.user=false? (A sketch of these diagnostics follows after this list.)
3. If both tables are partitioned tables, dynamic partition pruning should only happen on one side.
4. Not totally sure about the outer join case; it might only work when the non-partitioned table is on the outer side.
5. I don't think text/non-text matters for hive.optimize.ppd; I believe that setting is about predicate pushdown by the optimizer. If you mean pushdown of predicates into the storage layer (like for ORC), that looks to be controlled by hive.optimize.index.filter.
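A sketch of the diagnostics asked for in points 2 and 5, with hypothetical table names:

```sql
SET hive.explain.user=false;            -- plainer plan output
DESCRIBE FORMATTED fact_sales;          -- confirm which table(s) are actually partitioned
DESCRIBE FORMATTED dim_date;
EXPLAIN EXTENDED
SELECT s.amount
FROM fact_sales s
JOIN dim_date d ON s.date_key = d.date_key
WHERE d.year = 2017;
-- Separate from dynamic partition pruning: pushing predicates down into the
-- storage layer (e.g. ORC readers) is governed by this setting, not hive.optimize.ppd:
SET hive.optimize.index.filter=true;
```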
03-05-2016 12:03 AM
2 Kudos
There is supposed to be a RELOAD FUNCTIONS command to refresh the function list when a function was created in a different environment from the current one, as you have done here. However, it looks like this command was not working properly in HDP 2.3; it was fixed in HIVE-13043. The workaround for HDP 2.3 would be to restart HiveServer2, or to create the function in Hue/ODBC rather than in a separate Hive CLI session.
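As a hedged sketch (the function and table names are hypothetical), once the fix is in place the flow from the Hue/ODBC session would look like this:

```sql
-- Pick up functions registered from another client (e.g. the Hive CLI) without
-- restarting HiveServer2. Older releases may use the singular RELOAD FUNCTION.
RELOAD FUNCTIONS;
SHOW FUNCTIONS;                          -- the new function should now be listed
SELECT mydb.my_udf(col) FROM mydb.t;     -- and be callable by its qualified name
```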
03-01-2016 06:30 PM
2 Kudos
What database was the function created under? (Permanent functions are qualified with a database name.) If you did not specify the DB name when you created it, it would have automatically used the current DB. Do you see the fully qualified function name when you run SHOW FUNCTIONS?
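A small sketch with hypothetical names to show how the database qualification works:

```sql
-- Created without a DB prefix, the permanent function is registered under the
-- current database (here "analytics"):
USE analytics;
CREATE FUNCTION my_upper AS 'com.example.hive.udf.MyUpper'
  USING JAR 'hdfs:///udfs/my_udf.jar';
SHOW FUNCTIONS;                          -- should list it as analytics.my_upper
-- From another database, the call must be fully qualified:
USE default;
SELECT analytics.my_upper(name) FROM analytics.employees LIMIT 5;
```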
02-26-2016 06:38 PM
1 Kudo
Both Hue and ODBC make use of HiveServer2. You might want to check the HiveServer2 logs to see if there are any errors when running these queries. Is the HDFS location of your JAR accessible to the "hive" user (or whichever user is running HiveServer2)?
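A quick hedged check; the JAR path is hypothetical, and dfs commands can be issued from the Hive shell:

```sql
-- The JAR and its parent directories must be readable by the user HiveServer2
-- runs as (typically "hive"). On HDP, the HiveServer2 log is usually under
-- /var/log/hive/.
dfs -ls /apps/hive/udfs/my_udf.jar;
```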