Member since: 10-21-2015
Posts: 18
Kudos Received: 3
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2452 | 12-17-2015 05:55 PM |
| | 2302 | 12-14-2015 09:18 PM |
| | 2593 | 11-16-2015 08:06 PM |
03-01-2016
05:17 PM
1 Kudo
To get the metastore to use the moved MySQL database, you need to change the JDBC URL in the hive-site.xml file on the machine running the metastore. The configuration value to change is "javax.jdo.option.ConnectionURL". You will have to restart the metastore after changing it. If you are using HiveServer2, you will need to make this change on that machine as well. To get your clients to find the new metastore instance, you will also have to change the Thrift URI for the metastore, again in hive-site.xml. This needs to be done on every client that talks to the metastore. The config value to change is "hive.metastore.uris".
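As a sketch, the relevant hive-site.xml entries might look like the following. The host names, port, and database name are placeholders; substitute the values for your environment.

```xml
<!-- On the machine running the metastore (and HiveServer2, if separate): -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <!-- placeholder host and database name; point at the moved MySQL instance -->
  <value>jdbc:mysql://new-db-host:3306/hive_metastore</value>
</property>

<!-- On every client that talks to the metastore: -->
<property>
  <name>hive.metastore.uris</name>
  <!-- placeholder host; 9083 is the usual metastore Thrift port -->
  <value>thrift://metastore-host:9083</value>
</property>
```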
02-26-2016
06:38 PM
1 Kudo
Can you give details of how it fails? Does it not find the function, or does it find the function but fail to execute it properly?
01-14-2016
04:56 PM
2 Kudos
Is there any more information in the logs? (If you're using HiveServer2, you can look at its logs; if you are using the Hive client directly, check your client logs.) The error is caused by a file system error where it's unable to move the file. The file system should log an error message describing why it failed.
01-04-2016
07:55 PM
1 Kudo
I don't recommend directly messing with the RDBMS. If you can't find any other way out and must make changes in the RDBMS, you should make sure to just switch the transaction states from open to aborted. Removing the transactions completely may have undesirable side effects.
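As a last-resort sketch only: assuming the stock Hive ACID metastore schema, where the TXNS table's TXN_STATE column is 'o' for open and 'a' for aborted, the state flip would look something like this (the transaction id is an example value):

```sql
-- Last resort only: mark a stuck open transaction as aborted
-- rather than deleting its row. Assumes the standard Hive ACID
-- schema: TXNS.TXN_STATE is 'o' (open) or 'a' (aborted).
UPDATE TXNS
SET TXN_STATE = 'a'
WHERE TXN_STATE = 'o'
  AND TXN_ID = 12345;   -- example id of the stuck transaction
```

Back up the metastore database before running anything like this.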
12-28-2015
06:46 PM
1 Kudo
@Neeraj Sabharwal this isn't expected behavior, but it's not surprising. There are many issues with non-string partition types. If this isn't in the bug base, you should file a ticket for it. I suspect it affects all ALTER TABLE partition clauses.
12-18-2015
08:49 PM
1 Kudo
You're saying that if you put SQL statements between "with Q..." and "select *..." then you get the error? If so, can you share an example of a SQL statement that causes the error when placed between them?
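To make sure we're talking about the same shape of query, here is a hypothetical example (the table, columns, and CTE name are all made up):

```sql
-- Hypothetical query shape being discussed: a WITH clause (CTE)
-- followed by a SELECT that reads from it.
WITH q AS (
  SELECT id, amount
  FROM sales
  WHERE yr = 2015
)
-- <-- the statement you are inserting here is what I'd like to see
SELECT * FROM q;
```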
12-17-2015
05:55 PM
3 Kudos
ORC doesn't support that at this time. You can use the ORC file dump utility to find the schema (hive --service orcfiledump _filename_) and then use that when you create the table.
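For example (the file path is a placeholder; ORC data files are often named like part files under the table's directory):

```shell
# Dump the metadata of an ORC file, including its schema,
# then use that schema in your CREATE TABLE statement.
hive --service orcfiledump /path/to/table/000000_0
```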
12-14-2015
09:18 PM
3 Kudos
We did not add transactions to Hive to take workloads out of HBase. HBase is great if you want to do lots of point lookups and range scans. Hive is much better for full table scan workloads. Traditional data warehousing queries fall in the full table scan category (e.g. find the year-over-year average sale by store). These types of operations still require transactions, like when you want to stream data in from your transactional stores, or when you need to update dimension tables. We added transactions to Hive to enable these use cases, so that Hive could be better at what it does as a data warehouse. If your use case more closely approximates a traditional transactional workload (e.g. a shopping cart), definitely don't use Hive.
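A sketch of the warehouse-style operations this enables (table and column names are invented, and the UPDATE assumes the table was created as a transactional ACID table):

```sql
-- Full-table-scan analytics, Hive's strength:
-- average sale by store and year.
SELECT store_id, year(sale_date) AS yr, avg(amount) AS avg_sale
FROM sales
GROUP BY store_id, year(sale_date);

-- Dimension-table correction, the kind of occasional update
-- that Hive transactions exist to support. Assumes store_dim
-- is a transactional (ACID) table.
UPDATE store_dim
SET region = 'Northeast'
WHERE store_id = 42;
```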
11-16-2015
08:06 PM
3 Kudos
Pig does not have any notion of appending results to an existing directory. You could write your Pig job to put results into a temporary directory and then use a DFS command (in or out of Pig) to copy the files into the existing directory you wanted to append to. Note that this has some dangers (jobs starting during the copy may or may not see the copied files, or may see only some of them). Also note that this is exactly what Hive does in its INSERT INTO if you aren't using a lock manager.
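A sketch of the workaround in a Pig script; the paths and schema are placeholders, and the copy here uses Pig's built-in `fs` command (a Hadoop `fs` shell command could be run outside Pig instead):

```pig
-- Store results into a temporary staging directory
-- (it must not already exist), then copy the part files
-- into the directory being "appended" to.
results = LOAD '/data/input' AS (id:int, amount:double);
STORE results INTO '/tmp/append_staging';
fs -cp /tmp/append_staging/part-* /data/existing_output/;
```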
11-09-2015
05:03 PM
1 Kudo
The performance implications mostly come at read time. If you have queries that read many (>2k) partitions you will see long (30+ sec) times to plan queries. As Andrew mentioned, the work on the HBase metastore should improve this.