Member since: 09-25-2015
Posts: 356
Kudos Received: 382
Solutions: 62

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2439 | 11-03-2017 09:16 PM
 | 1917 | 10-17-2017 09:48 PM
 | 3806 | 09-18-2017 08:33 PM
 | 4509 | 08-04-2017 04:14 PM
 | 3458 | 05-19-2017 06:53 AM
10-08-2015
02:22 PM
1 Kudo
Yes, check out http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_yarn_resource_mgt/content/configuring_node_labels.html
10-08-2015
02:18 PM
1 Kudo
Yes, those are the right ones. We are in the process of pushing them to the web; they should be available in a day or two.
10-07-2015
11:30 PM
1 Kudo
From the code, it looks like ColumnPruner is always applied by the optimizer, and there is no way to exclude it after HIVE-4113. It would be good to get more details on the scenario where you ran into this. Have you tried this query with Hive 0.14 or later?
10-07-2015
05:09 PM
1 Kudo
Coming very soon.
10-07-2015
05:08 PM
1 Kudo
For that you will need access to javax.security.auth.kerberos.KerberosTicket. Hadoop's UserGroupInformation class provides an API to manage it; however, the closest method for dealing with expirations is checkTGTAndReloginFromKeytab, which is likely not what you want.
10-07-2015
04:51 PM
1 Kudo
The trick is to manage a separate Hive config for each HiveServer2 instance. You will need different values for the authentication properties and for the port on which each instance starts.
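As a minimal sketch, each instance would point at its own hive-site.xml that differs at least in the Thrift port and authentication mode. The port numbers and auth values below are illustrative, not prescribed:

```xml
<!-- hive-site.xml for instance A (values are illustrative assumptions) -->
<property>
  <name>hive.server2.thrift.port</name>
  <value>10000</value>
</property>
<property>
  <name>hive.server2.authentication</name>
  <value>KERBEROS</value>
</property>

<!-- hive-site.xml for instance B: same properties, different values -->
<property>
  <name>hive.server2.thrift.port</name>
  <value>10001</value>
</property>
<property>
  <name>hive.server2.authentication</name>
  <value>LDAP</value>
</property>
```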
10-07-2015
03:24 PM
1 Kudo
One way you can achieve the transformation of your CSV data to ORC would be the following:

1. Register your CSV GZ data as a text table, something like: create table <tablename>_txt (...) location '...';
2. Create an equivalent ORC table: create table <tablename>_orc (...) stored as orc;
3. Populate the ORC table: insert overwrite table <tablename>_orc select * from <tablename>_txt;

I have used this in the past and it worked for me.
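The steps above can be sketched end to end. The table names, columns, and HDFS path here are hypothetical; substitute your own schema:

```sql
-- Step 1: register the gzipped CSV files as an external text table.
-- Hive reads .gz text files transparently. (Names and path are assumptions.)
CREATE EXTERNAL TABLE web_logs_txt (
  ts STRING,
  url STRING,
  status INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/data/web_logs_csv_gz';

-- Step 2: create the equivalent table stored as ORC.
CREATE TABLE web_logs_orc (
  ts STRING,
  url STRING,
  status INT
)
STORED AS ORC;

-- Step 3: rewrite the text data into the ORC table.
INSERT OVERWRITE TABLE web_logs_orc
SELECT * FROM web_logs_txt;
```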
10-07-2015
02:37 AM
Could be a real bug. Which HDP and Hive versions are you using?
10-07-2015
12:41 AM
1 Kudo
You can delete Hive tables with "drop table <tablename> purge;", which skips the trash. If this is for testing purposes, you can temporarily set fs.trash.interval to 0 and restart the NameNode; this globally disables trash collection on HDFS, so it should only be used during testing. On your last question about support for the TDE feature: it became available starting with HDP 2.3.
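For the purge variant, a minimal sketch (the table name is hypothetical):

```sql
-- Drops the table and bypasses the HDFS trash entirely,
-- so the data is not recoverable from .Trash afterwards.
DROP TABLE IF EXISTS staging_events PURGE;
```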
10-06-2015
06:37 PM
1 Kudo
Note that when running more than one HiveServer2 instance registered in ZooKeeper, you also get load balancing when the client uses the ZooKeeper quorum in the JDBC URL. ZooKeeper responds to client requests by randomly handing back one of the active HS2 instances.
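A sketch of such a JDBC URL, using service discovery instead of a fixed host. The ZooKeeper host names are hypothetical; `zooKeeperNamespace` must match the namespace the HS2 instances registered under (hiveserver2 is the common default):

```
jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
```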