Member since
10-28-2020
572
Posts
46
Kudos Received
40
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 557 | 02-17-2025 06:54 AM
 | 4775 | 07-23-2024 11:49 PM
 | 795 | 05-28-2024 11:06 AM
 | 1343 | 05-05-2024 01:27 PM
 | 846 | 05-05-2024 01:09 PM
08-31-2022
05:05 AM
@mohammad_shamim Did you have Hive HA configured in the CDH cluster? In that case, you need to make sure that an equal number of HS2 instances is created in the CDP cluster; without that, HA cannot be attained. Also, make sure that no HiveServer2 instance is created under the "Hive" service in CDP. It should only be present under the Hive on Tez service.
08-18-2022
06:56 AM
@ssuja I am afraid it's not achievable using Ranger. If you already have a data directory owned by a specific user, say user1, you may create a policy in Ranger granting hive and other users access to that directory path (URI), while keeping the physical path owned by user1 itself. See if this is something you can work with. I should also mention that creating an external Hive table without a LOCATION clause will create a directory owned by hive, because impersonation is disabled in Hive.
08-12-2022
11:27 AM
Hi @ssuja, there is a Hive property that will help you achieve what you are aiming for. Look for hive.server2.enable.doAs under the Hive on Tez configuration and enable it. However, there is a catch: this property must be disabled if you are using Ranger for authorization. If you are not using Ranger and instead use Storage Based Authorization (which is not recommended in CDP), then you can definitely enable it. Refer to the doc here.
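For reference, a sketch of how that property appears in raw hive-site.xml form (in Cloudera Manager it is exposed as a checkbox on the Hive on Tez service rather than edited by hand):

```xml
<!-- Run queries as the connected end user instead of the hive service user.
     Keep this false when Ranger is the authorizer. -->
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>
```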
08-05-2022
02:12 AM
1 Kudo
@xinghx The only difference between CDP 7.1.1 and 7.1.7 is HIVE-24920. In your test case, the CREATE TABLE statement creates an external table with the "TRANSLATED_TO_EXTERNAL" table property set to "TRUE". Your second query, meant to change the table to a managed/ACID table, does not really work, so it has no impact apart from adding a table property. Now, coming to the RENAME query: I notice it does not change the location in CDP 7.1.1 either; please refer to the attachment. In CDP 7.1.7 (SP1) it does change the location if we have "TRANSLATED_TO_EXTERNAL" = "TRUE". If we set it to false, we get the same behavior as 7.1.1:
alter table alter_test set tblproperties("TRANSLATED_TO_EXTERNAL"="FALSE");
I hope this helps.
08-03-2022
06:15 AM
1 Kudo
@xinghx This is expected behavior in later versions of CDP. Please refer to this release note. If yours is a managed table in the default warehouse location, the HDFS path will be renamed the way you expect it to be. However, if you plan to rename an external table, you will also need to change the location accordingly: ALTER TABLE <tableName> RENAME TO <newTableName>;
ALTER TABLE <newTableName> set location "hdfs://<location>";
08-01-2022
01:36 PM
1 Kudo
@Imran_chaush If you are on CDP and using Ranger for authorization, you can check the Ranger audit log to see which users tried to access that specific database and table. Otherwise, you will have to read the raw HiveServer2 log file to see which queries were run on a specific table, and then find the users who submitted them, e.g.:
grep -E 'Compiling.*<table name>' /var/log/hive/hadoop-cmf-hive_on_tez-HIVESERVER2-node1.log.out
Column 5 is your session ID, and you may grep for the session ID again to find the user associated with it.
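As a hedged illustration of the log-mining step, here is how the extraction might be scripted against a sample line. The log line below is made up to resemble a HiveServer2 compile message; real field positions depend on your log4j pattern, so pulling out the queryId token directly can be more robust than relying on a fixed column:

```shell
#!/bin/sh
# Hypothetical sample in the general shape of a HiveServer2 compile message;
# adjust the patterns to match the actual layout of your log file.
log_sample='2022-08-01 13:30:01,123 INFO  ql.Driver: [HiveServer2-Handler-Pool: Thread-42]: Compiling command(queryId=hive_20220801133001_abc): select count(*) from sales'

# Step 1: find compile lines that mention the table of interest (here: sales).
line=$(printf '%s\n' "$log_sample" | grep -E 'Compiling.*sales')

# Step 2: extract the queryId token; this survives changes in column layout.
query_id=$(printf '%s\n' "$line" | grep -oE 'queryId=[^)]+')
echo "$query_id"   # prints "queryId=hive_20220801133001_abc"
```

From there, grepping the same log for the extracted identifier leads back to the session, and the session back to the user, as described above.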
08-01-2022
12:40 PM
@Caliber The following command should work:
for hql in {a.hql,b.hql}; do beeline -n hive -p password --showHeader=false --silent=true -f "$hql"; done
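To sanity-check the loop shape itself without a live cluster, the same pattern can be exercised with throwaway files and cat standing in for beeline (the file names and contents here are placeholders):

```shell
#!/bin/sh
# Dry-run of the multi-file loop: create two throwaway .hql files, then
# iterate over them exactly as one would with beeline -f. cat stands in
# for beeline so this sketch runs anywhere.
tmpdir=$(mktemp -d)
printf 'select 1;\n' > "$tmpdir/a.hql"
printf 'select 2;\n' > "$tmpdir/b.hql"

results=""
for hql in "$tmpdir/a.hql" "$tmpdir/b.hql"; do
  # Real run: beeline -n hive -p password --showHeader=false --silent=true -f "$hql"
  results="${results}$(cat "$hql")"
done
echo "$results"   # prints "select 1;select 2;"
rm -rf "$tmpdir"
```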
04-01-2022
10:41 AM
@mattyseltz The select * query is probably submitted as a plain fetch task, which does not involve running any Tez tasks in YARN containers. The Tez errors, in my understanding, must be independent of the ODBC driver. You may attach your error log here, or create a support case.
03-29-2022
12:02 PM
@mattyseltz What ODBC driver version are you using, and could you also share the HDP/CDP version? It is possible that the said version of the ODBC driver does not support the Hive version in use. Where did you download the 32-bit ODBC driver from? If you do not see any detailed error, have you tried enabling DEBUG logging in the ODBC driver to see if that gives you more info?
03-24-2022
12:16 PM
2 Kudos
@Ging I don't think there is much in the Liquibase extension that could pose security risks, but it's better to check with Liquibase. As for the Hive and Impala JDBC drivers, you can download the latest from the Cloudera website, rather than 2.6.4/2.6.2 as mentioned in the Liquibase blog. Very soon we are going to release newer versions that address the recent Log4j vulnerabilities.