Member since: 10-28-2020
Posts: 622
Kudos Received: 47
Solutions: 40
My Accepted Solutions
Title | Views | Posted
---|---|---
| 1956 | 02-17-2025 06:54 AM
| 6690 | 07-23-2024 11:49 PM
| 1330 | 05-28-2024 11:06 AM
| 1880 | 05-05-2024 01:27 PM
| 1260 | 05-05-2024 01:09 PM
08-03-2022
06:15 AM
1 Kudo
@xinghx This is expected behavior in later versions of CDP; please refer to this Release note. If yours is a managed table in the default warehouse location, the HDFS path will be renamed the way you expect. However, if you rename an External table, you will also need to change the location accordingly:
ALTER TABLE <tableName> RENAME TO <newTableName>;
ALTER TABLE <newTableName> SET LOCATION "hdfs://<location>";
08-01-2022
01:36 PM
1 Kudo
@Imran_chaush If you are on CDP and using Ranger for authorization, you can check the Ranger audit log to see which users tried to access that specific database and table. Otherwise, you will have to read the raw HiveServer2 log to see which queries were run against the table, and then find the users who submitted them, e.g.:
grep -E 'Compiling.*<table name>' /var/log/hive/hadoop-cmf-hive_on_tez-HIVESERVER2-node1.log.out
Column 5 is the session ID; grep for that session ID again to find the user associated with it.
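As a rough sketch of the two-pass grep described above (the field position, and the `user=` pattern in the second pass, are assumptions that vary with your log4j layout), something like this could automate it:

```shell
# Hypothetical sketch: first pass finds "Compiling" lines for the table and
# takes field 5 as the session ID; second pass greps that session ID again
# and pulls out any "user=..." token. Log layout and patterns are assumptions.
find_table_users() {
  log="$1"; table="$2"
  grep -E "Compiling.*${table}" "$log" | awk '{print $5}' | sort -u |
  while read -r sid; do
    grep -F "$sid" "$log" | grep -o 'user=[^ ]*' | sort -u
  done
}
```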
08-01-2022
12:40 PM
@Caliber The following command should work:
# for hql in {a.hql,b.hql}; do beeline -n hive -p password --showheader=false --silent=true -f "$hql"; done
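A hedged variant of that loop which stops at the first failing script instead of continuing with the rest; the `RUN` override is purely illustrative, so the loop itself can be exercised without a live HiveServer2:

```shell
# Hypothetical sketch: same loop as the one-liner above, but abort on the
# first script that fails. RUN defaults to the beeline invocation; overriding
# it is only for illustration/testing without a cluster.
run_scripts() {
  runner="${RUN:-beeline -n hive -p password --showheader=false --silent=true -f}"
  for hql in "$@"; do
    $runner "$hql" || { echo "failed on $hql" >&2; return 1; }
  done
}
```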
04-01-2022
10:41 AM
@mattyseltz The select * query is probably submitted as a plain fetch task, which does not involve running any Tez tasks in YARN containers, so in my understanding the Tez errors must be independent of the ODBC driver. You may attach your error log here, or create a support case.
03-29-2022
12:02 PM
@mattyseltz Which ODBC driver version are you using, and could you also share your HDP/CDP version? It is possible that this version of the ODBC driver does not support the Hive version in use. Where did you download the 32-bit ODBC driver? If you do not see any detailed error, have you tried enabling DEBUG logging in the ODBC driver to see if that gives you more information?
03-24-2022
12:16 PM
2 Kudos
@Ging I don't think there is much in the Liquibase extension that could pose security risks, but it's better to check with Liquibase. As for the Hive and Impala JDBC drivers, you could download the latest from the Cloudera website, rather than 2.6.4/2.6.2 as mentioned in the Liquibase blog. Very soon we are going to release newer versions that address the recent log4j vulnerabilities.
12-16-2021
12:23 PM
@Gcima009 Are you trying to collect the logs with the same user that submitted the job? This query completed the map phase and failed in the reduce phase. If you are not able to collect the application logs, check the HS2 log for the query ID hive_20211210173528_ff76c3df-a33b-41d0-b328-460c9b65deda to see if it gives more information about what caused the job to fail.
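Something along these lines can pull the query-ID lines plus surrounding context from the HS2 log, since the stack trace usually follows the line that names the query ID (the log path in the comment is an assumption):

```shell
# Hypothetical helper: print every line mentioning the query ID, plus a few
# lines of context, from an HS2 log file. Path and layout are assumptions.
hs2_query_trace() {
  log="$1"; qid="$2"
  grep -n -C 3 -F "$qid" "$log"
}
# e.g. hs2_query_trace /var/log/hive/hadoop-cmf-hive_on_tez-HIVESERVER2-node1.log.out \
#        hive_20211210173528_ff76c3df-a33b-41d0-b328-460c9b65deda
```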
12-15-2021
10:02 AM
@Gcima009 It will be difficult to say what the issue is from this error stack alone. Could you generate the YARN application log for this job and review it, attach it here, or paste just the detailed error message?
yarn logs -applicationId application_1639152705224_0018 > app_log.out
11-18-2021
12:13 PM
@hxn Please locate the java.security file and add TLSv1 and TLSv1.1 to "jdk.tls.disabledAlgorithms" to disable them. Note that if you upgrade Java, you will have to redo this change. e.g.
# find /usr/java/jdk1.8.0_232-cloudera/ -iname java.security
/usr/java/jdk1.8.0_232-cloudera/jre/lib/security/java.security
# grep -i jdk.tls.disabledAlgorithms /usr/java/jdk1.8.0_232-cloudera/jre/lib/security/java.security
# jdk.tls.disabledAlgorithms=MD5, SSLv3, DSA, RSA keySize < 2048
jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, MD5withRSA, DH keySize < 768, 3DES_EDE_CBC
# certificates such as jdk.tls.disabledAlgorithms or
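If it helps, here is a sketch of automating that edit (back up java.security first; the sed expression assumes an active, uncommented property line like the one shown above):

```shell
# Hypothetical sketch: prepend TLSv1 and TLSv1.1 to the active
# jdk.tls.disabledAlgorithms line, writing a .bak backup alongside the file.
# It only touches lines starting with the property name, so the commented
# default ("# jdk.tls...") is left alone.
disable_old_tls() {
  sed -i.bak -E \
    's/^(jdk\.tls\.disabledAlgorithms=)/\1TLSv1, TLSv1.1, /' "$1"
}
```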
11-16-2021
10:10 AM
Hi @Korez Then please consider setting the properties I mentioned earlier:
set hive.server2.tez.sessions.per.default.queue=3 -- number of AM containers per queue
set hive.server2.tez.initialize.default.sessions=true
set hive.prewarm.enabled=true
set hive.prewarm.numcontainers=2
set tez.am.container.reuse.enabled=true
set tez.am.container.idle.release-timeout-max.millis=20000
set tez.am.container.idle.release-timeout-min.millis=10000
This will help keep AM containers up and ready for Hive queries.