Member since: 10-28-2020
622 Posts · 47 Kudos Received · 40 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1982 | 02-17-2025 06:54 AM |
|  | 6697 | 07-23-2024 11:49 PM |
|  | 1335 | 05-28-2024 11:06 AM |
|  | 1884 | 05-05-2024 01:27 PM |
|  | 1266 | 05-05-2024 01:09 PM |
12-25-2023
11:34 PM
@wert_1311 Are we talking about a DataHub/Data Engineering cluster or a Data Warehouse? If you have Cloudera Manager for the cluster, check under the Configuration tab of the respective service; you can search for "hive_log_dir". If you SSH to the specific AWS instance hosting the Hive service, you should be able to find the service logs under that path.
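As a rough sketch once you are on the Hive host: the path below is the usual CDP default for hive_log_dir, so treat it as an assumption and substitute whatever "hive_log_dir" shows in your Cloudera Manager configuration.

```shell
# Assumed default hive_log_dir on CDP; adjust to the value shown in CM.
LOG_DIR="${HIVE_LOG_DIR:-/var/log/hive}"

# List the most recently modified Hive service logs, if the directory exists.
if [ -d "$LOG_DIR" ]; then
  ls -t "$LOG_DIR" | head -n 5
else
  echo "No $LOG_DIR on this host; check hive_log_dir in Cloudera Manager"
fi
```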
12-22-2023
03:27 AM
@jayes Please share the exact error. What do you mean by error code 4? Is it an exit code? The error should be getting logged to stdout/stderr. If nothing is being logged, could you also add "--verbose=true" to the beeline command, e.g.:
bash /opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p0.15945976/bin/../lib/hive/bin/beeline -u 'jdbc:hive2://machine1.domain.com:2181/default;password=***;principal=hive/_HOST@DOMAIN.COM;serviceDiscoveryMode=zooKeeper;ssl=true;sslTrustStore=/var/lib/cloudera-scm-agent/agent-cert/cm-auto-global_truststore.jks;trustStorePassword=*****;user=HiveUser;zooKeeperNamespace=hiveserver2' --showHeader=false --outputformat=tsv2 --verbose=true
12-21-2023
03:01 AM
@jayes The beeline command seems to be using both Kerberos (principal) and username/password-based authentication. Which auth mechanism are we aiming for? Also, what is the error message? When you fork it from another process, which user does the beeline command run as? Within the JDBC connection string we are trying to authenticate as "HiveUser", yet we are also using the -n switch and authenticating as the loki user. That is confusing; we need not use "-n" in this case.
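For a Kerberos-only connection, a minimal shape of the URL (reusing the host, truststore, and principal from the command above; a valid ticket from kinit is assumed) would drop -n, user=, and password= and keep only the principal:

```shell
beeline -u 'jdbc:hive2://machine1.domain.com:2181/default;principal=hive/_HOST@DOMAIN.COM;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;ssl=true;sslTrustStore=/var/lib/cloudera-scm-agent/agent-cert/cm-auto-global_truststore.jks;trustStorePassword=*****'
```

When the principal is present, HiveServer2 authenticates the Kerberos identity of the client, so a separately supplied -n username is at best ignored and at worst conflicting.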
12-15-2023
11:09 AM
Cloudera's official statement on this subject can be found here. Cloudera supports various RDBMS options, each of which has multiple possible strategies for implementing HA. Cloudera cannot reasonably test and certify each strategy for each RDBMS. Cloudera expects HA solutions for the RDBMS to be transparent to Cloudera software, so such solutions are not supported or debugged by Cloudera. It is the customer's responsibility to provision, configure, and manage the RDBMS HA deployment so that Cloudera software behaves as it would when interfacing with a single, non-HA service.
11-13-2023
10:55 AM
@jayes Please make sure that you have set this property in "HiveServer2 Advanced Configuration Snippet (Safety Valve) for hive-site.xml" under the Hive on Tez configuration. I tried this and it works for me:
Beeline version 3.1.3000.7.1.7.2000-305 by Apache Hive
0: jdbc:hive2://c1649-node2.coelab.cloudera.c> set dfs.replication=1;
No rows affected (0.208 seconds)
0: jdbc:hive2://c1649-node2.coelab.cloudera.c> set hive.security.authorization.sqlstd.confwhitelist.append;
+----------------------------------------------------+
| set |
+----------------------------------------------------+
| hive.security.authorization.sqlstd.confwhitelist.append=mapred\..*|hive\..*|mapreduce\..*|spark\..*|dfs\..* |
+----------------------------------------------------+
1 row selected (0.109 seconds)
11-06-2023
11:58 AM
@jayes Hive reads this parameter value from hdfs-site.xml, so you should probably set the value under the HDFS service. Nevertheless, if you want to set that parameter from the Hive/beeline CLI, you could try setting 'hive.security.authorization.sqlstd.confwhitelist.append' correctly, e.g.:
<property>
  <name>hive.security.authorization.sqlstd.confwhitelist.append</name>
  <value>mapred\..*|hive\..*|mapreduce\..*|spark\..*|dfs\..*</value>
</property>
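The value is a '|'-separated list of Java-style regexes matched against the parameter name. As a quick local sanity check of whether a given parameter would be whitelisted (a sketch only; grep -E is an approximation of Java's regex engine, not the exact matcher HiveServer2 uses):

```shell
# Whitelist value as set above; '|' separates alternative regexes.
WHITELIST='mapred\..*|hive\..*|mapreduce\..*|spark\..*|dfs\..*'

# A parameter you want beeline users to be able to set:
param='dfs.replication'

# Anchor the pattern so the whole parameter name must match one alternative.
if printf '%s\n' "$param" | grep -Eq "^(${WHITELIST})$"; then
  echo "allowed"    # dfs.replication matches dfs\..*
else
  echo "blocked"
fi
```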
10-20-2023
01:56 PM
@Kalpit Do check whether you have any JARs added to the Hive classpath via hive.aux.jars.path. Remove them and try once more. It's possible that the added JARs are not compatible with the current version.
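As a sketch for spotting version mismatches, you could print each aux JAR's manifest version (AUX_DIR below is a hypothetical stand-in for the directory your hive.aux.jars.path points at; adjust it to your environment):

```shell
# Hypothetical stand-in for the directory referenced by hive.aux.jars.path.
AUX_DIR="${AUX_DIR:-/opt/hive-aux-jars}"

# Print each jar's Implementation-Version from its manifest so jars built
# against a different Hive/Hadoop release stand out.
for jar in "$AUX_DIR"/*.jar; do
  [ -e "$jar" ] || continue   # skip cleanly when no jars are present
  echo "== $jar =="
  unzip -p "$jar" META-INF/MANIFEST.MF 2>/dev/null \
    | grep -i 'Implementation-Version' || echo "  (no version in manifest)"
done
```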
09-29-2023
08:59 PM
@Srinivas-M You may set these properties in a safety valve for core-site.xml. CM UI > HDFS > Configuration > Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml
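For reference, entries added through that safety valve take the standard core-site.xml property shape; the property name below is purely a placeholder, not a recommendation:

```xml
<property>
  <name>some.example.property</name>
  <value>some-value</value>
</property>
```

After saving the snippet, redeploy the client configuration and restart the affected services for the change to take effect.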
09-20-2023
08:18 AM
@PetiaLeshiy Adding to @asish's comment, as it's a struct column, we could write the query something like this:
SELECT * FROM TABLE_NAME LATERAL VIEW explode(struct_col_name.list_name) exploded_column AS xyz WHERE xyz IS NOT NULL;
You may make changes where required.
08-31-2023
03:16 AM
We tried replicating the issue with the data shared by @Shivakuk. Left/right single and double quotation marks (smart quotes) in the text did not show up correctly and got converted to "?". I was able to fix this issue by changing LC_CTYPE from "UTF-8" to "en_US.UTF-8". Check the "locale" command output:
# locale
LANG=en_US.UTF-8
LC_CTYPE=en_US.UTF-8
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
See what your LC_CTYPE reads.
If it does not read "en_US.UTF-8", do the following:
vi ~/.bash_profile
Add the following two lines at the bottom:
+++
LC_CTYPE=en_US.UTF-8
export LC_CTYPE
+++
Save the file, and source it for it to take effect:
# source ~/.bash_profile
Now connect to beeline and see if the data shows up correctly.
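The steps above can be wrapped in a small check before reconnecting (a sketch; it only inspects the current shell session, so run it in the same shell you will start beeline from):

```shell
# Read the session's LC_CTYPE from `locale` output, stripping any quotes.
current=$(locale | awk -F= '/^LC_CTYPE/ {gsub(/"/,""); print $2; exit}')

if [ "$current" = "en_US.UTF-8" ]; then
  echo "LC_CTYPE looks good"
else
  echo "LC_CTYPE is '${current:-unset}'; export LC_CTYPE=en_US.UTF-8 and re-source ~/.bash_profile"
fi
```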