Member since: 10-28-2020
Posts: 403
Kudos Received: 18
Solutions: 27
My Accepted Solutions
Title | Views | Posted
--- | --- | ---
 | 201 | 11-06-2023 11:58 AM
 | 453 | 10-20-2023 01:56 PM
 | 138 | 09-29-2023 08:59 PM
 | 361 | 08-31-2023 03:16 AM
 | 671 | 08-29-2023 03:56 AM
11-13-2023
10:55 AM
@jayes Please make sure you have set this property in "HiveServer2 Advanced Configuration Snippet (Safety Valve) for hive-site.xml" under the Hive on Tez configuration. I tried this and it works for me:
Beeline version 3.1.3000.7.1.7.2000-305 by Apache Hive
0: jdbc:hive2://c1649-node2.coelab.cloudera.c> set dfs.replication=1;
No rows affected (0.208 seconds)
0: jdbc:hive2://c1649-node2.coelab.cloudera.c> set hive.security.authorization.sqlstd.confwhitelist.append;
+----------------------------------------------------+
| set |
+----------------------------------------------------+
| hive.security.authorization.sqlstd.confwhitelist.append=mapred\..*|hive\..*|mapreduce\..*|spark\..*|dfs\..* |
+----------------------------------------------------+
1 row selected (0.109 seconds)
11-06-2023
11:58 AM
@jayes Hive reads this parameter's value from hdfs-site.xml, so you should consider setting it under the HDFS service instead. Nevertheless, if you want to set the parameter from the Hive/Beeline CLI, you need 'hive.security.authorization.sqlstd.confwhitelist.append' configured so that the parameter is whitelisted, e.g.:
<name>hive.security.authorization.sqlstd.confwhitelist.append</name>
<value>mapred\..*|hive\..*|mapreduce\..*|spark\..*|dfs\..*</value>
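For reference, a complete safety-valve entry would look something like the following sketch. The `<property>` wrapper is standard Hive configuration syntax; the value is the same regex shown above:

```xml
<property>
  <name>hive.security.authorization.sqlstd.confwhitelist.append</name>
  <value>mapred\..*|hive\..*|mapreduce\..*|spark\..*|dfs\..*</value>
</property>
```

After saving this and restarting HiveServer2, commands such as `set dfs.replication=1;` should be accepted in Beeline, as in the session above.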
10-20-2023
01:56 PM
@Kalpit Check whether you have any jars added to the Hive classpath via hive.aux.jars.path. Remove them and try again; it's possible the added jars are not compatible with the current version.
09-29-2023
08:59 PM
@Srinivas-M You may set these properties in a safety valve for core-site.xml: CM UI > HDFS > Configuration > Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml.
09-20-2023
08:18 AM
@PetiaLeshiy Adding to @asish's comment: since it is a struct column, the query could be written something like this:
SELECT * FROM TABLE_NAME LATERAL VIEW explode(struct_col_name.list_name) exploded_column AS xyz WHERE xyz IS NOT NULL;
Make changes where required.
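To make that concrete, here is a small self-contained sketch. The table name `orders` and the column names `details` and `items` are invented for illustration; the pattern mirrors the query above, exploding an array held inside a struct column:

```sql
-- Hypothetical table: a struct column containing an array
CREATE TABLE orders (
  id INT,
  details STRUCT<items: ARRAY<STRING>>
);

-- Explode the array inside the struct; rows whose exploded
-- value is NULL are filtered out by the WHERE clause
SELECT o.id, item
FROM orders o
LATERAL VIEW explode(o.details.items) exploded_items AS item
WHERE item IS NOT NULL;
```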
08-31-2023
03:16 AM
We tried replicating the issue with the data shared by @Shivakuk. Left/right single and double quotation marks (smart quotes) in the text did not show up correctly and were converted to "?". I was able to fix this by changing LC_CTYPE from "UTF-8" to "en_US.UTF-8". Check the output of the "locale" command:
# locale
LANG=en_US.UTF-8
LC_CTYPE=en_US.UTF-8
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
Check what your LC_CTYPE reads.
If it does not read "en_US.UTF-8", do the following:
vi ~/.bash_profile
Add the following two lines at the bottom:
+++
LC_CTYPE=en_US.UTF-8
export LC_CTYPE
+++
Save the file, and source it for the change to take effect:
# source ~/.bash_profile
Now connect to Beeline and check whether the data shows up correctly.
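The steps above can be consolidated into a quick shell sketch. It assumes a Bash login shell and that "en_US.UTF-8" is the locale you want; adjust the name if your system uses a different one:

```shell
# Check what the character-type locale currently reads
locale | grep LC_CTYPE

# Persist the override in ~/.bash_profile
cat >> ~/.bash_profile <<'EOF'
LC_CTYPE=en_US.UTF-8
export LC_CTYPE
EOF

# Reload the profile so the change takes effect in this session
. ~/.bash_profile

# Confirm the new value
echo "$LC_CTYPE"
```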
08-29-2023
03:56 AM
1 Kudo
@AndreaCavenago For that, you will have to check whether the connection is being interrupted or closed between the client and HiveServer2. Without thorough log analysis, it is difficult to answer; could you open a support case for this?
08-22-2023
03:03 AM
@Kaher Could you find out what value you have set for "tez.staging-dir"? If it's not set, the default path is /tmp/${user.name}/staging. Verify whether there is any issue with the /tmp filesystem. Also, a dash was missing in the following value; it should read:
<name>tez.am.java.opts</name>
<value>-Xmx2024m</value>
08-21-2023
03:08 AM
@itdm_bmi We are still seeing the error "java.lang.ClassNotFoundException: Class org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe". This means you have not uploaded the "hive-contrib" jar to the Hive classpath as mentioned earlier. If you are on CDP 7.1.5 or later, you do not have to upload the hive-contrib jar to the classpath: this SerDe class is already included in Hive's native SerDe list. You just need to alter the original table and change the SerDe (note: this only applies to CDP 7.1.5 and later):
ALTER TABLE <table name> SET SERDE 'org.apache.hadoop.hive.serde2.MultiDelimitSerDe'
If you are on an older version, consider adding the hive-contrib jar to the Hive classpath by uploading it to the aux jars path location.
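For anyone creating such a table from scratch on CDP 7.1.5 or later, a minimal sketch looks like this. The table and column names are invented, and the delimiter "^|^" is just an example multi-character separator passed via the SerDe's field.delim property:

```sql
-- Hypothetical table using the native MultiDelimitSerDe
CREATE TABLE multi_delim_demo (
  id INT,
  name STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.MultiDelimitSerDe'
WITH SERDEPROPERTIES ("field.delim" = "^|^")
STORED AS TEXTFILE;
```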
08-16-2023
01:06 PM
@AndreaCavenago Does this error appear every time you run this spark-submit command? Since this is only a warning message with no real impact, you can suppress it by raising the log level. In the script.py file, after the SparkContext has been created, add the following (note that setLogLevel must be called on a SparkContext instance, not on the class itself):
from pyspark import SparkContext
sc = SparkContext.getOrCreate()
sc.setLogLevel("ERROR")
This suppresses the WARN messages, but it would still be good to address the underlying issue.