Member since: 09-16-2021
Posts: 420
Kudos Received: 54
Solutions: 37

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 538 | 07-15-2025 02:22 AM |
 | 1053 | 06-02-2025 06:55 AM |
 | 1331 | 05-22-2025 03:00 AM |
 | 745 | 05-19-2025 03:02 AM |
 | 591 | 04-23-2025 12:46 AM |
09-05-2025 07:19 AM
Use the following command: CREATE EXTERNAL TABLE ice_t (i int, s string) STORED BY ICEBERG;
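For completeness, a quick sketch of using the table once it is created; the sample rows below are illustrative only:

```sql
-- Iceberg tables created this way can be written to and queried
-- like any other Hive table:
INSERT INTO ice_t VALUES (1, 'a'), (2, 'b');
SELECT i, s FROM ice_t;
```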
08-19-2025 10:33 PM
Is the source table a JdbcStorageHandler table? Please provide the DDL of the source table, the query used, and any sample data if possible; this will help us understand the problem better. Also, capture the output of set -v and check configurations such as hive.tez.container.size.
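For reference, a minimal sketch of inspecting those settings from a Beeline session; hive.tez.container.size is just one of the properties worth checking:

```sql
-- Dump every effective configuration value (verbose, can be large):
SET -v;

-- Or print a single property directly:
SET hive.tez.container.size;
```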
07-16-2025 06:52 AM
The fix for HIVE-24552 has been back-ported and is available in CDP 7.1.7 and later versions.
07-15-2025 02:22 AM
https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common/3.1.1.3.1.5.0-152
07-15-2025 12:19 AM
In Hive, __HIVE_DEFAULT_PARTITION__ is a special value used internally to represent NULL or empty strings in partition column values. Since it is just a string literal in the metadata, you can match it like any other string in a query. Example: SELECT * FROM your_table WHERE data_dt = '__HIVE_DEFAULT_PARTITION__'; Note that WHERE data_dt IS NULL will not work, because NULLs are replaced with __HIVE_DEFAULT_PARTITION__ before being written. Likewise, WHERE length(data_dt) = 26 won't match anything, because predicates on partition columns are evaluated differently from predicates on regular columns.
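A small end-to-end sketch of the behavior described above, using a throwaway table (demo_part and data_dt are placeholder names):

```sql
-- Allow a dynamic-partition insert without a static partition key:
SET hive.exec.dynamic.partition.mode=nonstrict;

CREATE TABLE demo_part (id INT) PARTITIONED BY (data_dt STRING);

-- The NULL partition value lands in the __HIVE_DEFAULT_PARTITION__ partition:
INSERT INTO demo_part PARTITION (data_dt) VALUES (1, NULL);

-- Returns the row above; WHERE data_dt IS NULL would return nothing:
SELECT * FROM demo_part WHERE data_dt = '__HIVE_DEFAULT_PARTITION__';
```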
07-15-2025 12:10 AM
This appears to be a new issue. To help us investigate further, could you please provide the steps to reproduce the problem?
07-02-2025 12:48 AM
Please connect using the Beeline client included with the CDP parcel. Then, check the HiveServer2 (HS2) logs to determine the root cause. Sharing the complete stack trace will also help us identify the issue.
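For reference, a minimal sketch of connecting with the bundled Beeline client; the host and port below are placeholders for your environment:

```
# Launch the client shipped with the CDP parcel from the shell:
beeline

# Then connect from the Beeline prompt (host/port are placeholders):
!connect jdbc:hive2://hs2-host.example.com:10000/default
```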
06-24-2025 06:26 AM
Please share the Maven coordinates (groupId and artifactId) of the dependency. On a related note, I recommend beginning a transition from the current HDP dependencies to the latest versions available in CDP. The HDP dependencies are no longer maintained, and migrating to CDP will improve the application's stability and supportability.
06-23-2025 04:11 AM
Most likely it will be under the hortonworks repo. Look for the particular dependency there, for example hadoop-common.
06-02-2025 06:55 AM
It does look like the query failed with a ClassCastException (org.apache.hadoop.hive.serde2.io.HiveDecimalWritable cannot be cast to org.apache.hadoop.io.LongWritable), which indicates a mismatch between the data type Hive expects and the data type it actually encounters while processing the query. From the error trace, Hive is reading a value as a DECIMAL (HiveDecimalWritable) while the metadata declares it as a BIGINT (LongWritable).

One possible reason is a schema mismatch: the Hive table schema defines a column as BIGINT, but the underlying data files (e.g., Parquet, ORC) actually contain DECIMAL values for that column.

To validate, run DESCRIBE FORMATTED <your_table_name>; for the table involved in the failing query and pay close attention to the data types of all columns, especially those that might be involved in the conversion. Then compare the Hive schema types with the actual types in your source data files: for Parquet, inspect the file schema with parquet-tools; for ORC, use hive --orcfiledump. Also make sure the SerDe points to a valid underlying file format.
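As a concrete sketch of the validation and one possible fix, assuming a hypothetical table your_table with a mismatched column amount (both names are placeholders):

```sql
-- Inspect the Hive-side schema and compare it with the file schema:
DESCRIBE FORMATTED your_table;

-- If the data files are the source of truth and the column was wrongly
-- declared BIGINT, align the Hive metadata (precision/scale illustrative):
ALTER TABLE your_table CHANGE COLUMN amount amount DECIMAL(18,2);
```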