Member since
11-11-2019
622
Posts
33
Kudos Received
25
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1636 | 02-28-2023 09:32 PM |
|  | 2788 | 02-27-2023 03:33 AM |
|  | 25284 | 12-24-2022 05:56 AM |
|  | 2217 | 12-05-2022 06:17 AM |
|  | 5704 | 11-25-2022 07:37 AM |
02-19-2025
09:43 PM
@Mamun_Shaheed Please check the JDK version; this option is available from Java 13 onward. This looks like an infrastructure issue, not one related to the driver.
02-13-2025
06:18 AM
@Mamun_Shaheed Could you please try setting -Dsun.security.jgss.lib="C:\Program Files\Java\jdk-13.0.2\bin\sspi_bridge.dll"? Please refer to https://docs.cloudera.com/documentation/other/connectors/hive-jdbc/2-6-25/Cloudera-JDBC-Connector-for-Apache-Hive-Install-Guide.pdf
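As a rough sketch, the option can be passed on the JVM command line that launches your JDBC client (the JDK path and the application/jar names here are placeholders; adjust them to your installation):

```
java -Dsun.security.jgss.lib="C:\Program Files\Java\jdk-13.0.2\bin\sspi_bridge.dll" ^
     -cp "HiveJDBC42.jar;." YourJdbcApp
```

`YourJdbcApp` is a hypothetical client class; if you use a GUI tool such as DBeaver, the same `-D` option usually goes into the tool's JVM options instead.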
02-05-2025
11:20 PM
@Mamun_Shaheed Please check the following on your Windows machine: 1. Is krb5.conf the same as on your CDP cluster? 2. Are you able to get a Kerberos ticket on the Windows machine?
02-05-2025
06:59 AM
@Mamun_Shaheed Make sure you use the Cloudera JDBC driver and follow https://community.cloudera.com/t5/Community-Articles/How-to-Connect-to-Hiveserver2-Using-Cloudera-JDBC-driver/ta-p/376336 as per your configuration. Create the connection string based on the article and try.
09-05-2024
04:27 AM
1 Kudo
Hi @denysobukhov, is your cluster SSL- and LDAP-enabled? Are you able to connect from Beeline? Please review https://community.cloudera.com/t5/Community-Articles/How-to-Connect-to-Hiveserver2-Using-Cloudera-JDBC-driver/ta-p/376336 and adjust it as per your usage.
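For an SSL- and LDAP-enabled cluster, a Cloudera Hive JDBC connection string looks roughly like the sketch below (hostname, port, credentials, and truststore path are placeholders; `AuthMech=3` selects username/password authentication and `SSL=1` enables TLS in the Cloudera driver):

```
jdbc:hive2://hs2-host.example.com:10000;AuthMech=3;UID=ldap_user;PWD=ldap_password;SSL=1;SSLTrustStore=C:\certs\truststore.jks;SSLTrustStorePwd=changeit
```

Check the driver install guide linked above for the exact property names supported by your driver version.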
08-19-2024
06:43 AM
Hi @APentyala
1. Data Modeling Design: Which model is best suited for a Lakehouse implementation, star schema or snowflake schema? Ans: We don't have those designs, or we are not aware of them.
2. We are using CDP (Private) and need to implement updates and deletes (SCD Type 1 & 2). Are there any limitations with Hive external tables? Ans: There are no limitations for EXTERNAL tables. Are you using HDFS or Isilon for storage?
3. Are there any pre-built dimension models or ER models available for reference? Ans: We don't have anything as such.
08-14-2024
09:08 PM
2 Kudos
@APentyala Please find the answers below:
1. Which data modeling approach is recommended for this domain? Ans: If you have large data, we would recommend partitioning or multi-level partitioning. You could implement bucketing if the data inside a partition is large.
2. Are there any sample models available for reference? Ans: You could take a reference for partitioning and bucketing from https://www.linkedin.com/pulse/what-partitioning-vs-bucketing-apache-hive-shrivastava/ You could create a new table by performing CTAS with dynamic partitioning from the existing table. Reference: https://www.geeksforgeeks.org/overview-of-dynamic-partition-in-hive/
3. What best practices should we follow to ensure data integrity and performance? Ans: Please follow the best practices below:
a. Partition and bucket the data.
b. You could use Iceberg tables, which significantly reduce the load on the Metastore, if you are using CDP Public Cloud or CDP Private Cloud (ECS/OpenShift).
c. Use ORC/Parquet.
d. Use EXTERNAL tables if you don't perform updates/deletes, as reading external tables is faster.
4. How can we efficiently manage large-scale data ingestion and processing? Ans: The model follows as: Kafka/Spark Streaming for ingestion, Spark for data modelling, Hive for warehousing where you extract the data. Please be specific on the use case.
5. Are there any specific challenges or pitfalls we should be aware of when implementing a lakehouse in this sector? Ans: There should be no challenges; we would request more briefing on this.
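The partitioning, bucketing, and dynamic-partition load steps above can be sketched in HiveQL as follows (table and column names are hypothetical):

```sql
-- Partition by a low-cardinality column, bucket by a high-cardinality key.
CREATE TABLE sales_part (
  order_id BIGINT,
  amount   DECIMAL(10,2)
)
PARTITIONED BY (sale_date STRING)
CLUSTERED BY (order_id) INTO 32 BUCKETS
STORED AS ORC;

-- Load from an existing table using dynamic partitioning.
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE sales_part PARTITION (sale_date)
SELECT order_id, amount, sale_date FROM sales_raw;
```

Note that the dynamic partition column (`sale_date`) must come last in the SELECT list.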
07-30-2024
08:19 AM
@Maicat You cannot typecast an array to a string. There are two approaches you can use: 1. Select the nth element of the array: SELECT level5[0] AS first_genre FROM my_table; where 0 is the first element. 2. Flatten it: SELECT column1 FROM my_table LATERAL VIEW explode(level5) genre_table AS level5;
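The two approaches above, written out as HiveQL (table and column names follow the example and are otherwise hypothetical):

```sql
-- Option 1: pick a single element by index (0 is the first element).
SELECT level5[0] AS first_genre
FROM my_table;

-- Option 2: flatten the array into one row per element.
SELECT genre
FROM my_table
LATERAL VIEW explode(level5) genre_table AS genre;
```

Option 2 multiplies rows: a row whose array holds three elements appears three times in the output, once per element.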
02-23-2024
07:48 AM
If you deactivate and reactivate with "Public Load Balancer" and "Public Executor", this should work.
02-12-2024
11:57 PM
1 Kudo
To use an internal load balancer for Cloudera Data Warehouse (CDW), you must select the option to enable an internal load balancer while activating the Azure environment from the CDW UI. Otherwise, CDW uses the Standard public load balancer that is enabled by default when you provision an AKS cluster. Before activating, you should remove the existing activation first. See https://docs.cloudera.com/data-warehouse/cloud/azure-environments/topics/dw-azure-enable-internal-aks-lb.html