Member since: 09-16-2021
Posts: 305
Kudos Received: 43
Solutions: 22

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 227 | 10-25-2024 05:02 AM |
| | 1250 | 09-10-2024 07:50 AM |
| | 553 | 09-04-2024 05:35 AM |
| | 1416 | 08-28-2024 12:40 AM |
| | 1011 | 02-09-2024 04:31 AM |
11-07-2024
08:36 AM
To disable Kerberos in Cloudera, note that there is no direct option to turn it off entirely once it has been enabled, because Kerberos is typically applied cluster-wide. Disabling Kerberos after enablement is generally not recommended due to the security implications.
10-29-2024
12:42 AM
1 Kudo
To access Kafka topics in a security-enabled cluster, please follow the steps outlined below.

1. Obtain a Kerberos ticket for the appropriate user:
kinit -kt <keytab> <principal>

2. Create a jaas.conf file:
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true;
};

3. Point the KAFKA_OPTS environment variable at the jaas.conf file. Make sure to provide the fully qualified path of the jaas.conf file:
export KAFKA_OPTS="-Djava.security.auth.login.config=/root/jaas.conf"

4. Create client.properties as per your cluster configuration:
security.protocol=SASL_SSL
ssl.truststore.location=<truststore location>
ssl.truststore.password=<truststore password>
sasl.kerberos.service.name=kafka

5. Use the kafka-topics utility to list the topics in the cluster:
kafka-topics --bootstrap-server <broker:port> --list --command-config client.properties
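As a quick follow-up, the same jaas.conf and client.properties can be reused with the other Kafka command-line tools. A minimal sketch, assuming a hypothetical topic named test-topic and the same file paths as above:

# Reuse the JAAS configuration exported earlier
export KAFKA_OPTS="-Djava.security.auth.login.config=/root/jaas.conf"

# Consume from a hypothetical topic using the same client.properties
kafka-console-consumer --bootstrap-server <broker:port> --topic test-topic --from-beginning --consumer.config client.properties

# Describe the same hypothetical topic with kafka-topics
kafka-topics --bootstrap-server <broker:port> --describe --topic test-topic --command-config client.properties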
10-25-2024
05:10 AM
1 Kudo
From the error below, it does look like the Tez session itself was not initialized. Validate the configuration from set -v, make sure everything is fine, and try re-running the query.

org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown. Application application_1726980746968_0077 failed 1 times (global limit =2; local limit is =1) due to AM Container for appattempt_1726980746968_0077_000001 exited with exitCode: -1000
Failing this attempt. Diagnostics: [2024-10-13 10:30:27.706] Application application_1726980746968_0077 initialization failed (exitCode=255) with output: main : command provided 0

If you are not able to identify the incorrect configuration, raise a support case for the same.
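For reference, a minimal sketch of collecting the configuration dump from Beeline, assuming a hypothetical HiveServer2 JDBC URL (replace it with your cluster's connection string):

# Dump the effective session configuration to a file (JDBC URL is a placeholder)
beeline -u "jdbc:hive2://<hs2-host>:10000/default;principal=hive/_HOST@<REALM>" -e "set -v" > hive_set_v.txt

# Review the Tez-related settings in the dump
grep -i "tez" hive_set_v.txt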
10-25-2024
05:02 AM
1 Kudo
The error looks similar to HIVE-27778. Could you try the workaround given in the KB article?
10-10-2024
04:28 PM
1 Kudo
@IanWilloughby If you are still experiencing the issue, can you provide the information @ggandharan has requested? Thanks.
10-07-2024
12:29 AM
1 Kudo
We recommend using CDW for running Hive on Kubernetes. Based on the description, it seems that you are currently using the apache-hive library. Upstream (Apache) images have already been pushed to Docker Hub, so you can use those. I have attached the relevant documents for your reference. https://hive.apache.org/development/quickstart/ https://docs.cloudera.com/data-warehouse/cloud/overview/topics/dw-service-architecture.html
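For illustration, a minimal sketch based on the upstream quickstart, assuming the apache/hive image with an assumed 4.0.0 tag (adjust the tag to the release you need):

# Run standalone HiveServer2 from the upstream Apache Hive image (tag assumed)
docker run -d -p 10000:10000 -p 10002:10002 --env SERVICE_NAME=hiveserver2 --name hive4 apache/hive:4.0.0

# Connect with Beeline once the container is up
docker exec -it hive4 beeline -u 'jdbc:hive2://localhost:10000/'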
09-18-2024
09:19 PM
1 Kudo
This solution worked for eliminating the error, but data is not being fetched from the table; an empty data frame is showing.
09-18-2024
01:22 AM
1 Kudo
@zhuodongLi, did the responses help resolve your query? If they did, kindly mark the relevant reply as the solution, as it will help others locate the answer more easily in the future.
09-11-2024
08:47 AM
@ggangadharan thanks for your reply. Yes, as soon as Spark sees the NUMBER data type in Oracle, it maps the DataFrame column to decimal(38,10); then, when a value in the Oracle column has a precision greater than 30, Spark cannot accommodate it, since decimal(38,10) allows at most 28 digits before the decimal point, hence this issue. As you said, the probable solution is to cast it as StringType.
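A minimal PySpark sketch of that string-cast workaround, assuming hypothetical connection details and a hypothetical column name HIGH_PRECISION_COL; the customSchema option of the JDBC reader is used so the column is read as a string instead of decimal(38,10):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("oracle-number-workaround").getOrCreate()

# Hypothetical JDBC connection details; replace with your own
df = (spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:@//<host>:1521/<service>")
      .option("dbtable", "MYSCHEMA.MYTABLE")
      .option("user", "<user>")
      .option("password", "<password>")
      .option("driver", "oracle.jdbc.OracleDriver")
      # Read the high-precision NUMBER column as a string instead of decimal(38,10)
      .option("customSchema", "HIGH_PRECISION_COL STRING")
      .load())

df.printSchema()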
09-05-2024
04:53 AM
1 Kudo
@Lorenzo The issue seems to be related to HIVE-27191, where some mhl_txnids do not exist in the TXNS, COMPLETED_TXN_COMPONENTS, or TXN_COMPONENTS tables but are still present in the MIN_HISTORY_LEVEL table. As a result, the cleaner gets blocked and many entries are stuck in the ready-for-cleaning state.

To confirm that, collect the output of the below query:

SELECT MHL_TXNID FROM HIVE.MIN_HISTORY_LEVEL WHERE MHL_MIN_OPEN_TXNID = (SELECT MIN(MHL_MIN_OPEN_TXNID) FROM HIVE.MIN_HISTORY_LEVEL);

Once we get the output of the above query, check whether those txn ids are present in the TXNS, COMPLETED_TXN_COMPONENTS, and TXN_COMPONENTS tables using the below commands:

select * from txn_components where tc_txnid IN (MHL_TXNID);
select * from completed_txn_components where ctc_txnid IN (MHL_TXNID);
select * from TXNS where txn_id IN (MHL_TXNID);

If we get 0 results from the above queries, this confirms that the MHL_TXNIDs we got above are orphans, and we need to remove them in order to unblock the cleaner:

delete from MIN_HISTORY_LEVEL where MHL_TXNID=13422; --(repeat for all)

Hope this helps you in resolving the issue.
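For illustration only, a hedged sketch of removing all orphan ids in one statement instead of repeating the delete per id. Take a backup of the metastore database first; the column names below are assumed from the standard Hive metastore schema and should be verified against your backend database:

-- Sketch only: assumes standard metastore column names (TXN_ID, TC_TXNID, CTC_TXNID)
DELETE FROM MIN_HISTORY_LEVEL
WHERE MHL_TXNID NOT IN (SELECT TXN_ID FROM TXNS)
  AND MHL_TXNID NOT IN (SELECT TC_TXNID FROM TXN_COMPONENTS)
  AND MHL_TXNID NOT IN (SELECT CTC_TXNID FROM COMPLETED_TXN_COMPONENTS);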