Member since: 03-06-2020
Posts: 406
Kudos Received: 56
Solutions: 37
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1099 | 08-29-2025 12:27 AM |
| | 1634 | 11-21-2024 10:40 PM |
| | 1549 | 11-21-2024 10:12 PM |
| | 5290 | 07-23-2024 10:52 PM |
| | 3022 | 05-16-2024 12:27 AM |
10-25-2022
04:37 AM
1 Kudo
Hi,

There is a KB article related to this issue; please review it below:
https://community.cloudera.com/t5/Customer/Permission-denied-when-accessing-Hive-tables-from-Spark-in/ta-p/310498

Regards,
Chethan YM
10-25-2022
04:31 AM
Hi,

The alert message alone does not give us much to investigate. Please review the HMS and Cloudera Manager Service Monitor logs related to this issue and provide the stack traces.

Regards,
Chethan YM
09-24-2022
10:12 PM
Hi,

There is a UDF for surrogate keys (SK) in Hive. Have you tried it? Is it working for your case?
https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/using-hiveql/topics/hive_surrogate_keys.html

Regards,
Chethan YM
09-16-2022
03:16 AM
Hi,

Below are the suspected causes for this issue:
https://issues.apache.org/jira/browse/YARN-3055
https://issues.apache.org/jira/browse/YARN-2964

Yes, you can set that parameter at the workflow level and test.

Regards,
Chethan YM
09-15-2022
02:53 AM
Hi @coco,

If you are running the job from Hue, can you follow the steps below:
1. Log in to Hue.
2. Go to Workflows -> Editors -> Workflows.
3. Open the workflow to edit it.
4. On the left-hand pane, click 'Properties'.
5. Under the section 'Hadoop Job Properties', enter 'mapreduce.job.complete.cancel.delegation.tokens' in the Name box and 'true' in the Value box.
6. Save the workflow and submit it.

If you are running from the terminal, add the above property in the configuration section, then rerun the workflow and see if it helps.

If this works, please accept it as a solution.

Regards,
Chethan YM
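For the terminal route, here is a minimal sketch of what that property would look like inside the action's `<configuration>` element in workflow.xml. The property name comes from the steps above; the surrounding placement is an assumption based on standard Oozie action configuration, not taken from the original post:

```xml
<!-- Illustrative fragment only: goes inside the action's <configuration> element -->
<property>
  <name>mapreduce.job.complete.cancel.delegation.tokens</name>
  <value>true</value>
</property>
```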
09-07-2022
11:08 PM
1 Kudo
Hi @gocham,

In CDP 7.1.7, Capacity Scheduler is the default and only supported scheduler; Fair Scheduler is not supported. You must transition from Fair Scheduler to Capacity Scheduler when upgrading your cluster to CDP Private Cloud Base.

This is the related Jira from Cloudera: CLR-106983.

Note: If I answered your question, please give a thumbs up and accept it as a solution.

Regards,
Chethan YM
09-07-2022
07:18 AM
1 Kudo
Hi @Ramesh_hdp,

I do not think there is an option to check how many users are logged in to Hive, but we can check how many connections have been made to HiveServer2.

> You can refer to the article below:
https://community.cloudera.com/t5/Support-Questions/How-to-get-number-of-live-connections-with-HiveServer2-HDP-2/td-p/106284

> If you enable the HiveServer2 UI, then under "Active Sessions" you can see which user connected from which IP.

Regards,
Chethan YM
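As a rough sketch of the connection-counting idea from the linked article: you can count established TCP connections to the HiveServer2 port (10000 by default for the binary transport; adjust if your cluster differs) from `netstat -an`-style output. The sample lines below are invented for illustration:

```python
# Sketch: count established client connections to HiveServer2 from
# netstat-style output. SAMPLE_NETSTAT is made-up illustrative data.
SAMPLE_NETSTAT = """\
tcp  0  0 10.0.0.5:10000  10.0.0.21:49152  ESTABLISHED
tcp  0  0 10.0.0.5:10000  10.0.0.22:49153  ESTABLISHED
tcp  0  0 10.0.0.5:22     10.0.0.9:50412   ESTABLISHED
tcp  0  0 10.0.0.5:10000  0.0.0.0:*        LISTEN
"""

def count_hs2_connections(netstat_output, port=10000):
    """Count ESTABLISHED connections whose local address ends in :<port>."""
    count = 0
    for line in netstat_output.splitlines():
        fields = line.split()
        # fields[3] is the local address, fields[5] the connection state
        if len(fields) >= 6 and fields[3].endswith(f":{port}") and fields[5] == "ESTABLISHED":
            count += 1
    return count

print(count_hs2_connections(SAMPLE_NETSTAT))  # → 2
```

On a real host you would feed this the output of `netstat -an` (or use `ss -tn`); the LISTEN socket and unrelated ports are excluded, so only live client sessions are counted.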
09-05-2022
05:05 AM
1 Kudo
Hi @Iga21207,

Here is how it works in catalogd: when you run REFRESH commands, they are executed sequentially; only once one completes does the next one start. They do not run in parallel, because this part of catalogd is a single-threaded operation. The catalogd thread takes a lock in getCatalogObjects(). So while a refresh is still in progress and a new request comes in, the catalog throws the error on that table because it cannot acquire the lock; the lock is still held for the table on which the previous REFRESH command was running.

I am not sure of your CDH version; this may be resolved in a higher version of CDP/CDH.

Note: If I answered your question, please give a thumbs up and accept it as a solution.

Regards,
Chethan YM
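The behaviour described above can be sketched with a simple lock model. This is not catalogd's actual code, just an illustration of the pattern: a single lock serializes refreshes, and a request that arrives while the lock is held errors out instead of queueing:

```python
import threading

# Sketch of the serialization pattern described above (not real catalogd code):
# one lock guards refresh work, so a concurrent request fails to acquire it.
catalog_lock = threading.Lock()

def refresh_table(name):
    # blocking=False mimics rejecting the request rather than waiting in line
    if not catalog_lock.acquire(blocking=False):
        return f"ERROR: could not acquire lock for {name}"
    try:
        return f"refreshed {name}"
    finally:
        catalog_lock.release()

print(refresh_table("t1"))   # succeeds: lock was free
catalog_lock.acquire()       # simulate an in-flight refresh holding the lock
print(refresh_table("t2"))   # fails: lock already held by the earlier refresh
catalog_lock.release()
```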
09-05-2022
04:49 AM
Hi,

> "Running the workflow for more than 7 days" means, does it run continuously for the entire 7 days and then fail? Can you provide the script you are running?
> Does the shell script work outside of Oozie without issues?
> Please provide the complete error stack trace you are seeing.

Regards,
Chethan YM
08-29-2022
05:21 AM
2 Kudos
Hi,

After you run the query, you need to look at the query profile to analyse the overall memory usage. Look for "Per Node Peak Memory Usage" in the profile to understand how much memory each host (Impala daemon) used to run this query.

From the snippet on your side, it looks like this query has a 3 GB memory limit; this can be set at the session level or in the Impala admission control pool. If you provide the complete query profile, I think we can get more details.

Regards,
Chethan YM
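To illustrate what to look for, here is a small sketch that pulls the per-host peak memory line out of profile text. The sample profile string and its exact formatting are invented for illustration; real profiles come from the Impala web UI or impala-shell, and their layout varies by version:

```python
import re

# Sketch: extract per-host peak memory from Impala-profile-like text.
# SAMPLE_PROFILE is made-up illustrative data, not a real profile.
SAMPLE_PROFILE = """\
Query Options (set by configuration): MEM_LIMIT=3221225472
Per Node Peak Memory Usage: host1:22000(2.75 GB) host2:22000(2.10 GB)
"""

def peak_memory_per_host(profile_text):
    """Return {host: peak_memory_string} parsed from the profile line."""
    match = re.search(r"Per Node Peak Memory Usage: (.+)", profile_text)
    if not match:
        return {}
    # Each entry looks like host:port(memory)
    return dict(re.findall(r"(\S+?)\((.+?)\)", match.group(1)))

print(peak_memory_per_host(SAMPLE_PROFILE))
# → {'host1:22000': '2.75 GB', 'host2:22000': '2.10 GB'}
```

Comparing the largest per-host peak against the 3 GB limit shows which daemon is closest to spilling or failing the query.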