Member since: 03-06-2020
Posts: 398
Kudos Received: 54
Solutions: 35
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 148 | 11-21-2024 10:12 PM |
| | 1003 | 07-23-2024 10:52 PM |
| | 1143 | 05-16-2024 12:27 AM |
| | 3248 | 05-01-2024 04:50 AM |
| | 1416 | 03-19-2024 09:23 AM |
11-18-2022
04:43 AM
1 Kudo
Hi,

This looks like the known unresolved issue OPSAPS-60161. Can you disable the canary health check as a workaround?

https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/cm_ht_hive_metastore_server.html#concept_6qo_fpn_yk

Regards,
Chethan YM
11-02-2022
07:28 AM
Hi,

- As per the document it is service downtime, so I think it is complete Impala service downtime. (However, I have not seen the issue occur live.)
- There are no metrics/graphs to check "inc_stats_size".
- If 1 GB is insufficient, try using "COMPUTE STATS" instead of "COMPUTE INCREMENTAL STATS"; see the sketch below.

Regards,
Chethan YM
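A minimal sketch of the difference (the table name `web_logs` and the partition spec are made up for illustration):

```sql
-- Incremental stats keep per-partition state in the table metadata, which is
-- what can outgrow inc_stats_size_limit_bytes on tables with many partitions:
COMPUTE INCREMENTAL STATS web_logs PARTITION (year = 2022, month = 11);

-- Full stats recompute everything but do not carry the per-partition
-- incremental-stats metadata, so they avoid this limit:
COMPUTE STATS web_logs;
```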
10-25-2022
04:47 AM
1 Kudo
Hi @yassan ,

I would like to let you know that the default value of the flag (inc_stats_size_limit_bytes) is 200 MB, which acts as a safety check to prevent Impala from hitting the maximum limit for table metadata. The reported error usually indicates that 'COMPUTE INCREMENTAL STATS' should not be used on that particular table; consider splitting the table and using the regular 'COMPUTE STATS' statement if possible.

However, if you are not able to use 'COMPUTE STATS', you can try increasing the default limit on the flag (inc_stats_size_limit_bytes). It should be set to less than 1 GB, and the value is measured in bytes. Below are the steps:

1. CM > Impala Service > Configuration > search for "Impala Command Line Argument Advanced Configuration Snippet (Safety Valve)".
2. Add --inc_stats_size_limit_bytes=<value>. Please note that the value is in bytes; for example, to set 400 MB, enter 419430400 (400 * 1024 * 1024).
3. Save and restart the Impala service.

Note: If I answered your question, please give a thumbs up and accept it as a solution.

Regards,
Chethan YM
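Once the limit has been raised and Impala restarted, a quick way to check the result (the table name `web_logs` is hypothetical):

```sql
-- Re-run the statement that previously failed against the old limit:
COMPUTE INCREMENTAL STATS web_logs;

-- The #Rows and 'Incremental stats' columns show whether stats are in place:
SHOW TABLE STATS web_logs;
```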
10-25-2022
04:37 AM
1 Kudo
Hi,

There is a KB article related to this issue; please review it below:

https://community.cloudera.com/t5/Customer/Permission-denied-when-accessing-Hive-tables-from-Spark-in/ta-p/310498

Regards,
Chethan YM
10-25-2022
04:31 AM
Hi,

The alert message does not give us much information to check. Please review the HMS and CM Service Monitor logs around the time of the issue and provide the stack traces.

Regards,
Chethan YM
10-25-2022
04:26 AM
Hi,

As per the source below, PauseTransitRunnable is the runnable that is scheduled to run at the configured interval; it checks all bundles to see if they should be paused, un-paused, or started.

https://github.com/apache/oozie/blob/master/core/src/main/java/org/apache/oozie/service/PauseTransitService.java#:~:text=PauseTransitRunnable%20is%20the%20runnable%20which%20is%20scheduled%20to%20run%20at%20the%20configured%20interval%2C%20it%20checks%20all%20bundles

It also releases the lock. Are you running this Sqoop import from Oozie? If yes, try rerunning it outside of Oozie and check whether it still gets stuck, and review the corresponding NM and RM logs for anything interrupting it.

Regards,
Chethan YM
09-24-2022
10:12 PM
Hi,

There seems to be a UDF for surrogate keys (SURROGATE_KEY) in Hive. Have you tried it? Is it working?

https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/using-hiveql/topics/hive_surrogate_keys.html

Regards,
Chethan YM
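A minimal sketch of how that UDF is typically used, assuming a Hive 3 ACID table (the table and column names here are made up; see the linked doc for the supported syntax):

```sql
-- SURROGATE_KEY() as a column default generates a unique BIGINT per row.
CREATE TABLE students (
  row_id BIGINT DEFAULT SURROGATE_KEY(),
  name   VARCHAR(64),
  dorm   INT
)
STORED AS ORC
TBLPROPERTIES ('transactional' = 'true');

-- Leave row_id out of the insert; Hive fills it in from the UDF.
INSERT INTO students (name, dorm) VALUES ('fred flintstone', 100);
```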
09-24-2022
10:03 PM
1 Kudo
Hi,

It looks like it is waiting on the data insert; it may finish after a few minutes. Did it work, or does it still hang for hours? Creating a table from an uploaded CSV file usually takes more time.

Regards,
Chethan YM
09-16-2022
03:16 AM
Hi,

Below are the suspected causes for this issue:

https://issues.apache.org/jira/browse/YARN-3055
https://issues.apache.org/jira/browse/YARN-2964

Yes, you can set that parameter at the workflow level and test.

Regards,
Chethan YM
09-15-2022
02:53 AM
Hi @coco ,

If you are running the job from Hue, can you follow the steps below?

1. Log in to Hue.
2. Go to Workflows -> Editors -> Workflows.
3. Open the workflow to edit.
4. On the left-hand pane, click 'Properties'.
5. Under the 'Hadoop Job Properties' section, enter 'mapreduce.job.complete.cancel.delegation.tokens' in the Name box and 'true' in the Value box.
6. Save the workflow and submit.

If you are running it from the terminal, add the above property in the configuration section (see the sketch below), then rerun the workflow and see if it helps.

If this works, please accept it as a solution.

Regards,
Chethan YM
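A minimal sketch of what that property would look like in the workflow's XML configuration block (the surrounding workflow and action elements are assumed, not shown):

```xml
<configuration>
  <!-- Same name/value pair as entered in the Hue 'Hadoop Job Properties' pane above. -->
  <property>
    <name>mapreduce.job.complete.cancel.delegation.tokens</name>
    <value>true</value>
  </property>
</configuration>
```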