Member since: 01-16-2018
Posts: 541
Kudos Received: 33
Solutions: 82
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 114 | 01-18-2023 12:10 AM |
|  | 75 | 01-16-2023 01:54 AM |
|  | 171 | 01-13-2023 01:59 AM |
|  | 163 | 01-13-2023 01:35 AM |
|  | 100 | 01-02-2023 10:03 PM |
12-02-2022
01:12 AM
Hello @sekhar1 Thanks for using Cloudera Community. Generally, such Exceptions are received if traffic from your machine isn't allowed to or from the Security Group linked with the VPC wherein the CML Workspace instances are deployed. Check whether the Kubernetes Pods associated with the CML Workspace Kubernetes Cluster are Up/Running. If yes, such an Exception should be reviewed from a network standpoint only. You may reach out to your AWS/Platform team to review the traffic between your machine & the VPC within which the CML Workspace is deployed. Regards, Smarak
12-01-2022
02:39 AM
Hello @Nghia Thanks for using Cloudera Community. Based on the Post, the CDE Airflow UI isn't working for you. A few things we would request you to review further on the Post: Whether other UIs, like the Jobs page, open for you. Ensure you have performed [1], i.e. the "After You Finish" section, which is required for each CDE Virtual Cluster. Review the UI in Incognito Mode to rule out any caching concerns. Ensure the Pods running in "dex-app-jsx4msml" are running correctly. If all the above checks are done yet the issue persists, we would suggest engaging Support, as any further troubleshooting would require sharing logs over the public Community forum, which may contain customer details. We shall mark the Post as Resolved now. If you have any concerns, feel free to update the Post likewise. Regards, Smarak [1] https://docs.cloudera.com/data-engineering/1.3.4/manage-clusters/topics/cde-private-cloud-create-cluster.html
12-01-2022
02:25 AM
Hello @QiDam As stated by JD above, the CDE Service relies on the Ozone Service on the Base Cluster. If the Ozone Service isn't in a healthy state, CDE Service enablement would fail with a trace similar to the one you shared. We would recommend the following checks: Ensure the Ozone Service is up & running on the Base Cluster. Create a new Environment & check CDE Service enablement on the new Environment. If CDE Service enablement on the new Environment is successful, reattempt CDE Service enablement on the existing Environment. If the above suggestions don't help, we would suggest engaging Support, as any further troubleshooting would require sharing logs over the public Community forum, which may contain customer details. We shall mark the Post as Resolved now. If you have any concerns, feel free to update the Post likewise. Regards, Smarak
11-21-2022
02:24 AM
1 Kudo
Hello @sfdragonstorm & @pacman In the Region Name "img,0006943d-20150504220458043384375D00000002-00093,1527295748538.7b45a9f6f5584fc50b3152d41a5323a2.", the Table Name is "img", the StartKey is "0006943d-20150504220458043384375D00000002-00093", the Timestamp is "1527295748538" & "7b45a9f6f5584fc50b3152d41a5323a2" is the Region ID. Under the HBase data directory, each table directory has region-level directories identified by Region ID ("7b45a9f6f5584fc50b3152d41a5323a2" in the example). The Region ID is an MD5-encoded string of the Region Name, generated by HBase itself. Refer to [1] if your team wishes to review the same. Regards, Smarak [1] https://hbase.apache.org/apidocs/src-html/org/apache/hadoop/hbase/client/RegionInfo.html#line.164
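As an illustration, the components of a region name can be split out programmatically. A minimal sketch (my own helper, not an HBase API; it assumes the common `<table>,<start-key>,<timestamp>.<encoded-id>.` layout and that the start key contains no `.` after its last comma):

```python
def parse_region_name(region_name: str) -> dict:
    """Split an HBase region name of the form
    <table>,<start-key>,<timestamp>.<md5-encoded-region-id>.
    Assumes the start key contains no '.' after its last ','."""
    # Drop the trailing '.' and split off the MD5-encoded region ID.
    head, _, region_id = region_name.rstrip(".").rpartition(".")
    # Table names cannot contain commas, so the first ',' ends the table name;
    # the last ',' precedes the timestamp (start keys may contain commas).
    table, rest = head.split(",", 1)
    start_key, timestamp = rest.rsplit(",", 1)
    return {"table": table, "start_key": start_key,
            "timestamp": timestamp, "region_id": region_id}
```

Running it on the region name quoted above yields the same four components described in the Post.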
11-15-2022
09:15 PM
1 Kudo
Hello @dch44 Hope you are doing well. We believe your team has moved past the above error via a Cloudera Support engagement. As such, we would request your team to update the Post accordingly to ensure fellow Community users encountering the concerned issue are aware of the remediation steps. Regards, Smarak
11-15-2022
09:13 PM
Hello @balance002 Kindly let us know if the above Post dated 2022-11-08 helped your team identify the affected HFile(s) & take the necessary remediation to ensure the table's access is restored for users. Regards, Smarak
11-15-2022
09:11 PM
Hello @hanumanth Hope you are doing well. Kindly confirm if your ask has been addressed by our Post dated 2022-11-09. If yes, please mark the Post as Solved. If no, feel free to share any further queries/concerns & we shall respond accordingly. Regards, Smarak
11-09-2022
10:53 PM
Hello @hanumanth Thanks for using Cloudera Community. Based on the Post & screenshot shared, the CPU usage for the concerned timeframe (2022-10-27 11:30 PM - 2022-10-28 05:30 AM) was 2.2% as a mean, i.e. the average of the samples collected during the concerned period. Regards, Smarak
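For illustration, the reported figure is simply the arithmetic mean of the datapoints collected over the selected window. A minimal sketch with made-up sample values (the numbers below are hypothetical, not your actual Cloudera Manager samples):

```python
# Hypothetical CPU-usage samples (percent) collected over the window;
# the chart reports the arithmetic mean of such datapoints.
samples = [2.0, 2.5, 1.9, 2.4]

mean = sum(samples) / len(samples)
print(round(mean, 1))  # arithmetic mean of the samples
```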
11-09-2022
10:41 PM
Hello @TheFixer We hope your queries around the RegionServer log traces have been answered. As such, we shall mark the Post as Solved. If you have any further ask, feel free to update the Post & we shall get back to you accordingly. Regards, Smarak
11-09-2022
10:39 PM
Hello @Venkatd This is an old Post, & we have shared an Action Plan above: the most likely reason for the concerned failure is a missing "atlas.application.properties" from the Atlas configuration directory in the HBase configuration directory "/etc/hbase/conf", & setting "hbase.coprocessor.abortonerror" to "false" prevents a Co-Processor failure from causing a Master/RegionServer startup failure. As such, we shall mark the Post as Solved now. Thank you for using Cloudera Community. Regards, Smarak
11-09-2022
08:51 AM
1 Kudo
Hello @TouguiOmar Update acknowledged. You may wish to use Step (VI) from the above Action Plan & share the outcome. Regards, Smarak
11-08-2022
11:22 PM
Hello @Venkatd Thanks for using Cloudera Community. This is an older Post, yet I am sharing a review for fellow Community users. Based on the Post, the Service is affected owing to "org.apache.atlas.hbase.hook.HBaseAtlasCoprocessor" issues. A quick workaround is to set "hbase.coprocessor.abortonerror" to "false" so that a Co-Processor failure doesn't cause a Master/RegionServer startup failure. Typically, such issues are caused by a few factors, as shared below: 1. A missing "atlas.application.properties" from the Atlas configuration directory in the HBase configuration directory "/etc/hbase/conf". 2. The HBase-Atlas Hook being disabled. 3. "org.apache.atlas.hbase.hook.HBaseAtlasCoprocessor" being absent from the Co-Processor classes ("hbase.coprocessor.<master|region>.classes"). The missing "atlas.application.properties" file is the most likely cause. Regards, Smarak
11-08-2022
11:12 PM
Hello @sekhar1 Thanks for using Cloudera Community. The error received by you is caused by a missing entitlement; customers without the concerned entitlement receive this error. We are improving the product experience to ensure the error isn't displayed for customers missing the entitlement. In your case, or for anyone else receiving the error, kindly engage Cloudera Support to enable the entitlement & avoid the concerned error. Since enabling the entitlement requires a few customer-specific details, the Support engagement path is recommended. Regards, Smarak
11-08-2022
11:05 PM
Hello @balance002 The error "IllegalStateException: Invalid currTagsLen" likely indicates HFile corruption, & a possible factor is too many cell-level ACL tags being applied to a specific table. Unfortunately, such corruption requires additional review of the HFile metadata & is extremely rarely observed. As such, it's best to open a Support case with Cloudera if you are experiencing the concerned issue, to avoid sharing any HFile metadata (which includes customer data) over the Community. As a last resort, we need to use the HFile tool with the -p or -k flag to determine which HFile the command fails on. The -p flag will print all of the data scanned, while -k will only check the integrity of the row, but both will fail on the affected HFile, indicating which one has the affected cell: (1) Find the bad HFile with: hfile -k <HFile location> (2) Disable the table: disable '<table name>' (3) Move the HFile to a different location in HDFS: hdfs dfs -mv <HFile location> <tmp location> (4) Reload all of the data from that HFile, applying the tags to those rows so that they are not too large. Regards, Smarak
11-08-2022
10:55 PM
Hello @Kings We hope our Post dated 2022-10-10 has answered your question & shall mark the Post as Solved. If you have any further ask, feel free to update the Post & we shall get back to you accordingly. Thank you for using Cloudera Community. Regards, Smarak
11-08-2022
10:47 PM
Hello @mohamed_t We hope your question concerning SDX has been answered. We are marking the Post as Solved. If you have any further ask, feel free to update the Post & we shall get back to you accordingly. Regards, Smarak
11-08-2022
10:11 PM
1 Kudo
Hello @TouguiOmar, We hope your question has been addressed by our Post & are marking the Post as resolved. If not, feel free to update the Post & we shall get back to you accordingly. The Action Plan shared below: (I) Confirm the Infra-Solr Service is installed on the concerned Cluster; it should ideally be installed already. (II) Connect to the host wherein the Infra-Solr Service is present & fetch the Infra-Solr Service keytab. Next, list the Collections via (### solrctl collection --list) & list the Configs via (### solrctl instancedir --list). (III) Considering the error received, (### solrctl collection --list) wouldn't list any "ranger_audits" Collection. Yet, we expect the "ranger_audits" Config to be listed via (### solrctl instancedir --list). (IV) Assuming (### solrctl instancedir --list) shows "ranger_audits" & (### solrctl collection --list) doesn't list "ranger_audits", your team can create the Collection via (### solrctl collection --create ranger_audits -s 1 -r 1 -m 1 -c ranger_audits). Once performed, run (### solrctl collection --list), which should list the "ranger_audits" Collection. (V) Refresh the Ranger "Audits" UI & confirm the outcome. Your team may require a restart of the Ranger Admin Service after creating the "ranger_audits" Collection. (VI) If (### solrctl instancedir --list) doesn't list "ranger_audits" in Step (III), we need to upload the "ranger_audits" Config via a solrctl command. The "ranger_audits" ConfigSet can be found in the Ranger parcel directory (sample path "/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1029.28168653/lib/ranger-admin/contrib/solr_for_audit_setup/") & uploaded via (### solrctl config --upload ranger_audits <Path>). Once uploaded, (### solrctl instancedir --list) would list "ranger_audits", upon which your team can create the Collection via (### solrctl collection --create ranger_audits -s 1 -r 1 -m 1 -c ranger_audits) & confirm via (### solrctl collection --list). All solrctl commands are documented in [1] for your team's reference & should be run with the Solr keytab. Regards, Smarak [1] https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/search-solrctl-reference/topics/search-solrctl-ref.html
11-08-2022
09:57 PM
Hello @Sunilkumarks We hope your question concerning Grafana integration with CDP (Private Cloud & Public Cloud) has been answered. We shall mark the Post as resolved for now. Regards, Smarak
11-08-2022
09:56 PM
Hello @dch44 Thanks for engaging Cloudera Community. Based on the Post, Ranger & Infra-Solr aren't starting, failing with [1]. This error requires manual intervention in the Ambari DB "clusterconfigmapping" table to adjust the rows whose "type_name" is "kerberos-env" or "krb5-conf", setting the "selected" column to 1 for the latest "version_tag" in the concerned "clusterconfigmapping" table. Considering that changes to the Ambari DB should be performed with extreme caution, we would recommend engaging Cloudera Support to assist with this issue. Regards, Smarak [1] /usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 73, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'kerberos-env' was not found in configurations dictionary!
11-08-2022
01:39 AM
Hello @Sunilkumarks To add additional details: each Data Service (CDW, CML, CDE) offers a Grafana dashboard [1] & [2] on CDP Public Cloud as well as CDP Private Cloud. Similarly, CDP Public Cloud DataHub offers Grafana integration, as documented in [3]. In CDP Private Cloud, customers can integrate Grafana with the Cloudera Manager datastore, as documented in [4]. Having said that, feel free to check the amazing dashboards offered by Cloudera Manager by default. Additionally, you can build dashboards/time-series charts as per your requirements [5] as well. Regards, Smarak [1] https://docs.cloudera.com/data-engineering/cloud/troubleshooting/topics/cde-connecting-to-grafana-dashboards.html [2] https://docs.cloudera.com/data-warehouse/cloud/monitoring/topics/dw-grafana-monitoring.html [3] https://community.cloudera.com/t5/Community-Articles/Configuring-Grafana-for-Datahub-Clusters/ta-p/348869 [4] https://grafana.com/grafana/plugins/foursquare-clouderamanager-datasource/ [5] https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/monitoring-and-diagnostics/topics/cm-charting-time-series-data.html
11-08-2022
01:21 AM
Hello @TouguiOmar Kindly let us know if you have reviewed our Post dated 2022-11-07 & have any further queries around the Action Plan shared. Regards, Smarak
11-08-2022
01:21 AM
Hello @TheFixer Kindly let us know if you have reviewed our Post dated 2022-11-07 & have any further queries around the HBase RegionServer logging highlighted in the Post. Regards, Smarak
11-08-2022
01:20 AM
Hello @mohamed_t Kindly let us know if you have reviewed our Post dated 2022-11-07 & have any further queries around SDX. Regards, Smarak
11-07-2022
02:24 AM
Hello @mohamed_t Thanks for using Cloudera Community. SDX allows your team to operate within an Environment scope. An Environment is similar to a realm within which a customer operates; additional details on Environments are available here. Customers can create as many Environments as needed. Within an Environment, a customer can have any number of Clusters (traditional) & Experiences (Kubernetes-based), & these Clusters & Experiences use the same SDX. Check out the link for a tour of SDX. To your queries: SDX deals with security, metadata & governance without dealing with the actual customer data. Your queries deal with sharing data, which isn't SDX's role; rather, SDX decides who can access the data within the CDP Environment. Using Ranger, a customer can allow any CDP workload user to access any data using their workload password/access keys etc. All details around user management are covered in the link. Finally, I would strongly urge your team to review the CDP Product Tour &, if interested, start a conversation with Cloudera's amazing Sales/Field team. They can quickly understand your requirements & share details on how our product fits your use case. Regards, Smarak
11-07-2022
02:07 AM
Hello @TheFixer Thanks for using Cloudera Community. To your queries, please find the details below: (I) Concerning the log tracing below: each Column Family has its own MemStore, & HBase ensures a flush happens before an edit gets too old. A WAL can't be archived while corresponding entries remain un-flushed from the MemStore. These logs aren't a concern; this is the PeriodicMemstoreFlusher working as designed. MemstoreFlusherChore requesting flush of table 1. because K has an old edit so flush to free WALs after random delay 65889ms (II) Concerning the log tracing below: it indicates the flush was completed by "MemStoreFlusher.1" (the number of flusher threads is defined by "hbase.hstore.flusher.count") with a size of ~6.68 MB, & the sequenceid indicates the sequence ID associated with the last edit flushed. This sequence ID is compared with the edits in a WAL prior to archiving that WAL. 2022-10-29 12:21:46,170 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished flush of dataSize ~6.68 MB/7002393, heapSize ~19.57 MB/20517200, currentSize=0 B/0 for ad1bd353d9ae0c52e30a935c5d06ecfa in 335ms, sequenceid=25141679, compaction requested=true (III) Finally, the compaction queue indicates the size of the major (long) & minor (short) compaction queues. Compaction/Split Queue summary: compactionQueue=(longCompactions=111975:shortCompactions=35811), splitQueue=0 All of the above logging is HBase-internal & shouldn't be a concern unless you are observing an impact. Your team should review (if not done yet) the below links for details around the flush internals & compaction algorithms: (I) MemStore Flush Configuration (II) Conditions For MemStore Flush (III) Compaction Algorithm Kindly review & let us know if your team has any queries. Regards, Smarak
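If your team wants to track these flush events over time, the fields in the "Finished flush" line quoted above can be extracted with a small script. A minimal sketch (the regular expression is my own, keyed to the log line quoted in this Post, not an official HBase format specification):

```python
import re

# Pattern keyed to the sample "Finished flush" log line quoted above.
FLUSH_RE = re.compile(
    r"Finished flush of dataSize ~(?P<size>[\d.]+ \w+)/(?P<size_bytes>\d+)"
    r".* for (?P<region>\w+) in (?P<millis>\d+)ms, sequenceid=(?P<seq>\d+)"
)

def parse_flush_line(line: str):
    """Extract flushed size, region, duration & sequence ID from a flush line."""
    m = FLUSH_RE.search(line)
    if m is None:
        return None
    return {
        "size": m.group("size"),             # human-readable flushed size
        "size_bytes": int(m.group("size_bytes")),
        "region": m.group("region"),         # encoded region name
        "millis": int(m.group("millis")),    # flush duration
        "sequence_id": int(m.group("seq")),  # sequence ID of the last edit flushed
    }
```

Feeding it the sample line from this Post returns the ~6.68 MB flush for region ad1bd353d9ae0c52e30a935c5d06ecfa with sequence ID 25141679.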
11-07-2022
01:26 AM
Hello @Felix-Han Hope you are doing well. Since the Post by @rki_ referencing [1] should fix the CQTBE for your team, we shall mark the Post as resolved. If your team has any further ask, feel free to update the Post likewise. Regards, Smarak [1] https://my.cloudera.com/knowledge/CallQueueTooBigException--Call-queue-is-full-on-000060020-too?id=73901
11-07-2022
01:14 AM
Hello @TouguiOmar Thanks for using Cloudera Community. Based on the Post, the "Audits" tab of the Ranger UI shows "Collection Not Found: Ranger_Audits". The same is observed when the Collection isn't present in Solr. As such, kindly follow the below steps: (I) Confirm the Infra-Solr Service is installed on the concerned Cluster; it should ideally be installed already. (II) Connect to the host wherein the Infra-Solr Service is present & fetch the Infra-Solr Service keytab. Next, list the Collections via (### solrctl collection --list) & list the Configs via (### solrctl instancedir --list). (III) Considering the error received, (### solrctl collection --list) wouldn't list any "ranger_audits" Collection. Yet, we expect the "ranger_audits" Config to be listed via (### solrctl instancedir --list). (IV) Assuming (### solrctl instancedir --list) shows "ranger_audits" & (### solrctl collection --list) doesn't list "ranger_audits", your team can create the Collection via (### solrctl collection --create ranger_audits -s 1 -r 1 -m 1 -c ranger_audits). Once performed, run (### solrctl collection --list), which should list the "ranger_audits" Collection. (V) Refresh the Ranger "Audits" UI & confirm the outcome. Your team may require a restart of the Ranger Admin Service after creating the "ranger_audits" Collection. (VI) If (### solrctl instancedir --list) doesn't list "ranger_audits" in Step (III), we need to upload the "ranger_audits" Config via a solrctl command. The "ranger_audits" ConfigSet can be found in the Ranger parcel directory & uploaded via (### solrctl config --upload ranger_audits <Path>). All solrctl commands are documented in [1] for your team's reference. Kindly review the above & let us know the outcome. Regards, Smarak [1] https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/search-solrctl-reference/topics/search-solrctl-ref.html
10-20-2022
10:41 PM
Hello @Khairul_Hasan We are marking the Post as Closed as we didn't hear back on our update dated 2022-10-10. If your team has any concerns, feel free to update this Post. If your team adopted another approach to resolve the issue, we would appreciate it if you could share the finer details for our wider community. Regards, Smarak
10-20-2022
10:35 PM
Hello @utehrani We are marking the Post as Closed, with the recommendation that your team engage Cloudera Support, as troubleshooting this issue would require sharing logs & bundle/config files, which may be too sensitive to share across the Community. Regards, Smarak
10-14-2022
03:42 AM
Hello @sekhar1 We hope your question was answered by André. As such, we are marking the Post as Resolved. If the link shared by André didn't fix the issue, feel free to update the Post likewise. Regards, Smarak