Member since: 01-16-2018
Posts: 613
Kudos Received: 48
Solutions: 109
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 778 | 04-08-2025 06:48 AM |
| | 955 | 04-01-2025 07:20 AM |
| | 916 | 04-01-2025 07:15 AM |
| | 962 | 05-06-2024 06:09 AM |
| | 1504 | 05-06-2024 06:00 AM |
11-08-2022
10:47 PM
Hello @mohamed_t, We hope your question concerning SDX has been answered, and we are marking the post as solved. If you have any further questions, feel free to update the post & we shall get back to you accordingly. Regards, Smarak
11-08-2022
10:11 PM
2 Kudos
Hello @TouguiOmar, We hope your question has been addressed by our post, and we are marking it as resolved. If not, feel free to update the post & we shall get back to you accordingly. The action plan is shared below:

(I) Confirm that the Infra-Solr service is installed on the concerned cluster; ideally, it is installed already.

(II) Connect to a host where the Infra-Solr service is present and fetch the Infra-Solr service keytab. Next, list the collections via `solrctl collection --list` and list the configs via `solrctl instancedir --list`.

(III) Given the error received, `solrctl collection --list` would not list any "ranger_audits" collection. Yet we expect the "ranger_audits" config to be listed by `solrctl instancedir --list`.

(IV) If `solrctl instancedir --list` shows "ranger_audits" and `solrctl collection --list` does not, your team can create the collection via `solrctl collection --create ranger_audits -s 1 -r 1 -m 1 -c ranger_audits`. Once done, `solrctl collection --list` should list the "ranger_audits" collection.

(V) Refresh the Ranger "Audits" UI and confirm the outcome. Your team may need to restart the Ranger Admin service after creating the "ranger_audits" collection.

(VI) If `solrctl instancedir --list` does not list "ranger_audits" in step (IV), upload the "ranger_audits" config via solrctl first. The "ranger_audits" ConfigSet can be found in the Ranger parcel directory (sample path "/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1029.28168653/lib/ranger-admin/contrib/solr_for_audit_setup/") and uploaded via `solrctl config --upload ranger_audits <Path>`. Once uploaded, `solrctl instancedir --list` should list "ranger_audits", after which your team can create and verify the collection as in step (IV).

All solrctl commands are documented in [1] for your team's reference and should be run with the Solr keytab; a consolidated command sketch follows this post. Regards, Smarak
[1] https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/search-solrctl-reference/topics/search-solrctl-ref.html
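For convenience, a minimal shell sketch of steps (II) through (VI) above, assuming a Kerberized cluster. The keytab path is hypothetical, and the config-upload path must be taken from your own parcel directory:

```bash
# Hypothetical keytab location; fetch the Infra-Solr service keytab as in step (II).
SOLR_KEYTAB=/path/to/solr.keytab
kinit -kt "$SOLR_KEYTAB" "solr/$(hostname -f)"

# Inspect what already exists.
solrctl instancedir --list   # expected to show the "ranger_audits" config
solrctl collection --list    # "ranger_audits" missing, per the reported error

# Only if the config is missing too: upload it first, per step (VI).
# The directory comes from your parcel path (sample path shown in the post).
# solrctl config --upload ranger_audits <Path>

# Create the collection: 1 shard, 1 replica, max 1 shard per host.
solrctl collection --create ranger_audits -s 1 -r 1 -m 1 -c ranger_audits

# Verify, then refresh the Ranger "Audits" UI (a Ranger Admin restart may be needed).
solrctl collection --list
```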
11-08-2022
09:57 PM
Hello @Sunilkumarks, We hope your question concerning Grafana integration with CDP (Private Cloud & Public Cloud) has been answered. We shall mark the post as resolved for now. Regards, Smarak
11-08-2022
09:56 PM
Hello @dch44, Thanks for engaging the Cloudera Community. Based on the post, Ranger & Infra-Solr aren't starting, failing with the error in [1]. This error requires manual intervention in the Ambari DB "clusterconfigmapping" table: for the "kerberos-env" & "krb5-conf" rows of "type_name", the "selected" column must be set to 1 for the latest "version_tag". Since changes to the Ambari DB should be performed with extreme caution, we would recommend engaging Cloudera Support to assist with this issue; a read-only inspection sketch follows this post. Regards, Smarak
[1] /usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 73, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'kerberos-env' was not found in configurations dictionary!
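Before engaging Support, a minimal read-only sketch for inspecting the affected rows, assuming a PostgreSQL-backed Ambari. The database name and user are illustrative, and the "create_timestamp" ordering column is an assumption; any UPDATE should be performed only under Cloudera guidance:

```bash
# Hypothetical DB name/user; adjust to your Ambari database settings.
psql -U ambari -d ambari <<'SQL'
-- Show which kerberos-env / krb5-conf config mappings are currently selected.
-- Column names type_name, version_tag, selected per the post;
-- create_timestamp is assumed here purely for ordering.
SELECT type_name, version_tag, selected, create_timestamp
FROM clusterconfigmapping
WHERE type_name IN ('kerberos-env', 'krb5-conf')
ORDER BY type_name, create_timestamp DESC;
SQL
```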
11-08-2022
01:39 AM
Hello @Sunilkumarks, To add further detail: each Data Service (CDW, CML, CDE) offers a Grafana dashboard [1] & [2] on CDP Public Cloud as well as CDP Private Cloud. Similarly, CDP Public Cloud DataHub offers Grafana integration as documented in [3]. In CDP Private Cloud, customers can integrate Grafana with the Cloudera Manager datastore as documented in [4]; a plugin-install sketch follows this post. Having said that, feel free to check the dashboards offered by Cloudera Manager by default. Additionally, you can build dashboards and time-series charts as per your requirements [5]. Regards, Smarak
[1] https://docs.cloudera.com/data-engineering/cloud/troubleshooting/topics/cde-connecting-to-grafana-dashboards.html
[2] https://docs.cloudera.com/data-warehouse/cloud/monitoring/topics/dw-grafana-monitoring.html
[3] https://community.cloudera.com/t5/Community-Articles/Configuring-Grafana-for-Datahub-Clusters/ta-p/348869
[4] https://grafana.com/grafana/plugins/foursquare-clouderamanager-datasource/
[5] https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/monitoring-and-diagnostics/topics/cm-charting-time-series-data.html
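As an illustration of the Cloudera Manager integration in [4], a minimal sketch for installing the datasource plugin on a Grafana host. The plugin ID is taken from the URL in [4], and the restart step assumes a systemd-managed Grafana:

```bash
# Install the Cloudera Manager datasource plugin (ID from the grafana.com URL in [4]).
grafana-cli plugins install foursquare-clouderamanager-datasource

# Restart Grafana so the new plugin is picked up (assumes a systemd service).
sudo systemctl restart grafana-server
```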
11-08-2022
01:21 AM
Hello @TouguiOmar, Kindly let us know whether you have reviewed our post dated 2022-11-07 and whether you have any further queries concerning the action plan shared. Regards, Smarak
11-08-2022
01:21 AM
Hello @TheFixer, Kindly let us know whether you have reviewed our post dated 2022-11-07 and whether you have any further queries concerning the HBase RegionServer logging highlighted in the post. Regards, Smarak
11-08-2022
01:20 AM
Hello @mohamed_t, Kindly let us know whether you have reviewed our post dated 2022-11-07 and whether you have any further queries concerning SDX. Regards, Smarak
11-07-2022
02:24 AM
Hello @mohamed_t, Thanks for using the Cloudera Community. SDX allows your team to operate within an Environment scope. An Environment is similar to a realm within which the customer operates; additional details on Environments are available here. Customers can create as many Environments as needed. Within an Environment, a customer can have any number of clusters (traditional) and experiences (Kubernetes-based), and these clusters and experiences all use the same SDX. Check out the link for a tour of SDX. To your queries: SDX deals with security, metadata & governance without handling the customer's actual data. Your question concerns sharing data, which isn't SDX's role; rather, SDX decides who can access the data within the CDP Environment. Using Ranger, a customer can allow any CDP workload user to access any data using their workload password, access keys, etc. All details around user management are covered in the link. Finally, I would strongly urge your team to review the CDP Product Tour and, if interested, start a conversation with Cloudera's Sales/Field team. They can quickly understand your customer's requirements and share details on the product features that fit your customer's use case. Regards, Smarak
11-07-2022
02:07 AM
Hello @TheFixer, Thanks for using the Cloudera Community. To your queries, please find the details below:

(I) Concerning the log trace below: each column family has its own MemStore, and HBase ensures a flush happens before an edit grows too old. A WAL can't be archived while corresponding entries remain un-flushed in the MemStore, so these periodic flushes free WALs for archiving. This logging isn't a concern; the PeriodicMemstoreFlusher is working as designed.

MemstoreFlusherChore requesting flush of table 1. because K has an old edit so flush to free WALs after random delay 65889ms

(II) Concerning the log trace below: it indicates that a flush of ~6.68 MB completed on thread "MemStoreFlusher.1" (the flusher thread count is defined by "hbase.hstore.flusher.count"), and the sequenceid is the sequence ID of the last edit flushed. This sequence ID is compared against the edits in a WAL before the WAL is archived.

2022-10-29 12:21:46,170 INFO [MemStoreFlusher.1] regionserver.HRegion: Finished flush of dataSize ~6.68 MB/7002393, heapSize ~19.57 MB/20517200, currentSize=0 B/0 for ad1bd353d9ae0c52e30a935c5d06ecfa in 335ms, sequenceid=25141679, compaction requested=true

(III) Finally, the compaction queue sizes below indicate the number of queued major (long) and minor (short) compactions.

Compaction/Split Queue summary: compactionQueue=(longCompactions=111975:shortCompactions=35811), splitQueue=0

All of the above logging reflects HBase internals and shouldn't be a concern unless your customer is observing an actual impact. Your team should review (if not done yet) the links below for details on the flush internals & compaction algorithms; a small log-checking sketch follows this post:
(I) MemStore Flush Configuration
(II) Conditions For MemStore Flush
(III) Compaction Algorithm
Kindly review & let us know if your team has any queries. Regards, Smarak
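For reference, a minimal sketch for spotting the three message types above on a RegionServer host. The log file path is illustrative and varies by deployment:

```bash
# Hypothetical log location; adjust to your RegionServer's actual log directory.
RS_LOG=/var/log/hbase/regionserver.log

# Flush completions: size flushed and the sequence ID of the last edit flushed.
grep "Finished flush of dataSize" "$RS_LOG" | tail -5

# Periodic flushes requested to free WALs (working as designed).
grep "requesting flush" "$RS_LOG" | tail -5

# Compaction backlog: long = major, short = minor compactions queued.
grep "compactionQueue=" "$RS_LOG" | tail -5
```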