Member since: 01-16-2018 · 602 Posts · 46 Kudos Received · 100 Solutions
11-07-2022
01:14 AM
Hello @TouguiOmar Thanks for using Cloudera Community. Based on the post, the "Audits" tab of the Ranger UI shows "Collection Not Found: Ranger_Audits". This is observed when the collection isn't present in Solr. Kindly follow the steps below:
(I) Confirm the Infra-Solr service is installed on the concerned cluster. It should normally be installed already.
(II) Connect to a host running the Infra-Solr service and fetch the Infra-Solr service keytab. Then list the collections via (solrctl collection --list) and list the configs via (solrctl instancedir --list).
(III) Given the error received, (solrctl collection --list) likely won't show any "ranger_audits" collection, yet we expect the "ranger_audits" config to be listed by (solrctl instancedir --list).
(IV) Assuming (solrctl instancedir --list) shows "ranger_audits" and (solrctl collection --list) doesn't, your team can create the collection via (solrctl collection --create ranger_audits -s 1 -r 1 -m 1 -c ranger_audits). Once done, (solrctl collection --list) should list the "ranger_audits" collection.
(V) Refresh the Ranger "Audits" UI and confirm the outcome. A restart of the Ranger Admin service may be required after creating the "ranger_audits" collection.
(VI) If (solrctl instancedir --list) doesn't list "ranger_audits" in step (IV), the "ranger_audits" config needs to be uploaded via solrctl first. The configset can be found in the Ranger parcel directory and uploaded via (solrctl config --upload ranger_audits <Path>).
All solrctl commands are documented in [1] for your team's reference. Kindly review the above and let us know the outcome. Regards, Smarak [1] https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/search-solrctl-reference/topics/search-solrctl-ref.html
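The branch between steps (IV) and (VI) above can be sketched as a small helper. This is only an illustration: next_action is a hypothetical name, and it simply decides the next solrctl step from the output of the two list commands.

```python
def next_action(collections, configs, name="ranger_audits"):
    """Decide the next solrctl step, given the names printed by
    'solrctl collection --list' and 'solrctl instancedir --list'."""
    if name in collections:
        # Collection already exists -- the Ranger UI error should be gone.
        return "nothing to do: collection exists"
    if name in configs:
        # Step (IV): config present, collection missing -> create it.
        return ("solrctl collection --create {0} "
                "-s 1 -r 1 -m 1 -c {0}".format(name))
    # Step (VI): config missing -> upload it from the Ranger parcel first.
    return "solrctl config --upload {} <Path>".format(name)
```

The <Path> placeholder stands for the configset directory inside the Ranger parcel, which varies by release.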
10-20-2022
10:41 PM
Hello @Khairul_Hasan We are marking the post as Closed as we didn't hear back on our update dated 2022-10-10. If your team has any concerns, feel free to update this post. If your team adopted any other approach to resolve the issue, we would appreciate your sharing the finer details for our wider community. Regards, Smarak
10-20-2022
10:35 PM
Hello @utehrani We are marking the post as Closed with the recommendation that your team engage Cloudera Support, as troubleshooting this issue would require sharing logs and bundle/config files, which may be too sensitive to share on the Community. Regards, Smarak
10-14-2022
03:42 AM
Hello @sekhar1 We hope your question was answered by André, so we are marking the post as Resolved. If the link shared by André didn't fix the issue, feel free to update the post accordingly. Regards, Smarak
10-14-2022
03:31 AM
Greetings @Khairul_Hasan Hope you are doing well. We wish to follow up on the above post; kindly let us know the outcome of our ask above. Note that CDP 7.1.3 uses Phoenix 5.0 while CDP 7.1.7 uses Phoenix 5.1, hence our request to include the Phoenix server and client jars explicitly. Regards, Smarak
10-14-2022
03:26 AM
Hello @SDL This is an old thread and I assume your team has moved on, yet I wish to update this post for future reference. It was observed that such overnight restarts reset the default cleanup interval (24 hours) configured via [1] in the solrconfig.xml of the respective Solr collection (sample from the ranger_audits collection). This postponed the cleanup on a daily basis and caused documents to pile up beyond their expiration. If a customer restarts the service nightly, it's advisable to lower the cleanup interval from 24 hours to a smaller value (say, 20 or 22 hours). Regards, Smarak [1] <processor class="solr.processor.DocExpirationUpdateProcessorFactory"> <int name="autoDeletePeriodSeconds">86400</int> <str name="ttlFieldName">_ttl_</str> <str name="expirationFieldName">_expire_at_</str> </processor>
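For instance, lowering the interval to 20 hours (20 × 3600 = 72000 seconds) in the collection's solrconfig.xml would look like the fragment below; the value is illustrative, and the field names are the ones from the sample above:

```xml
<processor class="solr.processor.DocExpirationUpdateProcessorFactory">
  <!-- 20 hours instead of 24, so a nightly restart no longer postpones the cleanup -->
  <int name="autoDeletePeriodSeconds">72000</int>
  <str name="ttlFieldName">_ttl_</str>
  <str name="expirationFieldName">_expire_at_</str>
</processor>
```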
10-12-2022
03:16 AM
Hello @utehrani Thanks for engaging Cloudera Community. Based on the post, pods for CML on ECS are failing, and restarting the pods doesn't resolve it. The error [1] is generally observed when a few checks/configs fail, and frankly, reviewing such an issue isn't easy via the Community. Since your team installed ECS and is deploying CML, we believe it's quicker to engage Cloudera Support via a support case, sharing the versions (CDP Private Cloud Base version, Private Cloud Data Services version) along with the pod listing "kubectl get pods -A" and the logs from the affected pods. Also, let us know whether your team has deployed other Data Services such as CDW (Cloudera Data Warehouse) or CDE (Cloudera Data Engineering) on the concerned ECS setup; if so, we would like to confirm whether those deployments are working fine or facing similar concerns. Regards, Smarak [1] Failed to obtain initialization data due to {} com.cloudera.cdp.CdpServiceException: com.cloudera.cdp.CdpServiceException: 404: NOT_FOUND
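To narrow down which pods to pull logs from, the "kubectl get pods -A" listing can be filtered for anything not in a healthy state. A minimal sketch (failing_pods is a hypothetical helper that just parses the default kubectl table output):

```python
def failing_pods(kubectl_output):
    """Pick out pods whose STATUS is not Running/Completed from the
    output of 'kubectl get pods -A', whose default columns are:
    NAMESPACE NAME READY STATUS RESTARTS AGE."""
    bad = []
    for line in kubectl_output.strip().splitlines()[1:]:  # skip header row
        cols = line.split()
        namespace, name, status = cols[0], cols[1], cols[3]
        if status not in ("Running", "Completed"):
            bad.append((namespace, name, status))
    return bad
```

Each (namespace, name) pair returned is a candidate for "kubectl logs -n <namespace> <name>" when preparing the support bundle.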
10-10-2022
09:27 PM
Hello @cprakash Since we haven't heard from your team, we are marking the post as Resolved. Feel free to add your team's observations whenever feasible. In summary: review the HMaster logs to confirm the reason for the ConnectionRefused. A few possible scenarios: port 16000 is in use by another service, "master1" isn't correctly resolved in DNS, or port 16000 is blocked. Regards, Smarak
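As a quick first check from a client host, TCP reachability of the HMaster port can be probed before digging into logs. A small sketch (port_reachable is a hypothetical helper; "master1" and 16000 are the host/port from this thread):

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds, False on
    connection refusal, timeout, or DNS failure. A False here points at
    the blocked-port / DNS scenarios; a True points back at the service."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage: port_reachable("master1", 16000)
```

This distinguishes a network/DNS problem from an HMaster-side problem, but only the HMaster logs confirm the root cause.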
10-10-2022
09:21 PM
Hello @Kings Thanks for using Cloudera Community. Based on your post, you wish to reuse the hardware used for MapR to run Cloudera, along with its data storage setup. The hardware/Java/network/database requirements for CDP Private Cloud Base 7.1.8 (the latest release) are available via [1]. Your team can review them and confirm whether the existing hardware and setup meet the requirements. Note that your team needs to install Cloudera Manager and let Cloudera Manager perform the installation of the various services (including HDFS) across the cluster hosts. Whether a complete lift-and-shift works depends on the existing setup and compliance with [1]. For information, Cloudera offers Public Cloud and SaaS offerings as well. Additionally, the existing hardware can be turned into a full-fledged Kubernetes cluster via Cloudera Manager to run Kubernetes-based workloads (CDE, CDW, CML); we mention this because your team selected "CDE", i.e. Cloudera Data Engineering, in the labels. We believe this is the right point to engage Cloudera's sales team and find the exact fit for your team's requirements in Cloudera's hybrid cloud offering. Kindly review and let us know. Regards, Smarak [1] https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/installation/topics/cdpdc-hardware-requirements.html
10-10-2022
09:01 PM
Hello @Khairul_Hasan Thanks for engaging Cloudera Community. Based on the post, your team is receiving [1] while running the command [2] on CDP 7.1.7; the same command was working on CDP 7.1.3. If you enable verbose class loading (e.g. "-verbose:class"), your team can confirm which jar the affected class is loaded from. If your team passes the Phoenix client and server jars explicitly on the classpath (the paths would be similar to "/opt/cloudera/parcels/<PhoenixParcelDir>/lib/phoenix/<PhoenixClientJar>" and "/opt/cloudera/parcels/<PhoenixParcelDir>/lib/phoenix/<PhoenixServerJar>"), we expect the error to be resolved. Kindly review and share the outcome. Regards, Smarak [1] Can't find method newStub in org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService! [2] java -cp /etc/hadoop/conf.cloudera.hdfs/ssl-client.xml:/etc/hbase/conf.cloudera.hbase/hbase-site.xml:/etc/hadoop/conf.cloudera.hdfs/core-site.xml:/etc/hadoop/conf.cloudera.hdfs/hdfs-site.xm:/data/scripts/LeaApp-1.0-SNAPSHOT.jar net.ba.lea.transformation.FileActions "/data/scripts/msc/IN/" "/tmp/nss_processing/" "/data/scripts/msc/reject/" "250" "LEA.DBM_CDR_FILE_HEAD" "NSS" "jdbc:phoenix:gzvlcdpnode01.ba.net:2181:/hbase:phoenix/gzvlcdpnode02@BA.NET:/etc/security/keytab/phoenix.keytab"
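The suggested fix boils down to prepending the two Phoenix jars to the -cp entry of the existing java command. A sketch of that string manipulation (with_phoenix_jars is a hypothetical helper, and all parcel/jar names below are placeholders, not actual file names from any release):

```python
def with_phoenix_jars(java_cmd, parcel_dir, client_jar, server_jar):
    """Prepend the Phoenix client/server jars to the -cp entry of an
    existing 'java -cp <classpath> MainClass args...' command line.
    parcel_dir, client_jar and server_jar are placeholders for the
    actual names under /opt/cloudera/parcels on the cluster."""
    base = "/opt/cloudera/parcels/{}/lib/phoenix/".format(parcel_dir)
    jars = base + client_jar + ":" + base + server_jar
    parts = java_cmd.split()
    i = parts.index("-cp")
    # Putting the Phoenix jars first makes their classes win class loading.
    parts[i + 1] = jars + ":" + parts[i + 1]
    return " ".join(parts)
```

Prepending (rather than appending) matters here: the class loader takes the first matching class on the classpath, which is exactly what "-verbose:class" would let you verify.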