Member since: 01-16-2018
Posts: 613
Kudos Received: 48
Solutions: 109
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1443 | 04-08-2025 06:48 AM |
|  | 1714 | 04-01-2025 07:20 AM |
|  | 1714 | 04-01-2025 07:15 AM |
|  | 1358 | 05-06-2024 06:09 AM |
|  | 2083 | 05-06-2024 06:00 AM |
10-20-2022
10:35 PM
Hello @utehrani We are marking the Post as Closed with the recommendation that your Team engage Cloudera Support, as troubleshooting the issue would require sharing Logs and Bundle/Config Files, which may be too sensitive to share across the Community. Regards, Smarak
10-14-2022
03:42 AM
Hello @sekhar1 We hope your question was answered by André. As such, we are marking the Post as Resolved. If the Link shared by André didn't fix the issue, feel free to update the Post accordingly. Regards, Smarak
10-14-2022
03:31 AM
Greetings @Khairul_Hasan Hope you are doing well. We wish to follow up on the above Post. Kindly let us know the outcome of the above ask from our side. Note that in 7.1.3, CDP uses Phoenix 5.0, and in 7.1.7, CDP uses Phoenix 5.1; hence our ask to include the Phoenix Server & Client Jars explicitly. Regards, Smarak
10-14-2022
03:26 AM
Hello @SDL This is an old Thread and I assume your Team has moved on, yet we wish to update this Post for future reference. It was observed that such overnight Restarts were resetting the default CleanUp interval (24 Hours) set via [1] in the solrconfig.xml of the respective Solr Collection (sample from the Ranger_Audits Collection). This postponed the CleanUp on a daily basis and caused Documents to pile up beyond their Expiration. If you are restarting the Service nightly, it's advisable to lower the CleanUp interval from 24 Hours to a smaller value (say, 20 or 22 Hours). Regards, Smarak

[1]
```xml
<processor class="solr.processor.DocExpirationUpdateProcessorFactory">
  <int name="autoDeletePeriodSeconds">86400</int>
  <str name="ttlFieldName">_ttl_</str>
  <str name="expirationFieldName">_expire_at_</str>
</processor>
```
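For illustration, the lowered interval would go in the same processor block. A minimal sketch, assuming the Ranger_Audits solrconfig.xml shown in [1]; 72000 seconds = 20 hours is my arithmetic for the "20 Hours" suggestion, not a Cloudera default:

```xml
<!-- Sketch: CleanUp interval lowered below the nightly restart window.
     72000 s = 20 h (the post suggests 20 or 22 hours); field names as in [1]. -->
<processor class="solr.processor.DocExpirationUpdateProcessorFactory">
  <int name="autoDeletePeriodSeconds">72000</int>
  <str name="ttlFieldName">_ttl_</str>
  <str name="expirationFieldName">_expire_at_</str>
</processor>
```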
10-12-2022
03:16 AM
Hello @utehrani Thanks for engaging Cloudera Community. Based on the Post, the Pods for CML on ECS are failing, and restarting the Pods doesn't resolve the issue. The Error [1] is generally observed when a few Checks/Configs fail, and frankly, reviewing such an issue wouldn't be easy via the Community. Since your Team installed ECS and is deploying CML, we believe it's quicker for your Team to engage Cloudera Support via a Support Case, sharing the Versions (CDP Private Cloud Base Version, Private Cloud Data Services Version) along with the Pod listing "kubectl get pods -A" and the Logs from the affected Pods. Also, let us know whether your Team has deployed other Data Services like CDW (Cloudera Data Warehouse) or CDE (Cloudera Data Engineering) on the concerned ECS Setup; if so, we would like to confirm whether their Deployment is working fine or facing similar concerns. Regards, Smarak

[1] Failed to obtain initialization data due to {} com.cloudera.cdp.CdpServiceException: com.cloudera.cdp.CdpServiceException: 404: NOT_FOUND
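The diagnostics mentioned above can be gathered with standard kubectl commands before opening the Support Case. A sketch to run against the ECS cluster, where `<failing-pod>` and `<namespace>` are placeholders for your actual Pod and namespace:

```
kubectl get pods -A                                    # full Pod listing across all namespaces
kubectl describe pod <failing-pod> -n <namespace>      # recent Events explaining the failure
kubectl logs <failing-pod> -n <namespace> --previous   # Logs from the last crashed container
```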
10-10-2022
09:27 PM
Hello @cprakash Since we haven't heard from your Team, we are marking the Post as Resolved. Feel free to add your Team's observations whenever feasible. In summary, review the HMaster Logs to confirm the reason for the ConnectionRefused. A few possible scenarios: Port 16000 is in use by another Service, "master1" isn't correctly mapped in DNS, or Port 16000 is blocked. Regards, Smarak
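The three scenarios above can be checked with standard tools. A hedged sketch ("master1" and 16000 are the values from the Post; adjust to your HMaster host and RPC port):

```shell
MASTER_HOST="master1"   # from the post; replace with your HMaster hostname
MASTER_PORT=16000       # default HMaster RPC port

# Scenario: "master1" isn't correctly mapped in DNS
getent hosts "${MASTER_HOST}" || echo "no DNS/hosts entry for ${MASTER_HOST}"

# Scenario: Port 16000 is already held by another Service (run on the HMaster host)
ss -ltn "( sport = :${MASTER_PORT} )" || true

# Scenario: Port 16000 is refused or blocked - probe with bash's /dev/tcp
if timeout 2 bash -c "exec 3<>/dev/tcp/${MASTER_HOST}/${MASTER_PORT}" 2>/dev/null; then
  STATUS="reachable"
else
  STATUS="unreachable (refused, blocked, or unresolved)"
fi
echo "${MASTER_HOST}:${MASTER_PORT} is ${STATUS}"
```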
10-10-2022
09:21 PM
Hello @Kings Thanks for using Cloudera Community. Based on your Post, you wish to reuse the Hardware used for MapR to run Cloudera with the Data Storage Setup. The Hardware/Java/Network/Database requirements for CDP Private Cloud v7.1.8 (the latest Release) are available via [1]. Your Team can review the same and confirm whether the existing Hardware & Setup meet the requirements. Note that your Team needs to install Cloudera Manager and allow Cloudera Manager to perform the Installation of the various Services (including HDFS) across the Cluster Hosts. Whether a complete Lift & Shift works depends on the existing Setup and compliance with [1]. For information, Cloudera offers Public Cloud & SaaS offerings as well. Additionally, Customers can use the existing Hardware as a full-fledged Kubernetes Cluster via Cloudera Manager to run Kubernetes-based Workloads (CDE, CDW, CML). The reason for stating this is that your Team selected "CDE" i.e. Cloudera Data Engineering in the Labels. We believe this is the right point to engage Cloudera's Sales Team and find the exact fit for your Team's requirements in Cloudera's Hybrid Cloud Offering. Kindly review & let us know. Regards, Smarak [1] https://docs.cloudera.com/cdp-private-cloud-base/7.1.8/installation/topics/cdpdc-hardware-requirements.html
10-10-2022
09:01 PM
Hello @Khairul_Hasan Thanks for engaging Cloudera Community. Based on the Post, your Team is receiving [1] while running the Command [2] in CDP v7.1.7; the same Command was working with CDP v7.1.3. If you enable verbose ClassLoading (i.e. "-verbose:class"), your Team can confirm the Jar from which the Method is loaded. If your Team passes the Phoenix Client & Phoenix Server Jars explicitly in the ClassPath (the Paths would be similar to "/opt/cloudera/parcels/<PhoenixParcelDir>/lib/phoenix/<PhoenixClientJar>" & "/opt/cloudera/parcels/<PhoenixParcelDir>/lib/phoenix/<PhoenixServerJar>"), we expect the Error to be resolved. Kindly review the same and share the outcome. Regards, Smarak

[1] Can't find method newStub in org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService!

[2]
```shell
java -cp /etc/hadoop/conf.cloudera.hdfs/ssl-client.xml:/etc/hbase/conf.cloudera.hbase/hbase-site.xml:/etc/hadoop/conf.cloudera.hdfs/core-site.xml:/etc/hadoop/conf.cloudera.hdfs/hdfs-site.xm:/data/scripts/LeaApp-1.0-SNAPSHOT.jar net.ba.lea.transformation.FileActions "/data/scripts/msc/IN/" "/tmp/nss_processing/" "/data/scripts/msc/reject/" "250" "LEA.DBM_CDR_FILE_HEAD" "NSS" "jdbc:phoenix:gzvlcdpnode01.ba.net:2181:/hbase:phoenix/gzvlcdpnode02@BA.NET:/etc/security/keytab/phoenix.keytab"
```
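To illustrate the fix: the point is to place the Phoenix Jars ahead of everything else on the ClassPath so their classes win over any older copies. A sketch where `PHOENIX_LIB` and the jar file names are assumptions, to be replaced with the actual parcel paths on your host:

```shell
# Assumed locations; substitute the real <PhoenixParcelDir> and jar names.
PHOENIX_LIB="/opt/cloudera/parcels/PHOENIX/lib/phoenix"
APP_CP="/etc/hbase/conf.cloudera.hbase/hbase-site.xml:/data/scripts/LeaApp-1.0-SNAPSHOT.jar"

# Phoenix Client & Server Jars go FIRST so an older copy elsewhere can't shadow them.
FULL_CP="${PHOENIX_LIB}/phoenix-client.jar:${PHOENIX_LIB}/phoenix-server.jar:${APP_CP}"
echo "${FULL_CP}"

# With -verbose:class the JVM prints the source Jar of every loaded class, e.g.:
#   java -verbose:class -cp "${FULL_CP}" net.ba.lea.transformation.FileActions ... 2>&1 \
#     | grep MetaDataProtos
```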
08-08-2022
02:45 AM
Hello @hbasetest You wish to enable the Normalizer at the Cluster Level irrespective of the Table-Level Setting, i.e. whether NORMALIZATION_ENABLED is True or False. As far as I'm aware, Table-Level enabling is required. Having said that, if you open a new Post on the same using the Steps shared by @VidyaSargur, our fellow Community Gurus can get back to you sooner, as compared to a Comment on an Article written in 2016.
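For reference, the two switches involved look like this in the HBase shell; `'my_table'` is a placeholder table name, and this sketch assumes a cluster where you can run the shell:

```
hbase> normalizer_switch true                                # cluster-wide Normalizer chore on/off
hbase> normalizer_enabled                                    # check the current switch state
hbase> alter 'my_table', {NORMALIZATION_ENABLED => 'true'}   # still required per Table
```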
08-04-2022
02:28 AM
Hello @achandra This is an old Post, yet we are closing it by sharing the feedback concerning your ask for a wider audience. The API call is failing owing to the Space between "NOW-" & "7DAYS"; there shouldn't be any gap between the two. In summary, the Command is below, where you need to set the HTTP(s) scheme, Solr Host & Solr Port accordingly. Additionally, the example uses the "ranger_audits" Collection & the "evtTime" field to delete any Documents older than 7 Days:

```shell
curl -k --negotiate -u : "http[s]://<Any Solr Host FQDN>:<Solr Port>/solr/ranger_audits/update?commit=true" -H "Content-Type: text/xml" --data-binary "<delete><query>evtTime:[* TO NOW-7DAYS]</query></delete>"
```

Regards, Smarak
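The date-math pitfall can be made explicit with a tiny sketch that only builds the delete payload (field name and window are the example values from the Post); note there is no space inside "NOW-7DAYS":

```shell
FIELD="evtTime"
WINDOW="NOW-7DAYS"   # Solr date math: no space between "NOW-" and "7DAYS"
PAYLOAD="<delete><query>${FIELD}:[* TO ${WINDOW}]</query></delete>"
echo "${PAYLOAD}"
```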