Member since: 01-16-2018
Posts: 613
Kudos Received: 48
Solutions: 109

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 341 | 04-08-2025 06:48 AM |
| | 507 | 04-01-2025 07:20 AM |
| | 399 | 04-01-2025 07:15 AM |
| | 789 | 05-06-2024 06:09 AM |
| | 1180 | 05-06-2024 06:00 AM |
01-16-2023
01:54 AM
Hello @panb We hope your query has been addressed, and we shall mark the post as resolved. In summary, your team needs to meet the requirements stated in [1], which do not differentiate by processor type; I believe your team is referring to the Hygon Dhyana processor. Note that the hardware requirements shared are for CDP v7.1.8, as CDH is no longer recommended owing to its end of life. As a best practice, I suggest engaging the Cloudera account team associated with your organization to perform due diligence on supportability and best practices before onboarding use cases onto any new platform whose supportability your team doubts. Regards, Smarak [1] Hardware Requirements | CDP Private Cloud (cloudera.com)
01-13-2023
01:43 AM
Hi Smarak, thanks for your answer. That helps me!
01-03-2023
08:46 AM
How did you increase the timeout to 15 minutes?
01-02-2023
10:03 PM
Hello @quangbilly79 Thanks for using Cloudera Community. Based on your post, you can think of the "Kafka Gateway" as the Kafka client, set up on the hosts to which the role is added via Cloudera Manager's "Assign Roles". A client/gateway is familiar with the service (Kafka in this case), and all client and service configs are made available to it without any manual intervention: any change to the service or client configuration is pushed out to it by Cloudera Manager. Imagine you wish to run "hdfs dfs -ls" against an HDFS filesystem. Simply running the command won't work unless the host where it is run knows the setup (HDFS filesystem, NameNode, port, protocol); review [1] for an example. Adding an HDFS Gateway ensures the user doesn't need to configure a client manually, with Cloudera Manager doing the needful, and the Kafka Gateway operates similarly. Without a gateway role, you would need to configure the client setup manually. Hope the above answers your query concerning the Gateway role. Regards, Smarak [1] https://www.ibm.com/docs/en/spectrum-scale-bda?topic=hdfs-clients-configuration
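To illustrate what the gateway role automates, here is a minimal sketch of the client-side configuration a host would otherwise need by hand before "hdfs dfs -ls" could work; the hostname and port below are placeholders, not values from this thread:

```
<!-- core-site.xml: minimal HDFS client config. A gateway role keeps a
     file like this in sync automatically; hostname/port are placeholders. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
```

With this in place, "hdfs dfs -ls /" knows which NameNode to contact and over which port. The same idea applies to a Kafka client, which needs broker and security settings before any command-line tool can connect; the gateway role distributes those settings so nobody edits them manually on each host.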
12-29-2022
02:47 AM
Hello @smdas, can you let me know, or tag HBase engineers who could provide more clarity on my doubts, especially on the caching? It would be very helpful. Thanks
12-27-2022
11:27 PM
Hello @smdas Thanks for the response. https://issues.apache.org/jira/browse/HBASE-24289 https://docs.google.com/document/d/1fk_EWLNnxniwt3gDjUS_apQ3cPzn90AmvDT1wkirvKE/edit# These links mention the date tiered compaction policy in HBase. Does it somehow help in configuring a different policy for the same column family, or did I misunderstand?
12-25-2022
05:50 AM
Hello @Serhii This is an old post, yet I am answering it because there have been a few changes in recent CDP releases, and to ensure community awareness. CDP v7.1.6 allows Accumulo to be installed via Cloudera Manager. The installation is documented in [1] and requires a separate parcel to be installed before attempting to add Accumulo via Cloudera Manager. Having said that, feel free to engage the Cloudera account team, as the investment into Accumulo is not on par with similar counterparts, to review any long-term engagement with Accumulo for meeting your use case. Regards, Smarak [1] https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/opdb-accumulo-installation/topics/opdb-accumulo-install.html
12-21-2022
07:04 AM
1 Kudo
Hello @SagarCapG Confirmed that Phoenix v5.1.0 has the fix for "!primarykeys" to show the primary key linked with a Phoenix table. Per our product documentation, CDP v7.1.6 introduced Phoenix v5.1.0 [1]. As such, I am surprised your team has Phoenix v5.0.0 with CDP v7.1.7, when the official v7.1.7 documentation [2] says Phoenix v5.1.1.7.1.7.0-551 is used. Since the issue is fixed in Phoenix v5.1.x and CDP v7.1.6 onwards ships Phoenix v5.1.x, kindly engage Cloudera Support to review your cluster and identify why CDP v7.1.7 is using Phoenix v5.0.0. Alternatively, upgrade to Phoenix v5.1.x (if you manage Phoenix outside of CDP) to use the "!primarykeys" functionality. Regards, Smarak [1] What's New in Apache Phoenix | CDP Private Cloud (cloudera.com) [2] Cloudera Runtime component versions | CDP Private Cloud
12-20-2022
11:35 PM
Hello @brajeshreddy Since the issue isn't reproduced with the CML release internally and your team has engaged Cloudera Support for further assistance, we shall close the post now. For our fellow community users, the steps to modify a team's name are shared above. If they don't work for you, ensure you are connected as MLAdmin and that caching has been ruled out. Regards, Smarak
12-20-2022
08:28 AM
Hello @sekhar1 Since we haven't heard back from you concerning the post, we shall mark it as resolved with the following action plan: review whether traffic from your machine is allowed to and from the security group linked with the VPC wherein the CML workspace instances are deployed, and check whether the Kubernetes pods associated with the CML workspace cluster are up and running. If they are, such an exception should be reviewed from a network standpoint only; you may reach out to your AWS/platform team to review the traffic between your machine and the VPC within which the CML workspace is deployed. If your team fixed the issue outside of any network concerns, we would appreciate your feedback to ensure our fellow community users can benefit from your experience. Regards, Smarak