Member since: 01-16-2018
Posts: 607
Kudos Received: 48
Solutions: 106
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 404 | 05-06-2024 06:09 AM |
| 543 | 05-06-2024 06:00 AM |
| 579 | 05-06-2024 05:51 AM |
| 627 | 05-01-2024 07:38 AM |
| 668 | 05-01-2024 06:42 AM |
02-27-2023
10:54 PM
1 Kudo
Hello @bgkim Thanks for using Cloudera Community. To your question, the Composite Primary Key requires using both A & B in the WHERE Clause, as the Indexing is done on both columns collectively. As such, your SELECT Query would ideally benefit from creating a Local Index on A & C. You may review [1]: a Read-Heavy Use-Case benefits from a Global Index, with the Penalty incurred during Writes. Additionally, Phoenix offers Covered Indexes, & the Explain Plan helps confirm Index Usage. Link [2] offers a few examples as well. With all recommendations, the best advice is always to review the Performance internally prior to implementing them in Production.
Regards, Smarak
[1] https://phoenix.apache.org/secondary_indexing.html
[2] https://learn.microsoft.com/en-us/azure/hdinsight/hbase/apache-hbase-phoenix-performance
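As a minimal sketch of the advice above: create a Local Index on A that also covers C, then confirm via the Explain Plan that the index is picked up. The table, column, and index names (MY_TABLE, A, C, IDX_A_C) and the ZooKeeper quorum zk-host:2181 are hypothetical placeholders, and the sketch assumes the Phoenix sqlline client (phoenix-sqlline) is available on the host; adapt it to your schema and cluster before trying it outside a test environment.

```bash
# Sketch only: phoenix-sqlline, zk-host:2181, and all object names are assumptions.
cat > /tmp/index_check.sql <<'SQL'
-- Local index on A that also covers C, so the SELECT below can be served by the index
CREATE LOCAL INDEX IF NOT EXISTS IDX_A_C ON MY_TABLE (A) INCLUDE (C);
-- Confirm the index is actually used before relying on it in Production
EXPLAIN SELECT C FROM MY_TABLE WHERE A = 'some_value';
SQL
phoenix-sqlline zk-host:2181 /tmp/index_check.sql
```

If the Explain Plan still reports a full scan over the data table, the index is not being used, and a Global or Covered Index per [1] may be the better fit for a Read-Heavy workload.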
02-01-2023
09:19 AM
I agree with Smarak; the error code typically means that there were not enough resources available to start the job. You could use the Grafana dashboard (available on the Admin page) to look at the cluster resources and load around the time you hit this issue. Is it happening consistently? For Jobs, I usually see this at the start or end of the month, when a lot of people schedule periodic jobs and they all trigger at the same time.
01-18-2023
12:10 AM
Hello @Girija Since we haven't heard back from you, we shall mark the Post as Solved. If you have any further ask, feel free to update the Post. In Summary, I wasn't able to replicate the issue internally, as I was able to create a ConfigSet using the "_default" ConfigSet as the baseConfig. You can use the below solrctl command to create a ConfigSet with the Solr KeyTab:
solrctl config --create Test_Config _default -p configSetProp.immutable=false
If the above Command fails, running the solrctl command with "--trace" after "solrctl" & before "config" will print trace logging & assist in troubleshooting the issue faced by your team.
Regards, Smarak
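For reference, the two commands described above side by side; Test_Config is just the example ConfigSet name from this thread, and the host is assumed to already hold a valid Kerberos ticket for the Solr principal.

```bash
# Authenticate with the Solr KeyTab first (kinit with your solr principal/keytab).

# Create a ConfigSet named Test_Config based on the _default baseConfig:
solrctl config --create Test_Config _default -p configSetProp.immutable=false

# If the command fails, re-run it with "--trace" placed after "solrctl" and
# before "config" to print trace logging for troubleshooting:
solrctl --trace config --create Test_Config _default -p configSetProp.immutable=false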
01-17-2023
11:03 PM
Hello @pankshiv1809 Since we haven't heard back from you concerning the Post, we are marking it as Solved. If you have any further ask, feel free to update the Post & we shall get back to you accordingly.
Regards, Smarak
01-16-2023
09:09 PM
Hello @Ryan_2002 Thanks for engaging Cloudera Community, & thank you for the detailed description of the Problem. Your ask is valid, yet reviewing it over a Community Post isn't the most suitable approach. Would it be feasible for you to engage Cloudera Support? That would allow our Team to work with you via a Screen-Sharing Session & Log exchange, neither of which is feasible in the Community, & would greatly expedite the review of your ask.
Regards, Smarak
01-16-2023
01:54 AM
Hello @panb We hope your Query has been addressed & shall mark the Post as Resolved. In Summary, your Team needs to meet the requirements stated in [1], which don't differentiate by Processor Type; I believe your Team is referring to the Hygon Dhyana Processor. Note that the Hardware requirements shared are for CDP v7.1.8, as CDH is no longer recommended owing to End-Of-Life. As a Best Practice, I suggest engaging the Cloudera Account Team associated with your Organisation to perform due diligence with respect to Supportability & Best Practices prior to onboarding Use-Cases onto any new Platform wherein your Team doubts Supportability.
Regards, Smarak
[1] Hardware Requirements | CDP Private Cloud (cloudera.com)
01-13-2023
01:43 AM
Hi Smarak, thanks for your answer. That helps me!
01-03-2023
08:46 AM
How did you increase the timeout to 15 minutes?
01-02-2023
10:03 PM
Hello @quangbilly79 Thanks for using Cloudera Community. Based on your Post, you may consider the "Kafka Gateway" as the Client for Kafka, set up on the Hosts to which the role is added via Cloudera Manager "Assign Roles". A Client/Gateway is familiar with the Service (Kafka in this Case), & all Client/Service Configs are made available to the Client/Gateway without any manual intervention; any change made to the Service or Client Configs is pushed to the Service/Client Configuration by Cloudera Manager. Imagine a Scenario wherein you wish to run "hdfs dfs -ls" against an HDFS FileSystem. Simply running the Command won't work unless the Host on which "hdfs dfs -ls" is run knows the Setup (HDFS FileSystem, NameNode, Port, Protocol); review [1] for an Example. Adding an HDFS Gateway ensures the User doesn't need to configure the Client manually, with Cloudera Manager doing the needful. The Kafka Gateway operates similarly; without it, you would need to configure the Client/Gateway Setup manually. Hope the above answers your query concerning the Gateway Role.
Regards, Smarak
[1] https://www.ibm.com/docs/en/spectrum-scale-bda?topic=hdfs-clients-configuration
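To make the Gateway point concrete, here is a small sketch; the path /tmp, the broker address broker-1:9092, and the topic name "test" are illustrative placeholders, and the exact CLI wrapper names depend on your CDP packaging.

```bash
# On a host carrying the HDFS Gateway role, Cloudera Manager has already deployed
# the client configuration (FileSystem URI, NameNode host/port, protocol), so this
# command works without any manual setup:
hdfs dfs -ls /tmp

# Without a Gateway role, the equivalent would require hand-maintaining the client
# configs (core-site.xml/hdfs-site.xml, or the Kafka client properties) on the host.
# With a Kafka Gateway role in place, the Kafka CLI tools on that host can rely on
# the deployed client configuration, e.g. (broker and topic names are placeholders):
kafka-console-consumer --bootstrap-server broker-1:9092 --topic test --from-beginning
```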