Member since: 09-29-2015
Posts: 5226
Kudos Received: 22
Solutions: 34
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1394 | 07-13-2022 07:05 AM |
| | 3585 | 08-11-2021 05:29 AM |
| | 2328 | 07-07-2021 01:04 AM |
| | 1574 | 07-06-2021 03:24 AM |
| | 3546 | 06-07-2021 11:12 PM |
07-18-2020
12:02 PM
3 Kudos
@Henry2410 MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software. Snowflake, on the other hand, is described as "the data warehouse built for the cloud". There is no real equivalence between the MySQL and Snowflake use cases: what you are actually asking is whether Snowflake can play the role of an OLTP database. Snowflake is not an OLTP database; it is an OLAP database, so generally speaking I would say no. Snowflake is a cloud-based warehouse and is used most of the time for OLAP purposes.

Back to your question: Snowflake can be used under the following condition. If you only insert into the target table and rarely update it, you can achieve good performance by using CLUSTER BY and other inline views.

Having said that, to explore your use case a little further, I would ask yourself or your stakeholders the following questions:

1. Do you need millisecond response times for INSERTs, UPDATEs, and SELECTs?
2. Does your application or tool require indexes?
3. Does your application need referential integrity and uniqueness constraints enforced?

If you answered yes to ANY of 1, 2, or 3, then go with MySQL. If you answered NO to ALL of 1, 2, and 3, then Snowflake might be viable. Even then I would not recommend it, as that is not what Snowflake was built for.
07-15-2020
06:17 AM
Hello @Raj78 , thank you for enquiring about how to set up Livy against CDH. Please note that Livy is supported on CDP; it is not supported on CDH6. The main reason a product is not supported on a certain release is that it is not production ready there. Please evaluate our CDP product. You can find the documentation for configuring Livy ACLs on CDP here. Please let us know if you need more information on this topic. Best regards: Ferenc
07-15-2020
12:32 AM
Hello @davidla , thank you for watching our Cloudera OnDemand materials. Currently we do not provide the option of downloading the videos. Best regards: Ferenc
06-25-2020
06:03 AM
Hello @NumeroUnoNU , I've run the "alternatives --list" command on a cluster node and noticed that there is a "hadoop-conf" item, which points to a directory containing hdfs-site.xml. You can also discover it with: "/usr/sbin/alternatives --display hadoop-conf". This led me to search for "/var/lib/alternatives/hadoop-conf", and I found this Community Article reply, which I believe answers your question. In short, if you have e.g. gateway roles deployed for HDFS on a node, you will find the up-to-date hdfs-site.xml in the /etc/hadoop/conf folder. We have diverged a little from the original topic in this thread; to make the conversation easier to read for future visitors, would you mind opening a new thread for each major topic, please? Please let us know if the above information helped you by pressing the "Accept as Solution" button. Best regards: Ferenc
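As a minimal sketch of the pattern described above: once you have located the active configuration directory via the alternatives system, you can pull a single property value out of hdfs-site.xml. The property name and file below are illustrative examples (the snippet generates a sample file so it is self-contained; on a real node you would point CONF_DIR at /etc/hadoop/conf instead):

```shell
# Show which directory the "hadoop-conf" alternative points to on a real node:
#   /usr/sbin/alternatives --display hadoop-conf
#
# Self-contained demonstration against a sample hdfs-site.xml:
CONF_DIR=$(mktemp -d)
cat > "$CONF_DIR/hdfs-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
EOF

# Print the value of dfs.replication from the config file
grep -A1 '<name>dfs.replication</name>' "$CONF_DIR/hdfs-site.xml" \
  | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
```

For anything beyond a quick check, prefer `hdfs getconf` or the Cloudera Manager API over grepping XML by hand.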
06-24-2020
01:02 AM
Hello @iceqboy , thank you for raising your enquiry about how to upgrade the OS version on a cluster. As a first step, please upgrade your OS. [1] points out that Cloudera supports running on mixed minor OS releases temporarily, while the OS upgrade is carried out; in other words, it is less risky to run on different minor OS releases than on entirely different operating systems. [2] describes that: "Upgrading the operating system to a higher version but within the same major release is called a minor release upgrade. For example, upgrading from Redhat 6.8 to 6.9. This is a relatively simple procedure that involves properly shutting down all the components, performing the operating system upgrade, and then restarting everything in reverse order." Once the cluster is on the same OS release, the next step is to upgrade your CM [3]. The CM version has to be higher than or equal to the CDH version you are upgrading to. Then please follow our documentation on how to upgrade to CDH5.16. [4] Please let us know if we addressed your enquiry! Best regards: Ferenc [1] https://docs.cloudera.com/documentation/enterprise/release-notes/topics/rn_consolidated_pcm.html [2] https://docs.cloudera.com/cdp/latest/upgrade-cdh/topics/ug_os_upgrade.html [3] https://docs.cloudera.com/cdp/latest/upgrade-cdh/topics/ug_cm_upgrade.html [4] https://docs.cloudera.com/cdp/latest/upgrade-cdh/topics/ug_cdh_upgrade.html
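The "CM must be at least as new as the target CDH" constraint can be sanity-checked from the shell with `sort -V`, which compares version strings numerically. The version numbers below are made-up examples, not values from this thread:

```shell
# Cloudera Manager must be at the same version as, or newer than, the
# CDH release you are upgrading to. sort -V performs a version-aware sort,
# so the newest version string ends up last.
CM_VERSION="5.16.2"     # example CM version
TARGET_CDH="5.16.1"     # example target CDH version

if [ "$(printf '%s\n' "$TARGET_CDH" "$CM_VERSION" | sort -V | tail -n1)" = "$CM_VERSION" ]; then
  echo "CM $CM_VERSION can manage an upgrade to CDH $TARGET_CDH"
else
  echo "Upgrade Cloudera Manager first"
fi
```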
06-23-2020
02:30 AM
Hello @mhchethan , it is an internal Jira. For future reference, it is DOCS-6740 [HDF3.3.0 SLES12SP3 download location is not shown]. Thank you for confirming you have all the information you need. You can close the thread by pressing the "Accept as Solution" button under the message that you consider answered your enquiry. Best regards: Ferenc
06-22-2020
01:58 AM
Hello @Saimukunth , thank you for reaching out! Please note, the docker image is based on CDH5.13 and is no longer maintained. You can, however, still browse the instructions on how to run the docker image. Going forward, we encourage you to trial our latest product line, CDP. Please let us know if you need any further input regarding trialling CDP. Best regards: Ferenc
06-15-2020
01:07 AM
1 Kudo
Hello @Saagar , thank you for expressing your interest in downloading the QuickStart VM for CDH5.14. Unfortunately, the Cloudera QuickStart VM has been discontinued. You can try the Cloudera docker image, available publicly at https://hub.docker.com/r/cloudera/quickstart , or simply run the command below to download it on a docker-enabled system: docker pull cloudera/quickstart Please note, Cloudera does not officially support the QuickStart VM. The up-to-date product is Cloudera Data Platform, and you can download a trial version to install on-premises here. Best regards: Ferenc
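If you do pull the image, a typical invocation to start it looks like the sketch below. This follows the image's published usage; the port mapping is an illustrative example (8888 is Hue's default) and you would add further `-p` flags for any other service UIs you need. Remember this image is unsupported and unmaintained:

```shell
# Start the quickstart container interactively. --privileged is needed by
# some of the bundled services, and the hostname must be
# quickstart.cloudera so the packaged configuration files resolve.
docker run --hostname=quickstart.cloudera --privileged=true -t -i \
  -p 8888:8888 \
  cloudera/quickstart /usr/bin/docker-quickstart
```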
06-12-2020
04:08 AM
Hello @ijarvis ,
- If you are looking to try out HDP, please consider downloading our HDP Sandbox [1]. It does not require you to become a subscription customer or to have paywall credentials.
- If you would like to deploy HDP in a production environment, please reach out to our Sales Team [2] to guide you further. Once you are a subscription customer, please make sure you are registered on our Support Portal [3]. Please note, the Community Portal registration is different from the Support Portal one. After logging in to the Support Portal, you can navigate to the Downloads page and follow an automated process for the paywall credentials. You will need your license key ready to generate the paywall credentials; if you need a copy of the license key, you can open a non-technical case to request it (once you are registered and logged in to our Support Portal). The binaries are behind a paywall, for which you need paywall credentials. Please see more in our Licensing Policy FAQ under [4].
- You can always download and compile the source code [5], which is not behind the paywall.
Please let me know if you need further input! Thank you: Ferenc [1] https://www.cloudera.com/downloads/hortonworks-sandbox.html [2] https://www.cloudera.com/contact-sales.html [3] https://sso.cloudera.com/register.html [4] https://www.cloudera.com/products/faq.html [5] https://github.com/hortonworks/hadoop-release/releases
06-11-2020
04:38 AM
2 Kudos
There is currently a bug. Please try the following: click Parcel Repositories & Network Settings. You should see an error on one of the parcel repository URLs. Remove that URL and replace it with: https://archive.cloudera.com/cdh7/7.1.1.0/parcels/ This issue may be resolved, or the URL may change, over time as newer versions of CM and Cloudera Runtime are released.