Member since: 09-29-2015
Posts: 5243
Kudos Received: 22
Solutions: 34
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2659 | 07-13-2022 07:05 AM |
| | 5696 | 08-11-2021 05:29 AM |
| | 3408 | 07-07-2021 01:04 AM |
| | 3041 | 07-06-2021 03:24 AM |
| | 4936 | 06-07-2021 11:12 PM |
06-24-2020
08:55 AM
1 Kudo
Hello @NumeroUnoNU , yes, you either parse the contents of hdfs-site.xml or you use the HDFS client, so you do not need to worry about implementation details. I have quickly googled an explanation of what the HDFS client is [1]. If you go for the parsing approach, make sure you are not hard-coding a reference to a single NameNode; otherwise your script must be prepared to handle a NameNode failover. Kind regards: Ferenc [1] https://stackoverflow.com/questions/43221993/what-does-client-exactly-mean-for-hadoop-hdfs
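If you do go down the parsing route, the idea above can be sketched as follows. This is a minimal illustration, assuming a standard HA setup where hdfs-site.xml defines `dfs.nameservices`, `dfs.ha.namenodes.<ns>`, and `dfs.namenode.rpc-address.<ns>.<nn>` properties (the hostnames in the sample are hypothetical); the point is to collect *all* NameNode addresses rather than hard-coding the currently active one.

```python
import xml.etree.ElementTree as ET

def hdfs_props(xml_text):
    """Parse hdfs-site.xml content into a {name: value} dict."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value") for p in root.iter("property")}

def namenode_addresses(props):
    """Return the RPC addresses of every NameNode in the (first) HA
    nameservice, so a script can try each one on failover instead of
    relying on a single, possibly standby, NameNode."""
    ns = props["dfs.nameservices"].split(",")[0]
    nn_ids = props[f"dfs.ha.namenodes.{ns}"].split(",")
    return [props[f"dfs.namenode.rpc-address.{ns}.{nn.strip()}"] for nn in nn_ids]

# Sample hdfs-site.xml content (hypothetical hostnames):
SAMPLE = """<configuration>
  <property><name>dfs.nameservices</name><value>ns1</value></property>
  <property><name>dfs.ha.namenodes.ns1</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.ns1.nn1</name><value>host1.example.com:8020</value></property>
  <property><name>dfs.namenode.rpc-address.ns1.nn2</name><value>host2.example.com:8020</value></property>
</configuration>"""

print(namenode_addresses(hdfs_props(SAMPLE)))
# → ['host1.example.com:8020', 'host2.example.com:8020']
```

A script built this way can attempt each address in turn, which is exactly the failover situation mentioned above.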
06-24-2020
06:45 AM
Hello @NumeroUnoNU , Cloudera Manager takes care of the client configuration files [1]. It makes sure that the latest configurations are deployed to all nodes where the related services are deployed or where a gateway role for that service is configured. You will find the client configs on a node where e.g. a DataNode role is running, under this folder: /var/run/cloudera-scm-agent/process/[largest number]...[Service name].../ The up-to-date configs are always in the folder whose name starts with the largest number. Hope this helps! Kind regards: Ferenc [1] https://docs.cloudera.com/documentation/enterprise/5-16-x/topics/cm_mc_client_config.html
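The "largest number wins" rule above can be sketched as a small helper. This is only an illustration: the exact directory-name pattern (`<number>-...-<ROLE>`, e.g. `5834-hdfs-DATANODE`) is an assumption based on typical agent layouts, so adjust the pattern to what you actually see under /var/run/cloudera-scm-agent/process/ on your nodes.

```python
import os
import re

def latest_process_dir(process_root, role_marker):
    """Among agent process directories assumed to look like
    '<number>-...-<ROLE>' (e.g. '5834-hdfs-DATANODE'), return the full
    path of the one with the largest leading number for the given role,
    or None if no directory matches."""
    best_num, best_dir = -1, None
    for entry in os.listdir(process_root):
        m = re.match(r"(\d+)-.*" + re.escape(role_marker), entry)
        if m and int(m.group(1)) > best_num:
            best_num, best_dir = int(m.group(1)), entry
    return os.path.join(process_root, best_dir) if best_dir else None

# Usage (on a cluster node):
#   latest_process_dir("/var/run/cloudera-scm-agent/process", "DATANODE")
```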
06-24-2020
01:02 AM
Hello @iceqboy , thank you for raising your enquiry about how to upgrade the OS version on a cluster. As a first step, please upgrade your OS. [1] points out that, temporarily, while the OS upgrade is carried out, Cloudera supports running on mixed minor OS releases. In other words, it is less risky to run on different minor releases of the same OS than on entirely different OS versions. [2] describes that: "Upgrading the operating system to a higher version but within the same major release is called a minor release upgrade. For example, upgrading from Redhat 6.8 to 6.9. This is a relatively simple procedure that involves properly shutting down all the components, performing the operating system upgrade, and then restarting everything in reverse order." Once the whole cluster is on the same OS release, the next step is to upgrade your Cloudera Manager [3]. The CM version has to be higher than or equal to the CDH version you are upgrading to. Then please follow our documentation on how to upgrade to CDH 5.16 [4]. Please let us know if we addressed your enquiry! Best regards: Ferenc [1] https://docs.cloudera.com/documentation/enterprise/release-notes/topics/rn_consolidated_pcm.html [2] https://docs.cloudera.com/cdp/latest/upgrade-cdh/topics/ug_os_upgrade.html [3] https://docs.cloudera.com/cdp/latest/upgrade-cdh/topics/ug_cm_upgrade.html [4] https://docs.cloudera.com/cdp/latest/upgrade-cdh/topics/ug_cdh_upgrade.html
06-23-2020
02:30 AM
Hello @mhchethan , it is an internal jira. For future reference, it is DOCS-6740 [HDF3.3.0 SLES12SP3 download location is not shown]. Thank you for confirming you have all the information you need. You can close the thread by pressing the "Accept as Solution" button under the message that you consider answered your enquiry. Best regards: Ferenc
06-23-2020
12:43 AM
Hello @mhchethan , I have reached out to Product Management internally, and the binaries for SLES12 SP1 can be used for SP3 too. I have raised a jira internally so our documentation will reflect this in the future. Please let us know if you need any further information! Thank you: Ferenc
06-22-2020
06:25 AM
Hello @mhchethan , good point, let me check internally. Thank you: Ferenc
06-22-2020
03:57 AM
Hello @mhchethan , thank you for sharing your concerns with us. All the source code remains reachable; however, the binaries have moved behind a paywall. I understand that the wording of the sentence might be interpreted the way you described. In this report, on page 451, there is a clearer explanation: "Cloudera notes its full commitment to open source and will continue to follow the practice of making contributions to the upstream source first, including to any new open source projects. Access to binaries, however, will only be available from Cloudera and will require a subscription agreement with the company to access, which is a departure from how it previously distributed its binaries. The reason for putting binaries behind a paywall is that it provides some level of protection for the vendor. The binaries contain Cloudera-specific IP that turns the many disparate open source projects into an enterprise-grade functioning system." Hope it helps! Best regards: Ferenc
06-22-2020
02:30 AM
Hello @NumeroUnoNU , thank you for confirming that the github repo covers your enquiries. Regarding WebHDFS, I would use the hdfs-site.xml config file to get the URLs of the NameNodes and DataNodes after you have enabled it. The Apache Hadoop WebHDFS documentation describes further how the URIs are composed. Please let me know if this addresses your enquiry. Kind regards: Ferenc
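As a small illustration of how those URIs are composed, the Apache Hadoop WebHDFS docs define the REST path as `http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=...`. The hostname below is hypothetical, and the default NameNode HTTP port depends on your Hadoop version (9870 in Hadoop 3.x, 50070 in Hadoop 2.x), so take the host and port from your hdfs-site.xml rather than hard-coding them.

```python
def webhdfs_url(host, path, op, port=9870, user=None):
    """Compose a WebHDFS REST URI of the form documented by Apache Hadoop:
    http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=<OP>[&user.name=<USER>].
    `path` is the absolute HDFS path, starting with '/'."""
    url = f"http://{host}:{port}/webhdfs/v1{path}?op={op}"
    if user:
        url += f"&user.name={user}"
    return url

print(webhdfs_url("nn1.example.com", "/tmp", "LISTSTATUS", user="hdfs"))
# → http://nn1.example.com:9870/webhdfs/v1/tmp?op=LISTSTATUS&user.name=hdfs
```

You can then issue the request with any HTTP client (e.g. curl); operations such as OPEN redirect you to the DataNode that serves the actual data.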
06-22-2020
01:58 AM
Hello @Saimukunth , thank you for reaching out! Please note that the docker image is based on CDH 5.13 and is no longer maintained. You can, however, still browse the instructions on how to run the docker image. Going forward, we encourage you to trial our latest product line, CDP. Please let us know if you need any further input regarding trialling CDP. Best regards: Ferenc
06-22-2020
12:14 AM
Hello @mhchethan , all the binaries are now behind a paywall. For more details, please read this reply. In case you would like to trial the product, please consider downloading our sandbox release. Please let us know if we addressed your enquiry! Thank you: Ferenc