Member since: 09-29-2015
Posts: 5226
Kudos Received: 22
Solutions: 34
My Accepted Solutions
Title | Views | Posted
---|---|---
| 1405 | 07-13-2022 07:05 AM
| 3601 | 08-11-2021 05:29 AM
| 2331 | 07-07-2021 01:04 AM
| 1578 | 07-06-2021 03:24 AM
| 3557 | 06-07-2021 11:12 PM
07-17-2020
03:06 AM
1 Kudo
Hello @Henry2410 , thank you for raising the question about how to migrate data from MySQL and how to move data between clusters in real time. For real-time data streaming we recommend NiFi and Kafka on Cloudera Data Platform. Here is a great blog article about NiFi and Kafka on CDP: "Kafka and NiFi’s availability in CDP Data Hub allows organizations to build the foundation for their Data Movement and Stream Processing use cases in the cloud. CDP Data Hub provides a cloud-native service experience built to meet the security and governance needs of large enterprises." One way of exporting a MySQL database is to stream out the data, which you can do with the products above. Please also take a look at Cloudera Data Warehouse as a single place to replace solutions from multiple vendors. For data migration into CDP, please check out this documentation. For backup and disaster recovery you can use Replication Manager. Which Cloudera product are you currently using? Thank you: Ferenc
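P.S.: as a purely illustrative sketch of the "stream out the data" idea (not the NiFi/Kafka-on-CDP setup itself), the following Python snippet polls a MySQL table and publishes new rows to a Kafka topic. It assumes the mysql-connector-python and kafka-python packages; the host names, credentials, the "orders" table and the "orders-stream" topic are all made-up placeholders. In practice, a NiFi flow (e.g. QueryDatabaseTable into PublishKafka) does the same job without custom code.

```python
# Hypothetical sketch only: poll a MySQL table and stream new rows to Kafka.
# All connection details, table and topic names below are placeholders.
import json
import time

import mysql.connector           # pip install mysql-connector-python
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="kafka-broker:9092",
    value_serializer=lambda v: json.dumps(v, default=str).encode("utf-8"),
)
conn = mysql.connector.connect(
    host="mysql-host", user="etl", password="secret", database="shop"
)

last_id = 0
while True:
    cursor = conn.cursor(dictionary=True)
    cursor.execute("SELECT * FROM orders WHERE id > %s ORDER BY id", (last_id,))
    for row in cursor:
        producer.send("orders-stream", row)  # each new row becomes one JSON message
        last_id = row["id"]
    cursor.close()
    producer.flush()
    time.sleep(5)  # simple poll interval; NiFi's QueryDatabaseTable is the managed equivalent
```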
07-15-2020
06:17 AM
Hello @Raj78 , thank you for enquiring about how to set up Livy against CDH. Please note that Livy is supported on CDP; however, it is not supported on CDH 6. The main reason a product is not supported on a certain release is that it is not production-ready there. Please evaluate our CDP product. Please find here the documentation for configuring Livy ACLs on CDP. Please let us know if you need more information on this topic. Best regards: Ferenc
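P.S.: to give a flavour of what Livy offers once you are on a supported platform, here is a hedged sketch that drives Livy's public REST API (POST /sessions, then POST /sessions/{id}/statements) from Python. The host name is a placeholder.

```python
# Hedged sketch: submit PySpark code through the Livy REST API.
# "livy-host" is a placeholder for your Livy server.
import time

import requests

LIVY = "http://livy-host:8998"

# Create an interactive PySpark session.
session_id = requests.post(LIVY + "/sessions", json={"kind": "pyspark"}).json()["id"]

# Wait until the session is ready to accept statements.
while requests.get(f"{LIVY}/sessions/{session_id}").json()["state"] != "idle":
    time.sleep(2)

# Run one statement and print Livy's response.
stmt = requests.post(
    f"{LIVY}/sessions/{session_id}/statements",
    json={"code": "spark.range(100).count()"},
).json()
print(stmt)
```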
07-15-2020
12:32 AM
Hello @davidla , thank you for watching our Cloudera OnDemand materials. Currently we do not provide the option of downloading the videos. Best regards: Ferenc
07-14-2020
06:42 AM
1 Kudo
Hello @ashish_inamdar , based on the stack trace you shared with us:

Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: org.postgresql.util.PSQLException: ERROR: relation "metainfo" does not exist
Error Code: 0
Call: SELECT "metainfo_key", "metainfo_value" FROM metainfo WHERE ("metainfo_key" = ?)
bind => [1 parameter bound]
Query: ReadObjectQuery(name="readMetainfoEntity" referenceClass=MetainfoEntity sql="SELECT "metainfo_key", "metainfo_value" FROM metainfo WHERE ("metainfo_key" = ?)")

the "metainfo" table does not exist in the Ambari database. Did you follow the installation steps, please? I've found that the same issue was resolved in this thread. Quoting: "I used yum to uninstall ambari and associated packages, including postgres. Then I re-installed Ambari using yum and ran ambari-server install." Please let us know if it helped! Kind regards: Ferenc
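P.S.: if you would like to verify the database state before reinstalling, a minimal sketch along these lines checks whether the metainfo table exists. The host and credentials below are illustrative defaults; adjust them to your Ambari database settings.

```python
# Hedged sketch: check whether Ambari's "metainfo" table exists in PostgreSQL.
# Connection parameters are illustrative; use your actual Ambari DB settings.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="ambari", user="ambari", password="bigdata")
cur = conn.cursor()
cur.execute("SELECT to_regclass('public.metainfo')")
print(cur.fetchone()[0])  # prints "metainfo" if the table exists, None if it is missing
```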
06-25-2020
06:03 AM
Hello @NumeroUnoNU , I've run the "alternatives --list" command on a cluster node and noticed that there is a "hadoop-conf" item, which points to a directory containing hdfs-site.xml. You can also discover it with: "/usr/sbin/alternatives --display hadoop-conf". This led me to google for "/var/lib/alternatives/hadoop-conf", and I found this Community Article reply, which I believe answers your question. In short, if you have e.g. gateway roles deployed for HDFS on a node, you will find the up-to-date hdfs-site.xml in the /etc/hadoop/conf folder. We have diverged a little from the original topic in this thread; to make the conversation easier to read for future visitors, would you mind opening a new thread for each major topic, please? Please let us know if the above information helped you by pressing the "Accept as Solution" button. Best regards: Ferenc
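P.S.: a small sketch of the discovery steps above, assuming the alternatives output format I saw on my node (the dfs.nameservices key at the end is just an example property to look up):

```python
# Hedged sketch: locate the active hadoop-conf directory via alternatives,
# then read properties from the hdfs-site.xml found there.
import subprocess
import xml.etree.ElementTree as ET

out = subprocess.run(
    ["/usr/sbin/alternatives", "--display", "hadoop-conf"],
    capture_output=True, text=True,
).stdout
# The "link currently points to <dir>" line names the active config directory.
conf_dir = next(l.split()[-1] for l in out.splitlines() if "currently points to" in l)

tree = ET.parse(conf_dir + "/hdfs-site.xml")
props = {p.findtext("name"): p.findtext("value")
         for p in tree.getroot().findall("property")}
print(props.get("dfs.nameservices"))  # example lookup
```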
06-24-2020
08:55 AM
1 Kudo
Hello @NumeroUnoNU , yes, you either parse the contents of hdfs-site.xml or you utilise the HDFS Client, so you do not need to worry about implementation details. I've quickly googled an explanation of what an HDFS Client is [1]. If you go for the parsing exercise, make sure you are not referencing a specific NameNode directly; otherwise your script must be prepared to handle a NameNode failover itself. Kind regards: Ferenc [1] https://stackoverflow.com/questions/43221993/what-does-client-exactly-mean-for-hadoop-hdfs
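P.S.: for the client route, a minimal sketch: the stock Hadoop CLI ships an "hdfs getconf" subcommand that resolves configuration through the HDFS client itself, so HA logical names and NameNode failover are handled for you. The keys shown are standard ones; the values are cluster-specific.

```python
# Hedged sketch: resolve HDFS configuration through the client ("hdfs getconf")
# instead of parsing hdfs-site.xml by hand.
import subprocess

def hdfs_conf(key: str) -> str:
    return subprocess.run(
        ["hdfs", "getconf", "-confKey", key],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

print("dfs.nameservices =", hdfs_conf("dfs.nameservices"))  # e.g. "nameservice1"
print("fs.defaultFS =", hdfs_conf("fs.defaultFS"))          # e.g. "hdfs://nameservice1"
```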
06-24-2020
06:45 AM
Hello @NumeroUnoNU , Cloudera Manager takes care of the Client Configuration files [1]. It makes sure that the latest configurations are deployed to all nodes where the related services are deployed or a gateway role for that service is configured. On a node where e.g. a DataNode role is running, you will find the client configs under this folder: /var/run/cloudera-scm-agent/process/[largest number]...[Service name].../ The up-to-date configs are always in the folder whose name starts with the largest number. Hope this helps! Kind regards: Ferenc [1] https://docs.cloudera.com/documentation/enterprise/5-16-x/topics/cm_mc_client_config.html
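P.S.: a hedged sketch of the "largest number wins" rule, assuming process directories named like 1234-hdfs-DATANODE (the DATANODE filter is just an example role):

```python
# Hedged sketch: pick the freshest process config directory for a role.
# Directory names are assumed to look like "<number>-<service>-<ROLE>".
from pathlib import Path

base = Path("/var/run/cloudera-scm-agent/process")
candidates = [d for d in base.iterdir() if "DATANODE" in d.name]
latest = max(candidates, key=lambda d: int(d.name.split("-")[0]))
print(latest / "hdfs-site.xml")  # the up-to-date config lives here
```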
06-24-2020
01:02 AM
Hello @iceqboy , thank you for raising your enquiry about how to upgrade the OS version on a cluster. As a first step, please upgrade your OS. [1] points out that it is supported by Cloudera to run on mixed minor OS releases temporarily, while the OS upgrade is carried out; running mixed minor releases is less risky than running mixed operating systems. [2] describes that: "Upgrading the operating system to a higher version but within the same major release is called a minor release upgrade. For example, upgrading from Redhat 6.8 to 6.9. This is a relatively simple procedure that involves properly shutting down all the components, performing the operating system upgrade, and then restarting everything in reverse order." Once the whole cluster is on the same OS release, the next step is to upgrade your CM [3]. The CM version has to be higher than or equal to the CDH version you are upgrading to. Then please follow our documentation on how to upgrade to CDH 5.16 [4]. Please let us know if we addressed your enquiry! Best regards: Ferenc [1] https://docs.cloudera.com/documentation/enterprise/release-notes/topics/rn_consolidated_pcm.html [2] https://docs.cloudera.com/cdp/latest/upgrade-cdh/topics/ug_os_upgrade.html [3] https://docs.cloudera.com/cdp/latest/upgrade-cdh/topics/ug_cm_upgrade.html [4] https://docs.cloudera.com/cdp/latest/upgrade-cdh/topics/ug_cdh_upgrade.html
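P.S.: the ordering constraint between CM and the target CDH version is just a version comparison; a tiny sketch with illustrative version strings:

```python
# Hedged sketch: "CM version must be >= the CDH version you upgrade to".
# The version strings are illustrative.
def parse(v: str) -> tuple:
    return tuple(int(x) for x in v.split("."))

cm, target_cdh = "5.16.2", "5.16.1"
assert parse(cm) >= parse(target_cdh), "Upgrade Cloudera Manager first"
print("CM", cm, "can manage CDH", target_cdh)
```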
06-23-2020
02:30 AM
Hello @mhchethan , it is an internal Jira; for future reference it is DOCS-6740 [HDF3.3.0 SLES12SP3 download location is not shown]. Thank you for confirming you have all the information you need. You can close the thread by pressing the "Accept as Solution" button under the message that you consider answered your enquiry. Best regards: Ferenc
06-23-2020
12:43 AM
Hello @mhchethan , I have reached out to Product Management internally and confirmed that the binaries for SLES12 SP1 can be used for SP3 too. I have raised a Jira internally so our documentation will reflect this in the future. Please let us know if you need any further information! Thank you: Ferenc