Member since
10-01-2018
802
Posts
144
Kudos Received
130
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3537 | 04-15-2022 09:39 AM |
| | 2862 | 03-16-2022 06:22 AM |
| | 7472 | 03-02-2022 09:44 PM |
| | 3446 | 03-02-2022 08:40 PM |
| | 2359 | 01-05-2022 07:01 AM |
12-27-2021
05:18 AM
@ronys There is no logical difference for CDSW on HDP. You just have to run the same instructions from the command line; only the path changes, to /etc/cdsw/scripts instead of /opt/cloudera/parcels/CDSW/scripts.
12-27-2021
12:37 AM
@ronys From a CDSW functional perspective you need to adjust some configuration for the new IP, and the most important thing is that forward and reverse DNS lookups MUST work. Apart from that, you can follow these simple steps.
1.) Stop the CDSW service in Cloudera Manager (CM).
2.) Take a backup of the CDSW app dir on each host by running the following command: tar cvzf cdsw.tar.gz /var/lib/cdsw/*
3.) Change the IP address of the host(s).
4.) Ensure that forward and reverse DNS still work (use the dig or nslookup commands). If not, work with netops to fix it before proceeding.
5.) Ensure that /etc/hosts is not used for DNS. If it is, switch to a DNS server before proceeding.
6.) On each host, check for multiple name servers in /etc/resolv.conf. If there are multiple, work with netops to ensure that each name server recognizes the new IP address.
7.) Update the CDSW config in CM with the new IP address of the CDSW master host: under CM > CDSW > Configuration, search for "IP" and set MASTER_IP to the correct IP address of the CDSW master host.
8.) Save the changes and run a cluster-wide Deploy Client Configuration.
9.) Complete a weave reset on each host: /opt/cloudera/parcels/CDSW/cni/bin/weave reset
10.) Under CM > CDSW > Instances, select all instances and run Prepare Node.
11.) On every CDSW host (master and workers), clear all iptables restrictions set up by k8s, docker, etc.: iptables-save | awk '/^[*]/ { print $1 } /^:[A-Z]+ [^-]/ { print $1 " ACCEPT" ; } /COMMIT/ { print $0; }' | iptables-restore
12.) In CM, start the Docker daemon on every CDSW host (do not start the other CDSW roles).
13.) Run the weave reset again on every CDSW host: /opt/cloudera/parcels/CDSW/cni/bin/weave reset
14.) In CM, stop all of the Docker daemons.
15.) In CM, select the master role > Actions > Prepare Node; then, for every worker node, select the worker role > Actions > Prepare Node.
16.) Start all CDSW services in CM.
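The iptables one-liner above works by rewriting the saved ruleset: it keeps each table header, resets every built-in chain policy to ACCEPT, and drops all rule lines before the result is fed back to iptables-restore. A minimal dry-run sketch of that filter, using a sample ruleset in place of a live iptables-save (which needs root):

```shell
# The awk filter from the steps above: keep table headers (*filter, *nat, ...),
# rewrite built-in chain policy lines to ACCEPT, drop every -A rule line.
reset_filter() {
  awk '/^[*]/ { print $1 } /^:[A-Z]+ [^-]/ { print $1 " ACCEPT" ; } /COMMIT/ { print $0; }'
}

# Dry run on a sample ruleset; on a real CDSW host you would run:
#   iptables-save | reset_filter | iptables-restore
printf '%s\n' '*filter' ':INPUT DROP [0:0]' '-A INPUT -j KUBE-FIREWALL' 'COMMIT' | reset_filter
# prints:
#   *filter
#   :INPUT ACCEPT
#   COMMIT
```

The k8s/docker jump rules (`-A INPUT -j KUBE-...`) never match any pattern, so they are simply dropped from the restored ruleset.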
12-21-2021
01:38 AM
@ronys The requirements are valid for all worker and master hosts. All Cloudera Data Science Workbench gateway hosts must be part of the same data center and use the same network; hosts from different data centers or networks can result in unreliable performance. A wildcard subdomain such as *.cdsw.company.com must be configured. Wildcard subdomains are used to provide isolation for user-generated content. The wildcard DNS hostname configured for Cloudera Data Science Workbench must be resolvable from both the CDSW cluster and your browser. So you have to make sure the DNS and the wildcard with the TLS host certificate (if any) are properly configured. For reference, use any of the working hosts.
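A quick way to verify the resolvability requirement from a gateway host is a forward lookup through the system resolver. A minimal sketch, using getent so the check follows the same resolution path the host itself uses; cdsw.company.com and the label under the wildcard are hypothetical names to substitute with your own:

```shell
check_dns() {
  # Forward lookup: the name must resolve to an address through the
  # system resolver. Pair this with a reverse lookup (dig -x <ip>)
  # on the resulting address for the full forward/reverse check.
  ip=$(getent hosts "$1" | awk '{ print $1; exit }')
  if [ -z "$ip" ]; then
    echo "forward lookup FAILED for $1"
    return 1
  fi
  echo "$1 -> $ip"
}

# Sanity check against localhost; repeat for the real names, e.g.:
#   check_dns cdsw.company.com            (master hostname, hypothetical)
#   check_dns anything.cdsw.company.com   (any label under the wildcard)
check_dns localhost
```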
12-20-2021
10:53 AM
@Satyajit Do you have the same repo with credentials on the CM server host as well? I am guessing the CM server has the repo file without credentials. Also, follow this doc to add a new host: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/cm_mc_adding_hosts.html#cmug_topic_7_5_1__section_e5g_h5r_j3b
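For reference, a yum repo file for a paywalled Cloudera repository with credentials embedded in the baseurl looks roughly like the sketch below. The version path and credentials are placeholders, not real values; copy the exact baseurl from one of your working hosts so the CM server host uses the same credentials:

```ini
[cloudera-manager]
name=Cloudera Manager
# username/password are your Cloudera paywall credentials; the 7.x.x
# version segment is a placeholder - match it to your working hosts.
baseurl=https://username:password@archive.cloudera.com/p/cm7/7.x.x/redhat7/yum/
gpgcheck=1
enabled=1
```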
12-20-2021
10:48 AM
@ronys This seems to be an issue with the TLS setup within CDSW. Make sure the wildcard domain is properly configured, then restart CDSW to see if that makes progress. https://docs.cloudera.com/cdsw/1.9.2/installation/topics/cdsw-set-up-a-wildcard-dns-subdomain.html
12-06-2021
10:48 AM
1 Kudo
@wert_1311 No, downtime is recommended. https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade-cdh/topics/ug_cdh_upgrade_hdfs_finalize.html
12-06-2021
10:34 AM
@Ismail27 The user you are specifying, "admin": does it have root access on the CM host? I believe you are providing the Cloudera Manager UI user, but here you need the SSH user, i.e. root (or a user with equivalent root privileges on the host), to run the CMCA commands; only then will it work.
10-01-2021
03:11 AM
@Sam2020 Logically this is a maintenance upgrade, so there is not much trouble and few extra steps are needed. Your steps look good up to "c", i.e. once you upgrade the Cloudera Manager server to 7.4.4, only a very simple step remains: activate the parcels for 7.1.7, that's it. In layman's terms the steps look like this. I am assuming you have the parcels for 7.1.7 available (credentials and repo details). Then follow the steps below.
1.) Stop the cluster services.
2.) Navigate to CM -> Hosts -> Parcels -> Parcel Repository & Network Settings and add the 7.1.7 parcel to "Remote Parcel Repository URLs".
3.) Take a backup of the required service databases following the steps here: https://docs.cloudera.com/cdp-private-cloud/latest/upgrade-cdp/topics/ug_cdh_upgrade_backup.html
4.) Download, distribute, and activate the patch parcel.
5.) Start the services. You are all set.
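If you prefer to script the download/distribute/activate step, the Cloudera Manager REST API exposes the parcel lifecycle as commands (startDownload, startDistribution, activate). A minimal sketch of building those command URLs; the CM host, cluster name, and parcel version below are hypothetical placeholders to replace with your own values:

```shell
CM_API='https://cm-host.example.com:7183/api/v41'   # hypothetical CM host
CLUSTER='Cluster%201'                               # URL-encoded cluster name
PRODUCT='CDH'
VERSION='7.1.7-1.cdh7.1.7.pX'                       # placeholder parcel version

parcel_url() {
  # $1 = startDownload | startDistribution | activate
  echo "$CM_API/clusters/$CLUSTER/parcels/products/$PRODUCT/versions/$VERSION/commands/$1"
}

# e.g.  curl -u admin -X POST "$(parcel_url startDownload)"
for step in startDownload startDistribution activate; do
  parcel_url "$step"
done
```

Wait for each command to finish (the parcel's stage is visible under the CM Parcels page, or via the parcel resource in the API) before issuing the next one.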
09-30-2021
09:21 PM
@reca This should work. Earlier Oracle Database versions are supported for C5.13; please see the support matrix here: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/cm_ag_migrate_postgres_db_to_oracle_mysql.html Once you decide on the Oracle DB version, you can just follow this guide for migrating a Postgres DB to Oracle: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/cm_ag_migrate_postgres_db_to_oracle_mysql.html
09-26-2021
09:24 PM
@RizkyMei I would suggest you stop CDSW and then deploy the client configuration from the CM > Cluster > Actions menu; it seems the client configuration is not being pushed. After that, start CDSW and try again.