Member since
10-01-2018
799
Posts
139
Kudos Received
128
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 268 | 04-15-2022 09:39 AM
 | 218 | 03-16-2022 06:22 AM
 | 206 | 03-02-2022 09:44 PM
 | 154 | 03-02-2022 08:40 PM
 | 252 | 01-05-2022 07:01 AM
01-05-2022
07:01 AM
1 Kudo
@wenzf Yes, I believe Sentry stores some information in HDFS. Take a look at the docs below, which cover the architecture. https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/sg_sentry_overview.html#sentry_overview https://cwiki.apache.org/confluence/display/SENTRY/Sentry+Tutorial
01-05-2022
05:50 AM
@Dasan I am not an expert on this, but the issue is due to a bad HDFS URI (hdfs:// hdfs://nameservice1./home/xxx/Airports_new ) for one of your Hive databases. You can test using the steps below:
1. Delete the database from Hive.
2. Clean up the manually created HDFS directory.
3. Clean all of its metadata entries.
4. Create the database again.
01-05-2022
03:36 AM
@muslihuddin No, I didn’t find any other bug. I am not sure why, in your case, modifying the java.security file didn’t work on its own. The solution you have at the moment is also fine in my opinion; no harm in that.
01-04-2022
10:56 PM
@wenzf Yes, you can try with that.
01-03-2022
03:32 AM
@bdworld2 Take a look at this doc, which can help with using Maven. https://docs.cloudera.com/documentation/enterprise/release-notes/topics/cdh_vd_cdh5_maven_repo.html
01-03-2022
03:09 AM
@muslihuddin You are running into a known Java bug that was found and documented earlier. You have to follow the steps below to overcome this issue.
For JDK 8u241 and higher versions running on Kerberized clusters, you must disable referrals by setting sun.security.krb5.disableReferrals=true.
For example, with OpenJDK 1.8.0u242:
1. Open /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.242.b08-0.el7_7.x86_64/jre/lib/security/java.security with a text editor.
2. Add sun.security.krb5.disableReferrals=true (it can be at the bottom of the file).
3. Add this property on each node that has the impacted JDK version.
4. Restart the applications using the JDK so the change takes effect.
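As a minimal sketch, the edit could be scripted like this from a shell. The SEC_FILE path here is a local stand-in for illustration; point it at your JDK's real java.security (e.g. the path under /usr/lib/jvm/... shown above):

```shell
# Sketch only: replace ./java.security with your JDK's real file, e.g.
# /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.242.b08-0.el7_7.x86_64/jre/lib/security/java.security
SEC_FILE="./java.security"
touch "$SEC_FILE"                      # stand-in file for this illustration
cp "$SEC_FILE" "$SEC_FILE.bak"         # keep a backup before editing
# Append the property only if it is not already present.
grep -q '^sun.security.krb5.disableReferrals=true' "$SEC_FILE" || \
  echo 'sun.security.krb5.disableReferrals=true' >> "$SEC_FILE"
```

Run the same edit on every node with the impacted JDK, then restart the JVM applications.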
For more information, see the KB article. You can find many similar discussions on the Cloudera Community that were resolved earlier. [1] https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/release-guide/topics/cdpdc-java-requirements.html
01-03-2022
02:48 AM
@raniaa You need to allow HBase impersonation in HDFS for HBase Browser to work. Here are the steps: https://docs.cloudera.com/documentation/enterprise/latest/topics/admin_hdfs_proxy_users.html In summary you need to:
1. Enable hbase.thrift.support.proxyuser: CM > HBase > Configuration > Search: "hbase.thrift.support.proxyuser"
2. Allow all groups from all hosts in HDFS: CM > HDFS > “Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml”, with hadoop.proxyuser.hbase.hosts = * and hadoop.proxyuser.hbase.groups = *
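For reference, the safety-valve entries would look roughly like this in core-site.xml form (note that * opens the hbase proxy to all hosts and groups, so you may want to tighten these values in production):

```xml
<property>
  <name>hadoop.proxyuser.hbase.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hbase.groups</name>
  <value>*</value>
</property>
```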
01-03-2022
01:42 AM
@writtenyu You can see the error in the CM server logs. Please attach those here so we can see what’s happening.
01-03-2022
01:18 AM
@RickWang If you can point me to the exact repo on GitHub, I can try to take a look and come up with an answer. Normally, if this is available on public GitHub, you can fork it and modify it as convenient, in my opinion.
01-03-2022
01:09 AM
@Phantom No, CDH4 is discontinued and no longer available on the internet either. If you have an old internal repository set up, you can try to install from it, but installing from the internet is not possible. One way I can think of is to manually place the parcels on the new node, but you might not have the matching agent version, so it could get messy. If you have a paid subscription, I would suggest you upgrade to a supported version, or AT LEAST a publicly available version, and then add the node. Without a subscription, adding a new node is not possible.
12-31-2021
09:39 AM
1 Kudo
@wenzf Cloudera Manager HA is still not available out of the box. You can configure HA for other CDH components, but not for CM itself for now.
12-31-2021
09:28 AM
@ebeb Are you able to kinit from the host with the existing keytab? That’s a valid check to start with, and from there we can see where the issue is.
12-31-2021
09:23 AM
1 Kudo
@sandeep1 Take a look at : https://docs.cloudera.com/cdp-private-cloud-base/7.1.3/runtime-release-notes/topics/rt-runtime-component-versions.html
12-31-2021
09:22 AM
@xpouser You want to take a look at : https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/GracefulDecommission.html
12-27-2021
08:20 AM
@nanc Ideally, none of them is required. However, if needed, a CM server restart alone can take care of the license.
12-27-2021
05:33 AM
@MajidAli Have you tried restarting the CM server and looking at the CM server logs to see the full error? The log can be found under /var/log/cloudera-scm-server/cloudera-scm-server.log
12-27-2021
05:30 AM
@noamsh_88 It hasn’t been tested whether CDH 5.1.1 works with an upgraded log4j, as it is no longer in the support cycle and no testing is going on. You can give it a try, but keep in mind that CDH5 is not impacted by the log4j vulnerability, so you may want to examine first what is desired and then take action.
12-27-2021
05:20 AM
No, @ronys, I think the installation of the worker node is somehow corrupted. This error seems related to the kubeconfig file; you may want to try deleting the node, cleaning it up, and then re-adding it as a worker.
12-27-2021
05:18 AM
@ronys There is no logical difference for CDSW on HDP. You just have to run the same instructions from the command line; only the path changes, to /etc/cdsw/scripts instead of /opt/cloudera/parcels/CDSW/scripts.
12-27-2021
12:37 AM
@ronys From a CDSW function perspective, you need to adjust some configuration for the new IP, and, most importantly, forward and reverse DNS lookups MUST work. Apart from that, you can follow a few simple steps:
1. Stop the CDSW service in Cloudera Manager (CM).
2. Take a backup of the CDSW app dir on each host by running the following command: tar cvzf cdsw.tar.gz /var/lib/cdsw/*
3. Change the IP address of the host(s).
4. Ensure that forward and reverse DNS still work (use the dig or nslookup commands). If not, work with netops to fix it before proceeding.
5. Ensure that /etc/hosts is not used for DNS. If it is, use a DNS server before proceeding.
6. On each host, check for multiple name servers in /etc/resolv.conf. If there are multiple, work with netops to ensure that each name server recognizes the new IP address.
7. Update the CDSW config in CM with the new IP address of the CDSW master host: under CM > CDSW > Configuration, search for "IP" and set MASTER_IP to the correct IP address of the CDSW master host. Save the changes and run a cluster-wide Deploy Client Configuration.
8. Complete a weave reset and run Prepare Node on each host: /opt/cloudera/parcels/CDSW/cni/bin/weave reset then, under CM > CDSW > Instances > Select All Instances > Prepare Node.
9. On every CDSW host (master and workers), clear all iptables restrictions set up by k8s, docker, etc.: iptables-save | awk '/^[*]/ { print $1 } /^:[A-Z]+ [^-]/ { print $1 " ACCEPT" ; } /COMMIT/ { print $0; }' | iptables-restore
10. In CM, start the docker daemon on every CDSW host (do not start the other CDSW roles).
11. Run the weave reset on every CDSW host: /opt/cloudera/parcels/CDSW/cni/bin/weave reset
12. In CM, stop all of the docker daemons.
13. In CM, select the master role > Actions > Prepare Node; for every worker node, select the worker role > Actions > Prepare Node.
14. Start all CDSW services in CM.
12-21-2021
01:38 AM
@ronys The requirements are valid for all worker and master hosts. All Cloudera Data Science Workbench gateway hosts must be part of the same datacenter and use the same network; hosts from different datacenters or networks can result in unreliable performance. A wildcard subdomain such as *.cdsw.company.com must be configured. Wildcard subdomains are used to provide isolation for user-generated content. The wildcard DNS hostname configured for Cloudera Data Science Workbench must be resolvable from both the CDSW cluster and your browser. So you have to make sure the DNS, the wildcard, and the TLS host certificate (if any) are properly configured. For reference, use any of the working hosts.
12-20-2021
10:53 AM
@Satyajit Do you have the same repo, with credentials, on the CM server host as well? I am guessing the CM server has the repo file without credentials. Also, follow this doc to add a new host: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/cm_mc_adding_hosts.html#cmug_topic_7_5_1__section_e5g_h5r_j3b
12-20-2021
10:48 AM
@ronys This seems to be an issue with the TLS setup within CDSW. Make sure the wildcard domain is properly configured, then restart CDSW to see if that makes progress. https://docs.cloudera.com/cdsw/1.9.2/installation/topics/cdsw-set-up-a-wildcard-dns-subdomain.html
12-06-2021
10:53 AM
@hadoopFreak01 One idea is to set up a cron job to clean up older files periodically, but I don’t think there is any built-in feature available to purge these files.
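As a minimal sketch of such a cleanup (the directory path and retention window are placeholders, not anything specific to your setup), a find-based purge that could be put in a cron job:

```shell
# Sketch: delete files older than 7 days from a directory.
# LOG_DIR is a placeholder; point it at the directory you want purged.
LOG_DIR="./old-logs"
mkdir -p "$LOG_DIR"
find "$LOG_DIR" -type f -mtime +7 -delete

# Example crontab entry to run this daily at 02:00:
# 0 2 * * * find /path/to/logs -type f -mtime +7 -delete
```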
12-06-2021
10:48 AM
1 Kudo
@wert_1311 No, downtime is recommended. https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade-cdh/topics/ug_cdh_upgrade_hdfs_finalize.html
12-06-2021
10:44 AM
@mananasaly The deployment JSON can be obtained from this endpoint: http://<Cm-server-host>:7180/api/<apiversion>/cm/deployment There you can look at the name of the parameter. Does that help?
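A hedged sketch of fetching it with curl; the host name, API version, and credentials below are placeholders you would replace with your own:

```shell
# Placeholders: substitute your CM host, API version, and admin credentials.
CM_HOST="cm-server.example.com"
API_VERSION="v41"
URL="http://${CM_HOST}:7180/api/${API_VERSION}/cm/deployment"
echo "$URL"

# Uncomment to actually fetch the deployment JSON from a live CM server:
# curl -s -u admin:admin "$URL" -o deployment.json
```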
12-06-2021
10:34 AM
@Ismail27 The user you are specifying, "admin", does it have root access on the CM host? I believe you are providing the Cloudera Manager UI user, but here you need an SSH user, i.e. root (or one with the same root privileges on the host), to run the CMCA commands; only then can it work.
12-06-2021
10:31 AM
@KyleZ I believe the Grafana option was available in Ambari/HDP-based installations, as documented here: https://docs.cloudera.com/smm/2.0.0/monitoring-kafka-clusters/topics/smm-monitoring-topics.html In a CDP-based install you can hook up Grafana on your own, but it is not built into CDP. The article below can give you some direction in the same context. https://medium.com/cloudera-inc/streaming-data-for-brewery-ops-with-grafana-7aa722f3421
11-16-2021
10:38 AM
@Mik87 Please follow this guide for Trial installation. https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/installation/topics/cdpdc-trial-installation.html
10-01-2021
03:11 AM
@Sam2020 Logically this is a maintenance upgrade, so not much trouble or many extra steps are needed. Your steps look good up to "c"; once you upgrade the Cloudera Manager server to 7.4.4, only a few very simple steps remain, meaning you just activate the parcels for 7.1.7 and that’s it. In layman’s terms, the steps look like this. I am assuming you have the parcels for 7.1.7 available (credentials and repo details). Then follow the steps below:
1. Stop the cluster services.
2. Navigate to CM > Hosts > Parcels > Parcel Repository & Network Settings and include the 7.1.7 parcel in "Remote Parcel Repository URLs".
3. Take a backup of the required service databases following the steps here: https://docs.cloudera.com/cdp-private-cloud/latest/upgrade-cdp/topics/ug_cdh_upgrade_backup.html
4. Download, distribute, and activate the patch parcel.
5. Start the services.
You are all set.