Member since: 10-01-2018
Posts: 802
Kudos Received: 139
Solutions: 128

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 408 | 04-15-2022 09:39 AM |
| | 290 | 03-16-2022 06:22 AM |
| | 306 | 03-02-2022 09:44 PM |
| | 222 | 03-02-2022 08:40 PM |
| | 303 | 01-05-2022 07:01 AM |
01-05-2022
06:08 PM
So the sentence you mentioned ("Cloudera Manager HA is still not available out of the box") means that Cloudera itself does not fully support Cloudera Manager HA and you need to use external tools such as load balancers and NFS mounts? Am I right? Please take a look again...
01-05-2022
05:57 PM
Thank you very much for your answer, but another question: I need clear evidence to convince customers that HDFS is necessary when adding the Apache Sentry service. Can you help find the relevant documents here? Sorry for the inconvenience.
01-05-2022
03:36 AM
@muslihuddin No, I didn't find any other bug. Not sure why, in your case, modifying the java.security file alone didn't work. The solution you have at the moment is also fine in my opinion; no harm in that.
01-03-2022
03:32 AM
@bdworld2 Take a look at this doc, which can help with using Maven: https://docs.cloudera.com/documentation/enterprise/release-notes/topics/cdh_vd_cdh5_maven_repo.html
01-03-2022
01:42 AM
@writtenyu You can see the error in the CM server logs. Please attach those here so we can see what's happening.
01-03-2022
01:18 AM
@RickWang If you can point to the exact repo on GitHub, I can try to take a look and come back with an answer. Normally, if it is available on public Git, you can fork it and modify it as needed, in my opinion.
01-03-2022
01:09 AM
@Phantom No, CDH 4 is discontinued and no longer available on the internet either. If you have an old internal repository set up, you can try to install from that, but installing from the internet is not possible. One way I can think of is to manually place the parcels on the new node, but you might not have the matching agent version, so it will be a mess. If you have a paid subscription, I would suggest you upgrade to a supported version, or AT LEAST a publicly available version, and then add the node. Without a subscription, adding a new node is not possible.
01-02-2022
08:15 PM
1 Kudo
@noamsh_88, to recap:
You started out the thread saying that you are "using Cloudera V5.1.1 with log4j v1.2.17" and asked how you could upgrade to the latest version of log4j on CDH V5.1.1.
@GangWar replied that CDH 5.x is not and would not be tested with a later version of log4j, as CDH 5.x has reached End of Support (open that link and then expand the section labeled "Cloudera Enterprise products" underneath Current End of Support (EoS) Dates) and so if you tried it, you would be on your own.
He also wrote that CDH 5 was not impacted by the log4j vulnerability described in log4j2 CVE-2021-44228.
You replied on 2 Jan that you ran the "patch for log4j provided at https://github.com/cloudera/cloudera-scripts-for-log4j" and asked:
how can we verify our env is out from log4j risk?
is there some java classes we should verify inside?
The very first sentence of the README.md file that renders in the browser automatically when one visits the URL you shared earlier for the cloudera-scripts-for-log4j reads:
This repo contains scripts and helper tools to mitigate the critical log4j vulnerability CVE-2021-44228 for Cloudera products affecting all versions of log4j between 2.0 and 2.14.1.
Emphasis added.
As @GangWar indicated, your environment, based on CDH 5.x, should not have had a version of log4j between 2.0 and 2.14.1 installed, and therefore should not have been vulnerable to the log4j vulnerability described in log4j2 CVE-2021-44228. This is because, as you yourself pointed out in your original post on 23 Dec, you only had log4j v1.2.17 installed in your environment. log4j v1.2.17 is not a version of log4j between 2.0 and 2.14.1.
This also explains why, after you ran the script intended for systems using log4j versions between 2.0 and 2.14.1 on a system using log4j v1.2.17, the log4j V1 jars were not removed.
But since you ran the script for log4j provided at https://github.com/cloudera/cloudera-scripts-for-log4j anyway and presumably still have it handy, you could check manually for log4j .jar files in your environment in a similar manner to the script and verify for yourself that none of those files still have the JndiLookup.class present, thereby confirming your environment is not at risk from the log4j vulnerability described in log4j2 CVE-2021-44228 (this information is also in the same README.md file on GitHub from which the script you ran is distributed).
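As a rough illustration of that manual check, the sketch below lists any jars under a given directory that still contain JndiLookup.class. The /opt/cloudera default is an assumption; point it at wherever your parcels and service jars actually live.

```bash
#!/bin/bash
# Sketch only: report jars under SCAN_ROOT that still contain JndiLookup.class.
# /opt/cloudera is an assumed default parcel location; pass your own root as $1.
SCAN_ROOT="${1:-/opt/cloudera}"

find "$SCAN_ROOT" -name '*.jar' 2>/dev/null | while read -r jar; do
  # unzip -l lists archive contents without extracting anything
  if unzip -l "$jar" 2>/dev/null | grep -q 'JndiLookup.class'; then
    echo "JndiLookup.class still present in: $jar"
  fi
done
```

This only inspects top-level .jar files, so treat it as a quick spot check rather than a replacement for the script itself.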
01-01-2022
05:14 PM
Hello @GangWar, yes, I can do kinit -kt for all the user IDs, including yarn, livy, and my own user ID, from the same server.
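For reference, a per-user check like the one described above might look roughly like this; the keytab path and principal are placeholders, not values from this environment.

```bash
# Placeholder keytab path and principal; substitute the real ones for each user ID
kinit -kt /path/to/yarn.keytab yarn/$(hostname -f)@EXAMPLE.COM
klist   # confirm a valid ticket was obtained
```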
12-31-2021
09:23 AM
1 Kudo
@sandeep1 Take a look at : https://docs.cloudera.com/cdp-private-cloud-base/7.1.3/runtime-release-notes/topics/rt-runtime-component-versions.html
12-27-2021
08:20 AM
@nanc Ideally, none of them is required. However, if needed, a CM server restart alone can take care of the license.
12-27-2021
05:33 AM
@MajidAli Have you tried restarting the CM server and looking at the CM server logs to see the full error? The log can be found under /var/log/cloudera-scm-server/cloudera-scm-server.log
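If it helps, a generic way to watch that log for errors while restarting (the grep pattern is just a starting point):

```bash
# Follow the CM server log and surface errors/exceptions during the restart
tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log | grep -iE 'error|exception'
```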
12-06-2021
10:53 AM
@hadoopFreak01 One idea is to set up a cron job to clean up older files periodically, but I don't think there is any built-in feature available to purge these files.
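A minimal sketch of such a cron job, assuming a hypothetical /path/to/old/files directory and that anything older than 7 days can be deleted:

```bash
# Run daily at 02:00; delete regular files older than 7 days under the assumed directory
0 2 * * * find /path/to/old/files -type f -mtime +7 -delete
```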
12-06-2021
10:48 AM
1 Kudo
@wert_1311 No, downtime is recommended: https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade-cdh/topics/ug_cdh_upgrade_hdfs_finalize.html
12-06-2021
10:44 AM
@mananasaly The deployment JSON can be obtained from this endpoint: http://<Cm-server-host>:7180/api/<apiversion>/cm/deployment; you can look up the name of the parameter there. Does this help?
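As an illustration, fetching that endpoint with curl could look like the sketch below; the host, API version, and credentials are placeholders to replace with your own.

```bash
# Placeholders: set these to match your deployment
CM_HOST="cm-server.example.com"
API_VERSION="v41"   # assumed; GET http://<Cm-server-host>:7180/api/version reports the supported version
curl -s -u admin:admin "http://${CM_HOST}:7180/api/${API_VERSION}/cm/deployment" -o deployment.json
```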
12-06-2021
10:34 AM
@Ismail27 The user you are specifying, "admin": does it have root access on the CM host? I believe you are providing the Cloudera Manager UI user, but here you need an SSH user, i.e. root (or a user with equivalent root privileges on the host), to run the CMCA commands; only then can it work.
12-06-2021
10:31 AM
@KyleZ I believe the Grafana option was available in Ambari/HDP-based installations, and the same has been documented here: https://docs.cloudera.com/smm/2.0.0/monitoring-kafka-clusters/topics/smm-monitoring-topics.html In a CDP-based install you can hook up Grafana yourself, but it is not built into CDP. The article below can give you some direction in the same context: https://medium.com/cloudera-inc/streaming-data-for-brewery-ops-with-grafana-7aa722f3421
11-25-2021
12:48 AM
From my experience, for the image available in the Docker repository cloudera/quickstart, the version under the tag latest is CDH 5.7. You will need to build it yourself, or try searching Docker Hub to see if someone else has shared their version.
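For reference, pulling and running that image has typically looked like the sketch below; treat the run flags as a commonly used starting point rather than the exact supported invocation.

```bash
# The "latest" tag maps to CDH 5.7, per the post above
docker pull cloudera/quickstart:latest
# Commonly used invocation for this image; adjust ports and resources as needed
docker run --hostname=quickstart.cloudera --privileged=true -t -i \
  -p 8888:8888 cloudera/quickstart:latest /usr/bin/docker-quickstart
```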
11-22-2021
04:08 AM
Thanks for your suggestion. I tried using a double slash, but it did not work for me.
11-16-2021
10:38 AM
@Mik87 Please follow this guide for Trial installation. https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/installation/topics/cdpdc-trial-installation.html
11-16-2021
07:09 AM
@GangWar this didn't solve the issue for me. Please provide an alternative solution.
11-10-2021
05:38 AM
Yes, I will start a new thread for this issue. Just an update though... the issue was due to the Windows 10 Home version. Upgrading to Pro should let me disable the WSL option.
10-13-2021
05:23 AM
Hi, can I use this machine for my master's thesis work? I need a license and I don't know if this source is legal.
- Tags: licensing
10-11-2021
11:31 PM
1 Kudo
@GangWar Can you describe this in more detail? I don't understand this step: "There may be a chance that the property "tez.history.logging.proto-base-dir" is pointing to the wrong HDFS path location, so check that once and correct it if needed." I have the same issue as this topic and I did all the steps provided, but HiveServer2 is still down.
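One way to sanity-check that step is to confirm the configured path actually exists in HDFS; in the sketch below, PROTO_BASE_DIR is a placeholder for whatever value tez.history.logging.proto-base-dir holds in your configuration.

```bash
# PROTO_BASE_DIR is a placeholder: copy the real value from tez.history.logging.proto-base-dir
PROTO_BASE_DIR="hdfs:///path/from/your/config"
hdfs dfs -test -d "$PROTO_BASE_DIR" && echo "path exists" || echo "path missing"
hdfs dfs -ls "$PROTO_BASE_DIR"   # also check ownership and permissions
```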
10-03-2021
10:38 PM
@reca, Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
10-01-2021
03:11 AM
@Sam2020 Logically this is a maintenance upgrade, so not much trouble or many extra steps are needed. Your steps look good up to "c"; once you upgrade the Cloudera Manager server to 7.4.4, only a very simple set of steps remains, basically just activating the 7.1.7 parcels. In layman's terms the steps look like this. I am assuming you have the parcels for 7.1.7 available (credentials and repo details). Then follow the steps below:
1.) Stop the cluster services.
2.) Navigate to CM -> Hosts -> Parcels -> Parcel Repository & Network Settings and include the 7.1.7 parcel in "Remote Parcel Repository URLs".
3.) Take a backup of the required service databases following the steps here: https://docs.cloudera.com/cdp-private-cloud/latest/upgrade-cdp/topics/ug_cdh_upgrade_backup.html (a backup command sketch follows below).
4.) Download, distribute, and activate the patch parcel.
5.) Start the services.
You are all set.
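For step 3, a backup of the Cloudera Manager database might look roughly like this, assuming a MySQL/MariaDB backend with the default database name scm; follow the linked backup guide for the authoritative procedure and for the other service databases.

```bash
# Assumed MySQL/MariaDB backend and default "scm" database name; adjust user/host/db as needed
mysqldump -u root -p --databases scm > scm_backup_$(date +%F).sql
```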
09-28-2021
02:52 AM
Hi @GangWar, thanks for your suggestion. I've tried your suggestion before, and the same 'spark content' error still shows up. Any other suggestions? Thanks and regards, MRM
09-25-2021
05:53 AM
1 Kudo
@GangWar Hello, this problem has been solved. The cause was that the Atlas service had been deleted, but dependencies on it in other service configurations had not been cleared. This can also be seen from the log: ERROR Staleness-Detector-1: com.cloudera.cmf.service.config.components.ProcessStalenessDetector: Failed to check staleness for service client configs : DbService{id=75,name=atlas} After adding the deleted Atlas service back, I succeeded in generating the missing Kerberos credentials again. In short, thank you very much for your reply, and I wish you a happy life!