Member since: 09-21-2016
Posts: 26
Kudos Received: 3
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 793 | 04-30-2021 11:28 PM |
| | 789 | 03-01-2021 08:28 PM |
| | 2418 | 06-10-2020 09:10 PM |
02-27-2022
09:59 PM
1 Kudo
Hi Sam2020, just want to add that I encountered this issue as well and got the exact same error while downloading the parcel from Cloudera Manager. I'm using a local/internal repository. The permissions on /opt/cloudera/parcel-repo are already cloudera-scm without any modification needed, and the .sha file does not have 1 or 256 at the end. What I did was:
1. Remove the old, unused 7.1.7 p0 parcels.
2. Restart the CM server.
3. Re-download 7.1.7 p74 from the CM UI.
*I'm on CDH 6 and planning to upgrade straight to 7.1.7 p74 without going through 7.1.7 p0.
*I also noticed that all other parcels, including 7.1.7 p0, have a .torrent file except this 7.1.7 p74. Not sure if that has anything to do with it.
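For anyone hitting the same checksum error, here is a minimal sketch of the checks behind steps 1-3 above; the parcel file names are placeholders for whatever actually sits in your local repository:

```bash
# Check ownership and permissions of the local parcel repository
ls -l /opt/cloudera/parcel-repo

# The .sha file should contain only the hash; compare it against the
# parcel's actual SHA-1 (file names below are placeholders)
sha1sum /opt/cloudera/parcel-repo/CDH-7.1.7*.parcel
cat     /opt/cloudera/parcel-repo/CDH-7.1.7*.parcel.sha

# If they differ, remove the stale parcel and .sha, restart CM,
# then re-download the parcel from the Parcels page
sudo systemctl restart cloudera-scm-server
```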
01-03-2022
09:00 PM
Hi @GangWar, thank you very much for your response. I did suspect this had something to do with my Java version, and I've already done what you mentioned to disable referrals by setting sun.security.krb5.disableReferrals=true, but ZooKeeper is still unable to start. On the problematic cluster I'm using OpenJDK 1.8.0u262. I have one more Kerberized cluster that is running fine on OpenJDK 1.8.0u312, so here is what else I tried previously on that working cluster:
1. Downgraded its OpenJDK to match the problematic version, u262.
2. Restarted the cluster a few times.
3. The cluster still worked fine with Kerberos, with no need to comment out renew_lifetime.
That is why I ruled out the Java version. For now, the only thing that makes my ZooKeeper start is commenting out renew_lifetime. This article describes the exact same problem and solution, and the author tried the referrals setting as well. Do you think there are any other bugs related to this problem? https://community.cloudera.com/t5/Community-Articles/How-to-solve-the-Message-stream-modified-41-error-on/ta-p/292986 Thank you and regards, Mus
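For completeness, a minimal sketch of the two common ways the referrals setting can be applied; the java.security path assumes a typical OpenJDK 8 layout and the CM field name is from memory:

```bash
# Option 1: set the security property globally in the JDK used by the cluster
# (path assumes a standard OpenJDK 8 layout; adjust JAVA_HOME as needed)
echo "sun.security.krb5.disableReferrals=true" | \
  sudo tee -a "$JAVA_HOME/jre/lib/security/java.security"

# Option 2: pass it as a JVM flag to the affected role, e.g. add
#   -Dsun.security.krb5.disableReferrals=true
# to the ZooKeeper Server "Java Configuration Options" in Cloudera Manager,
# then restart the role.
```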
12-29-2021
03:02 AM
Hi all, my cluster is on the latest CM 7.4.4 and Cloudera Runtime 7.1.7. The cluster was working fine until I enabled Kerberos. ZooKeeper won't start, with the error: Could not configure server because SASL configuration did not allow the Zookeeper server to authenticate itself properly: javax.security.auth.login.LoginException: Message stream modified (41). I'm able to get ZooKeeper and the other services up if I comment out renew_lifetime = 7d on all the nodes and the Kerberos server, but then only the Hue Kerberos Ticket Renewer has a problem. So what I did was comment out renew_lifetime = 7d on the server that hosts the Kerberos Ticket Renewer role. Now my cluster is up, but this does not look like a good workaround, as some of the UIs such as Atlas and Solr are having problems with a tgt renewal error. Has anyone encountered this? P/S: I have a working Kerberized cluster on the same version of CDP and it is working fine. Same exact version, OS version, Java version, and Kerberos version; the only difference is that not all components are installed on this cluster. So weird.
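For clarity, this is the change I make on each node; a minimal sketch using sed, assuming renew_lifetime sits in the [libdefaults] section of /etc/krb5.conf as it does by default:

```bash
# Comment out renew_lifetime in /etc/krb5.conf on each node
sudo sed -i '/^[[:space:]]*renew_lifetime/ s/^/# /' /etc/krb5.conf

# Verify the change, then restart the affected services from Cloudera Manager
grep renew_lifetime /etc/krb5.conf
```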
Labels:
- Apache Zookeeper
- Kerberos
04-30-2021
11:28 PM
1 Kudo
I've resolved this. It turns out the permissions on the logs folder were messed up. Changing the /tmp/logs folder to be owned by mapred solved it.
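For anyone searching later, a minimal sketch of the fix, assuming /tmp/logs is your YARN remote application log directory on HDFS and that the mapred:hadoop ownership matches your setup:

```bash
# Inspect the current ownership of the aggregated-log directory on HDFS
hdfs dfs -ls -d /tmp/logs

# Restore ownership so log aggregation and the history server can read it
# (the group name here is an assumption; use whatever your cluster expects)
sudo -u hdfs hdfs dfs -chown -R mapred:hadoop /tmp/logs
```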
04-20-2021
02:27 PM
Hi all, I've run a few Oozie workflows through Hue, but once the jobs finish I cannot see the logs anymore. I'm wondering whether this is normal behaviour or whether I can configure something in YARN to keep the logs available even after a job finishes. yarn.log-aggregation-enable is set to enabled on the cluster, and running this command works fine: yarn logs -applicationId <Application ID>. I just cannot see the logs in the Hue web UI after the job has finished. Thanks
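For context, a minimal sketch of the settings and checks involved; the directory, retention value and application ID below are examples, not my real configuration:

```bash
# Relevant yarn-site.xml properties (managed through Cloudera Manager):
#   yarn.log-aggregation-enable          = true
#   yarn.nodemanager.remote-app-log-dir  = /tmp/logs   (example)
#   yarn.log-aggregation.retain-seconds  = 604800      (example: 7 days)

# Confirm aggregated logs actually land in HDFS for a finished job
hdfs dfs -ls /tmp/logs

# Fetch the logs for one application from the command line
yarn logs -applicationId application_1600000000000_0001   # placeholder ID
```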
04-07-2021
02:40 AM
Hi @GangWar, thanks for the suggestion. I tried pinging between the nodes and didn't see any abnormal latency. But it looks like the loading time was solved by changing my Java version; it now loads pretty quickly. By the way, the same thing happened when running a simple HDFS command such as hdfs dfs -ls / — it took quite some time to list the directory, but once the Java version was changed it looks fine. Do you think this is the real root cause?
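For anyone comparing the same way, a small sketch of the before/after checks I ran; nothing here is specific to my cluster:

```bash
# Which JDK the client commands actually resolve to
readlink -f "$(which java)"
java -version

# Rough measure of client start-up overhead before/after the JDK change
time hdfs dfs -ls /
```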
03-29-2021
02:28 AM
Hi, when running spark-shell it takes me about 5 minutes to get to the CLI. There are no errors. I tuned my YARN memory but it still takes 5 minutes every time. The cluster is a small setup of about 10 nodes. I didn't encounter this on a smaller cluster (3 nodes) or on a bigger cluster (40 nodes). Anyone got an idea? It is really appreciated.
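A minimal sketch of how the delay can be measured and narrowed down, assuming spark-shell is launched in the default YARN client mode:

```bash
# Time how long it takes to reach the REPL prompt and exit immediately
time (echo ":quit" | spark-shell)

# Check whether applications sit waiting for YARN containers instead
yarn application -list -appStates ACCEPTED,RUNNING
```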
Labels:
- Apache Spark
- Apache YARN
03-28-2021
07:52 PM
Hi @jake_allston, did you find the culprit? I've got a similar issue: it takes 5 minutes to load spark-shell. It's a new cluster, and this does not happen on my other cluster.
03-01-2021
08:28 PM
Thanks @PabitraDas. Alright, done as you suggested; the cluster is looking good now, without the alert. Thanks for the link too.
02-28-2021
02:40 PM
Hi, I've enabled HA successfully. The directory for the three JournalNodes is not the same across those three nodes. There is an alert about the required JournalNode Default Group box. Is it safe to put anything here, given that my JournalNode locations are not the same? It is only for managing the location of those three JournalNodes, right? Thanks
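For background, the value behind that box maps to dfs.journalnode.edits.dir; a minimal sketch of confirming what each JournalNode host actually uses, with placeholder host names and paths:

```bash
# The "JournalNode Default Group" edits directory corresponds to
# dfs.journalnode.edits.dir; per-host overrides in Cloudera Manager
# let each JournalNode keep its own path.
for host in jn1.example.com jn2.example.com jn3.example.com; do   # placeholders
  echo "== $host =="
  ssh "$host" 'ls -ld /data*/dfs/jn 2>/dev/null'                   # placeholder path
done
```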
11-03-2020
12:41 AM
Hi, I went through my testing again. Unfortunately, I had missed the step where I need to change/add the parameters on my command line. After changing my TeraGen, TeraSort and TeraValidate parameters I got better results: TeraGen 1 min 57 sec, TeraSort 22 min 55 sec, TeraValidate 1 min 23 sec. Thank you very much for your write-up again.
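For anyone rerunning this, a minimal sketch of the command shape I used; the jar path, row count and task counts are placeholders to be tuned per cluster:

```bash
# Examples jar path varies per release; this one is a placeholder
JAR=/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar

# 1 TB = 10,000,000,000 rows of 100 bytes; map/reduce counts are examples
hadoop jar "$JAR" teragen  -Dmapreduce.job.maps=360    10000000000 /benchmarks/teragen
hadoop jar "$JAR" terasort -Dmapreduce.job.reduces=180 /benchmarks/teragen /benchmarks/terasort
hadoop jar "$JAR" teravalidate /benchmarks/terasort /benchmarks/teravalidate
```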
11-01-2020
11:27 PM
Hi @sunile_manjee, thank you very much for this excellent write-up and testing. I recently got the chance to run TeraGen, TeraSort and TeraValidate in an environment with around 40+ worker nodes and 3 masters. Compared to your results, my TeraGen seems quite bad.

My results: TeraGen 57 min 44 sec, TeraSort 49 min 01 sec, TeraValidate 4 min 16 sec.

My spec: master node: 16-core CPU, 384 GB RAM, storage 12x2TB SATA 6Gb 7.2K RPM. Worker node: 24-core CPU, 384 GB RAM, storage 12x2TB NL-SAS 12Gb 7.2K RPM. Network bandwidth: 20 Gbps.

My observation: I believe the speed of the disks is very important here. Since my disks are SATA/SAS rather than SSD, I'm assuming this is the core reason why my TeraGen result is bad even though I have more nodes. Please correct me if this is not the case. One thing I noticed from your BigStep test, though, is that the worker nodes use local HDDs, not SSDs like the AWS nodes. Shouldn't that lower the TeraGen result? Yet it was the fastest, at 11 min 49 sec. I also read your test here: https://community.cloudera.com/t5/Community-Articles/More-Hadoop-nodes-faster-IO-and-processing-time/ta-p/247155. The performance is much better with 5 nodes than with 3 nodes. I was hoping my 40+ nodes would top this, but they do not. Any advice is very much appreciated.

P/S: I played with the YARN configuration to try to get a better result. I set the map and reduce cores and memory to 8 cores and 64 GB. Do you think this helps? The default setting (1 core and 1 GB RAM) makes my TeraSort crash.
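For reference, a minimal sketch of the MapReduce sizing I was playing with; the values are from my test, and the jar path and output directories are placeholders:

```bash
JAR=/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar  # placeholder

# Cluster-wide defaults I changed (via Cloudera Manager):
#   mapreduce.map.memory.mb     = 65536   # 64 GB per map task
#   mapreduce.reduce.memory.mb  = 65536   # 64 GB per reduce task
#   mapreduce.map.cpu.vcores    = 8
#   mapreduce.reduce.cpu.vcores = 8

# The same settings can also be passed per job:
hadoop jar "$JAR" terasort \
  -Dmapreduce.map.memory.mb=65536 -Dmapreduce.reduce.memory.mb=65536 \
  -Dmapreduce.map.cpu.vcores=8 -Dmapreduce.reduce.cpu.vcores=8 \
  /benchmarks/teragen /benchmarks/terasort
```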
10-19-2020
10:37 PM
Hi, I tried to get the CDP documentation as a PDF using the PDF button on the online docs. It's good, but I found that some of the screenshot images become a lot bigger and don't fit within the document (some are perfectly fine). Can this be repaired, or am I asking a bit too much? As an example I used https://docs.cloudera.com/runtime/7.1.1/security-ranger-authorization/security-ranger-authorization.pdf. Thank you very much. The PDF button that I used is on the same page as the first picture.
Tags:
- CDP
- documentation
Labels:
- Cloudera Data Platform (CDP)
09-28-2020
09:54 PM
Alright, thanks man.
09-28-2020
02:30 AM
Thanks @GangWar. That means if my CM database resides outside the CM server itself and is working fine, I can skip the part about restoring the old CM DB, right? I just need to install a new CM and point it at the existing CM DB using the installation wizard, correct? As for the old monitoring data, I believe it might still be available, since my CM management services data lives on another server. Isn't that the case? Thanks
09-27-2020
10:18 PM
Thanks. I just installed it manually on the cluster and it works fine for now. Looking forward to a future supported version.
09-27-2020
10:12 PM
1 Kudo
I think you can just run the Linux command anytime. Your command only becomes an HDFS command once you prefix it with hdfs dfs.
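A quick illustration of the difference:

```bash
# Plain Linux command: lists the local filesystem root
ls /

# HDFS command: the same kind of listing, but against the HDFS namespace
hdfs dfs -ls /
```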
09-27-2020
10:04 PM
Hi everyone, let's say my existing Cloudera Manager is managing a 10-node cluster. What happens if my Cloudera Manager server goes down? (My Cloudera Management Service is on a different server and I'm using an external database for the setup.) Can I immediately set up a new server, reinstall Cloudera Manager on it, and manage the existing cluster without too much effort on configuration and setup? I'm worried about the TLS settings, Kerberos settings, etc. Can they be preserved so everything works the way it did before Cloudera Manager was replaced? I believe the data in HDFS should be safe, right? Thanks
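Since the cluster configuration that Cloudera Manager knows about lives in that external database, here is a minimal sketch of how a rebuilt CM host can be pointed back at it; the database type, host and credentials below are placeholders:

```bash
# On the replacement host, after installing the cloudera-scm-server package,
# point it at the existing external database (all values are placeholders)
sudo tee /etc/cloudera-scm-server/db.properties <<'EOF'
com.cloudera.cmf.db.type=postgresql
com.cloudera.cmf.db.host=db.example.com
com.cloudera.cmf.db.name=scm
com.cloudera.cmf.db.user=scm
com.cloudera.cmf.db.password=changeme
EOF

sudo systemctl start cloudera-scm-server
```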
07-13-2020
09:49 PM
Hi everyone,
How do you install/manage Druid through Cloudera Manager? I've been trying to find a parcel or CSD file for installation without any luck.
Is this option coming soon, or do you just install Druid directly on the servers? I understand that Druid was one of the components that could be installed through Ambari previously. Is this going to be the same on Cloudera Data Platform or Cloudera CDH version 5 or 6?
Thank you very much.
06-10-2020
09:14 PM
@meridee yeah, this is what I meant, thanks. I actually just got confirmation on this from another Cloudera representative, and he is saying the same thing as @Bender: Cloudera Runtime 7.1.1 will be hidden during the Cloudera Manager 7.1.1 installation unless you have the license. That's why I'm only seeing legacy CDH versions in my trial installation. So sad. Thanks anyway, @Bender.
06-10-2020
09:10 PM
Hi @Bender, thank you very much for your reply. But I tried CDP 7.0.3 and I can select the 7.0.3 runtime for the deployment, not the legacy CDH shown in my first picture. "Upgrades from CDH to higher versions of CDH" explains that "Upgrades to Cloudera Manager 6.3.3 or higher now require a Cloudera Enterprise license file and a username and password." So does this mean that if I want to try version 7.1.1, I would not get to try Runtime 7.1.1 without the Enterprise license? *I'm not upgrading from an existing cluster; I'm currently just trying to deploy the trial version of Cloudera Data Platform 7.1.1 to test all the functions.
06-05-2020
01:02 PM
Hi,
I'm currently deploying the CDP-DC trial version on my dev environment using the latest Cloudera Manager 7.1.1.
But at Add Cluster - Installation --> 4. Select Repository, why are the only runtime versions I can select CDH 6.3.2 and CDH 5.16? Shouldn't I be able to select Cloudera Runtime 7.1.1?
[Screenshot: Cloudera Manager 7.1.1 offers only CDH 6.3 and 5.16]
I've tried the 7.0.3 installation, and there I do get the option for Cloudera Runtime 7.0.3.
[Screenshot: Cloudera Manager 7.0.3 offers Runtime 7.0.3]
It shouldn't be like this, right?
Thanks
04-17-2020
08:10 AM
Hi @StevenOD, thank you very much for the info. I've actually set up CDP-DC at my office to try it out. It is a very small-scale setup for now, with only one base cluster. I'm trying to build a project similar to the use case shown in the very recent Cloudera Data Flow webinar: ingesting/streaming IoT sensor data into a data warehouse using NiFi and Kafka. The problem is that in my CDP-DC environment there is no option to create a cluster from templates like those available in CDP Public Cloud, such as the Streaming Messaging and Flow Management templates, which natively include components like NiFi. For now only the Data Engineering, Data Mart, Operational Database and custom cluster templates are available. So is it safe to say that all the cluster templates and components available in the public cloud will be available in CDP Private Cloud soon? Or should I proceed with my CDP-DC and install NiFi through a CSD to continue with my project? I'd really appreciate it if you could point me in the right direction. Thanks again.
04-16-2020
10:28 PM
Is this Cloudera Management Console available to be installed on premises? I would really like to try it. I did some reading, and from what I understand it is currently only available in the Public Cloud, right? Is my understanding correct? Thanks