Member since
09-21-2016
26
Posts
4
Kudos Received
3
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2649 | 04-30-2021 11:28 PM |
| | 2437 | 03-01-2021 08:28 PM |
| | 6208 | 06-10-2020 09:10 PM |
02-27-2022
09:59 PM
1 Kudo
Hi Sam2020, just want to add that I encountered this issue as well. I got the exact same error while downloading the parcel from Cloudera Manager, and I'm using a local/internal repository. The permissions on /opt/cloudera/parcel-repo were already cloudera-scm without any modification needed, and the .sha file also does not have a 1 or 256 at the end. What I did was:
1. Remove the old, unused 7.1.7 p0 parcels.
2. Restart the CM server.
3. Re-download 7.1.7 p74 from the CM UI.
*I'm on CDH 6 and planning to upgrade straight to 7.1.7 p74 without going through 7.1.7 p0.
*I did notice that all the other parcels, including 7.1.7 p0, have a .torrent file except this 7.1.7 p74. Not sure if that has anything to do with it.
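Since the error in this thread is a checksum failure, it can help to verify by hand that the downloaded parcel actually matches its .sha file before restarting CM. A minimal sketch of that check, assuming the .sha file contains a SHA-1 hex digest as its first token (the file and directory names below are illustrative):

```python
import hashlib
from pathlib import Path

def parcel_hash_matches(parcel_path, sha_path):
    """Compare a parcel's SHA-1 digest against the value in its .sha file.

    Cloudera Manager only distributes a parcel when its hash matches,
    so a mismatch here usually explains a stuck or failed download.
    """
    expected = Path(sha_path).read_text().split()[0].strip().lower()
    digest = hashlib.sha1()
    with open(parcel_path, "rb") as f:
        # Hash in 1 MiB chunks so large parcels don't need to fit in memory.
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected
```

If this returns False for a parcel under /opt/cloudera/parcel-repo, re-downloading (as in the steps above) is the usual fix.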
01-03-2022
09:00 PM
1 Kudo
Hi @GangWar Thank you very much for your response. I did suspect it had something to do with my Java version, and I have already done what you mentioned to disable referrals by setting sun.security.krb5.disableReferrals=true, but ZooKeeper is still unable to start. On the problematic cluster I'm using OpenJDK 1.8.0u262; I have one more Kerberized cluster that runs fine on OpenJDK 1.8.0u312. So here is what else I tried previously:
1. Downgraded my OpenJDK to match the problematic version, u262.
2. Restarted the cluster a few times.
3. That cluster still works fine with Kerberos, with no need to comment out renew_lifetime.
That is why I ruled out the Java version suspicion. So the only thing so far that makes my ZooKeeper start is commenting out renew_lifetime. This article describes exactly my problem and my workaround, and the author tried the referrals setting as well: https://community.cloudera.com/t5/Community-Articles/How-to-solve-the-Message-stream-modified-41-error-on/ta-p/292986 Do you think there are any other bugs related to this problem? Thank you and regards, Mus
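For anyone following along, the referrals workaround mentioned above is normally applied in Cloudera Manager as an extra JVM option on the ZooKeeper Server role. A sketch, assuming the CM configuration field name below (verify it in your CM version):

```shell
# Cloudera Manager > ZooKeeper > Configuration >
#   "Java Configuration Options for Zookeeper Server"
# Append the following to the existing value, then restart the role:
-Dsun.security.krb5.disableReferrals=true
```

This property exists in the JDK's Kerberos implementation from the 8u241-era referrals change onward; if ZooKeeper still fails with it set, as in this thread, the cause is likely elsewhere (e.g. renew_lifetime handling).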
12-29-2021
03:02 AM
Hi all, my cluster is on the latest CM 7.4.4 and Cloudera Runtime 7.1.7. The cluster worked fine until I enabled Kerberos. Now ZooKeeper won't start, with the error:

Could not configure server because SASL configuration did not allow the Zookeeper server to authenticate itself properly: javax.security.auth.login.LoginException: Message stream modified (41)

I'm able to get ZooKeeper and the other services up if I comment out renew_lifetime = 7d on all the nodes and the Kerberos server, but then only the Hue Kerberos Ticket Renewer has a problem. So what I did was comment out renew_lifetime = 7d on the server that hosts the Kerberos Ticket Renewer role as well. Now my cluster is up, but this does not look like a good workaround, as some of the UIs such as Atlas and Solr have a (tgt renewal) error. Has anyone encountered this?

P/S: I have a working Kerberized cluster with the same version of CDP, and it is working fine. Same exact version, OS version, Java version, and Kerberos version. The only difference is that not all components are installed on this cluster. So weird.
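The workaround described above amounts to editing /etc/krb5.conf on the affected hosts. A sketch of the relevant fragment, with a placeholder realm (only renew_lifetime is the part this thread is about):

```
# /etc/krb5.conf -- fragment only, illustrative values
[libdefaults]
  default_realm = EXAMPLE.COM     # placeholder realm
  ticket_lifetime = 24h
  # renew_lifetime = 7d           # commented out so ZooKeeper can log in;
                                  # note that ticket renewal (Hue's Kerberos
                                  # Ticket Renewer, Atlas/Solr tgt renewal)
                                  # then breaks, as described above
```

Disabling renewable tickets cluster-wide is a blunt workaround, which is why the thread treats it as a symptom of a JDK Kerberos issue rather than a fix.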
Labels:
- Apache Zookeeper
- Kerberos
04-30-2021
11:28 PM
1 Kudo
I've resolved this. It turns out the permissions on the logs folder were messed up; changing the /tmp/logs folder to be owned by mapred solved it.
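For reference, the fix described above would look roughly like this, assuming the default YARN remote application log directory of /tmp/logs on HDFS (check yarn.nodemanager.remote-app-log-dir in your cluster before running anything):

```shell
# Illustrative commands only -- paths, owner, and group are assumptions
# based on this thread; verify against your cluster's configuration.
sudo -u hdfs hdfs dfs -chown -R mapred:hadoop /tmp/logs
sudo -u hdfs hdfs dfs -ls /tmp   # confirm the new ownership
```

The Hue job browser reads aggregated logs through the history server, so a permission mismatch on this directory can make logs invisible in the UI even though `yarn logs -applicationId` still works.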
04-20-2021
02:27 PM
Hi all, I've run a few Oozie workflows through Hue, but once the jobs finish I cannot see the logs anymore. I'm wondering whether this is normal behaviour, or whether I can configure something in YARN to keep the logs available even after a job finishes. yarn.log-aggregation-enable is set to enabled on the cluster, and retrieving the logs with yarn logs -applicationId <Application ID> works fine. I just cannot see them in the Hue web UI after the job finishes. Thanks
03-01-2021
08:28 PM
Thanks @PabitraDas. Alright, done as you suggested, and the cluster is looking good now without the alert. Thanks for the link too.
02-28-2021
02:40 PM
Hi, I've enabled HA successfully, but the directories for my three JournalNodes are not the same across those three nodes. There is an alert about the required JournalNode Default Group box. Is it safe for me to put anything there, given that my JournalNode locations differ? It is only for managing the location of those three JournalNodes, right? Thanks
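The setting behind that box maps to a standard HDFS property, and each JournalNode only reads its own local value, so differing paths per host are fine. A sketch of the hdfs-site.xml equivalent, with an example path (the role-group "Default Group" value in CM is just the default applied to role instances that have no instance-level override):

```
<!-- Illustrative fragment; the actual path should match each host's layout. -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/data1/dfs/jn</value>
</property>
```

In Cloudera Manager, per-host differences are normally handled by overriding the value on the individual JournalNode role instances rather than in the group default.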
11-03-2020
12:41 AM
Hi, I went through my testing again. Unfortunately, I had missed the step where I needed to change/add the parameters on my command line. After changing my TeraGen, TeraSort, and TeraValidate parameters I got much better results: TeraGen 1 min 57 sec, TeraSort 22 min 55 sec, TeraValidate 1 min 23 sec. Thank you very much for your writeup again.
11-01-2020
11:27 PM
Hi @sunile_manjee, thank you very much for this excellent writeup and testing. I recently got the chance to run TeraGen, TeraSort, and TeraValidate on an environment with around 40+ worker nodes and 3 masters. Compared to your results, my TeraGen seems quite bad.

My results: TeraGen 57 min 44 sec, TeraSort 49 min 01 sec, TeraValidate 4 min 16 sec.

My spec:
- Master nodes: 16-core CPU, 384 GB RAM, storage 12 x 2 TB SATA 6Gb 7.2K RPM
- Worker nodes: 24-core CPU, 384 GB RAM, storage 12 x 2 TB NL-SAS 12Gb 7.2K RPM
- Network bandwidth: 20 Gbps

My observation: I believe disk speed is very important here. Since my disks are only SATA/SAS instead of SSD, I'm assuming this is the core reason my TeraGen results are bad even though I have more nodes. Please correct me if this is not the case. But one thing I saw in your BigStep test: the worker nodes use local HDDs, not SSDs like the AWS nodes. Shouldn't that decrease the TeraGen results? Yet it was the fastest, at 11 min 49 sec. I also read your test at https://community.cloudera.com/t5/Community-Articles/More-Hadoop-nodes-faster-IO-and-processing-time/ta-p/247155, where the performance was far better with 5 nodes than with 3. I was hoping my 40+ nodes would top that, but they do not. Any advice is very much appreciated.

P/S: I played with the YARN configuration to try to get a better result. I set map and reduce cores/memory to 8 cores and 64 GB. Do you think this helps? The default setting (1 core and 1 GB RAM) makes my TeraSort crash.
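For readers comparing runs, a typical 1 TB benchmark invocation looks roughly like the following. The jar path and the -D values are assumptions to adjust for your CDH/CDP version and node sizing (10,000,000,000 rows x 100 bytes is about 1 TB):

```shell
# Illustrative commands only; tune maps/reduces/memory to your cluster.
EXAMPLES_JAR=/opt/cloudera/parcels/CDH/jars/hadoop-mapreduce-examples-*.jar

hadoop jar $EXAMPLES_JAR teragen \
  -Dmapreduce.job.maps=400 \
  -Dmapreduce.map.memory.mb=8192 \
  10000000000 /benchmarks/terasort-input

hadoop jar $EXAMPLES_JAR terasort \
  -Dmapreduce.job.reduces=400 \
  -Dmapreduce.reduce.memory.mb=8192 \
  /benchmarks/terasort-input /benchmarks/terasort-output

hadoop jar $EXAMPLES_JAR teravalidate \
  /benchmarks/terasort-output /benchmarks/terasort-validate
```

As the thread notes, leaving the map/reduce task memory at the 1 GB default is usually what makes large TeraSort runs fail, so the -D overrides matter as much as node count.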
10-19-2020
10:37 PM
Hi, I tried to get the CDP documentation in PDF using the PDF button on the online documentation. It's good, but I found that some of the screenshot images become a lot bigger and do not fit within the document (some are perfectly fine). Can this be fixed, or am I asking a bit too much? As an example I used https://docs.cloudera.com/runtime/7.1.1/security-ranger-authorization/security-ranger-authorization.pdf. Thank you very much. (The PDF button I used is on the same page as in the first picture.)
Labels:
- Cloudera