Member since: 09-21-2016
Posts: 26
Kudos Received: 3
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1628 | 04-30-2021 11:28 PM |
|  | 1513 | 03-01-2021 08:28 PM |
|  | 4212 | 06-10-2020 09:10 PM |
02-27-2022
09:59 PM
1 Kudo
Hi Sam2020, just wanted to add that I encountered this issue as well. I got the exact same error while downloading the parcel from Cloudera Manager. I'm using a local/internal repository. The permissions on /opt/cloudera/parcel-repo are already cloudera-scm with no modification needed, and the .sha file also does not have 1 or 256 at the end. What I did was (rough shell sketch below):
1. Remove the old, unused 7.1.7 p0 parcels.
2. Restart the CM server.
3. Redownload 7.1.7 p74 from the CM UI.
* I'm on CDH 6 and planning to upgrade straight to 7.1.7 p74 without going through 7.1.7 p0.
* I did notice that all other parcels, including 7.1.7 p0, have a .torrent file except this 7.1.7 p74. Not sure if that has anything to do with it.
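A minimal sketch of those steps on the CM server host, assuming a systemd-managed Cloudera Manager server; the parcel filename pattern below is hypothetical and will differ per environment:

```
ls /opt/cloudera/parcel-repo/                       # confirm which parcel files are present
sudo rm /opt/cloudera/parcel-repo/CDH-7.1.7-*p0*    # remove the unused 7.1.7 p0 parcel and its .sha/.torrent files (illustrative pattern)
sudo systemctl restart cloudera-scm-server          # restart the CM server
# then re-trigger the 7.1.7 p74 download from the Parcels page in the CM UI
```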
01-03-2022
09:00 PM
Hi @GangWar, thank you very much for your response. I did suspect it had something to do with my Java version, and I had already done what you mentioned to disable referrals, setting sun.security.krb5.disableReferrals=true, but ZooKeeper was still unable to start. On the problematic cluster I'm using OpenJDK 1.8.0_262; I have another kerberized cluster that is running fine on OpenJDK 1.8.0_312. So here are the other things I tried previously:
1. Downgraded my OpenJDK to match the problematic version, u262.
2. Restarted the cluster a few times.
3. The cluster still works fine with Kerberos, with no need to comment out renew_lifetime.
That is why I dropped the Java version suspicion. So the only thing that makes my ZooKeeper start for now is commenting out renew_lifetime. The article below describes exactly the same problem and workaround, and its author tried the referrals setting as well. Do you think there are any other bugs related to this problem? https://community.cloudera.com/t5/Community-Articles/How-to-solve-the-Message-stream-modified-41-error-on/ta-p/292986 Thank you and regards, Mus
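For reference, a rough sketch of the JVM-option form of that referrals setting; the assumption here is that it is appended to the ZooKeeper server's Java options (for example through a Java configuration options safety valve in Cloudera Manager, the exact field name is not verified):

```
# Assumption: appended to the ZooKeeper server Java options
-Dsun.security.krb5.disableReferrals=true
```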
12-29-2021
03:02 AM
Hi all, my cluster is on the latest CM 7.4.4 and Cloudera Runtime 7.1.7. The cluster worked fine until I enabled Kerberos. ZooKeeper won't start, with the error: Could not configure server because SASL configuration did not allow the Zookeeper server to authenticate itself properly: javax.security.auth.login.LoginException: Message stream modified (41). I'm able to get ZooKeeper and the other services up if I comment out renew_lifetime = 7d on all the nodes and on the Kerberos server, but then only the Hue Kerberos Ticket Renewer has a problem. So what I did was comment out renew_lifetime = 7d on the server that hosts the Kerberos Ticket Renewer role, and now my cluster is up. But this does not look like a good workaround, as some of the UIs such as Atlas and Solr are having problems with a tgt renewal error. Has anyone encountered this? P/S: I have a working kerberized cluster with the same version of CDP and it is working fine. Same exact version, OS version, Java version, and Kerberos version; only not all components are installed on this cluster. So weird.
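For context, a rough sketch of the krb5.conf change described above; the realm and lifetimes are placeholders, only the renew_lifetime line is the point:

```
# /etc/krb5.conf (illustrative; EXAMPLE.COM is a placeholder realm)
[libdefaults]
  default_realm = EXAMPLE.COM
  ticket_lifetime = 24h
  # renew_lifetime = 7d    # commented out as the workaround so ZooKeeper can start
```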
Labels:
- Apache Zookeeper
- Kerberos
04-30-2021
11:28 PM
1 Kudo
I've resolved this. It turns out the permissions on the logs folder were messed up. Changing the /tmp/logs folder to be owned by mapred solved it.
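A minimal sketch of that fix, assuming /tmp/logs is the YARN remote application log directory on HDFS; the hadoop group is an assumption, adjust to your environment:

```
# Run as the HDFS superuser; the group name is an assumption
hdfs dfs -chown -R mapred:hadoop /tmp/logs
hdfs dfs -ls /tmp                             # verify the new ownership
```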
04-20-2021
02:27 PM
Hi all, I've run a few Oozie workflows through Hue, but once the jobs finish I cannot see the logs anymore. I'm wondering whether this is normal behaviour or whether I can configure something in YARN to keep the logs available even after a job finishes. yarn.log-aggregation-enable is set to enabled on the cluster. Using the command yarn logs -applicationId <Application ID> works fine; I just cannot see the logs in the Hue web UI after the job has finished. Thanks
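In case it helps others, a quick check of the aggregated log directory, assuming the default remote application log dir of /tmp/logs on HDFS:

```
hdfs dfs -ls /tmp                            # check who owns the aggregated log dir (default /tmp/logs)
yarn logs -applicationId <Application ID>    # CLI retrieval still works when aggregation is enabled
```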
04-07-2021
02:40 AM
Hi @GangWar, thanks for the suggestion. I tried pinging between the nodes and am not seeing any abnormal latency. But it looks like the loading time was solved by changing my Java version; it now loads pretty quickly. By the way, the same thing happened when I ran a simple HDFS command such as hdfs dfs -ls / . It would take quite some time to list the directory, but once the Java version was changed it looks fine now. Do you think this is the real root cause?
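For reference, a simple before/after comparison around the JDK change; nothing cluster-specific is assumed here:

```
time hdfs dfs -ls /    # was slow before the JDK change, quick afterwards
java -version          # confirm which JDK the client is actually picking up
```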
03-29-2021
02:28 AM
Hi, when running spark-shell it takes about 5 minutes to get to the CLI prompt. There are no errors. I tuned my YARN memory but it still takes 5 minutes every time. The cluster is a small setup of about 10 nodes. I didn't encounter this on a smaller cluster (3 nodes) or on a bigger cluster (40 nodes). Anyone have an idea? It would be really appreciated.
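A rough sketch for measuring the startup and seeing the resolved configuration, assuming spark-shell is launched against YARN as above:

```
# Time how long spark-shell takes to initialize and reach the prompt, then quit immediately;
# --verbose prints the parsed arguments and Spark properties during submission
time spark-shell --verbose <<< ':quit'
```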
Labels:
- Apache Spark
- Apache YARN
03-28-2021
07:52 PM
Hi @jake_allston, did you find the culprit? I've got a similar issue: it takes 5 minutes to load spark-shell. It's a new cluster, and this does not happen on my other cluster.
03-01-2021
08:28 PM
Thanks @PabitraDas. Alright, I've done as you suggested and the cluster is looking good now without the alert. Thanks for the link too.
02-28-2021
02:40 PM
Hi, I've enabled HA successfully. The directories for my three JournalNodes are not the same across those three nodes, and there is an alert about the required JournalNode Default Group box. Is it safe for me to put anything there, given that my JournalNode locations are not the same? It is only for managing the location of those three JournalNodes, right? Thanks
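For context, that box corresponds to the JournalNode edits directory, i.e. dfs.journalnode.edits.dir in hdfs-site.xml; a rough sketch with a placeholder path (in Cloudera Manager the value can also be overridden per JournalNode role instance, so the hosts can keep different directories):

```
<property>
  <name>dfs.journalnode.edits.dir</name>
  <!-- placeholder path; each JournalNode host can use a different directory via a per-role override -->
  <value>/data/1/dfs/jn</value>
</property>
```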