Member since: 07-18-2016
Posts: 26
Kudos Received: 4
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5561 | 10-23-2017 06:11 AM |
| | 3622 | 03-09-2017 09:32 AM |
| | 18917 | 02-06-2017 04:09 AM |
05-14-2020
07:56 PM
Yes, it's a Java problem. I exported JAVA_HOME again to solve it: export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
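For the record, a minimal sketch of the fix (the path is the Debian/Ubuntu OpenJDK 8 default; adjust it to wherever your JDK lives):

```shell
# point JAVA_HOME at the JDK and put its bin directory first on PATH
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH="$JAVA_HOME/bin:$PATH"
# add the two lines above to ~/.bashrc (or the service's env script)
# so the setting survives a new shell
echo "JAVA_HOME=$JAVA_HOME"
```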
12-07-2019
08:26 AM
After adding the JournalNode service, stop the service and verify that the directory configured under dfs.journalnode.edits.dir exists; if it doesn't, create the full path. Then copy the cluster's journal folder over from a healthy JournalNode, for example:

scp -r journal2:/data/hadoop/hdfs/journal/clustername /data/hadoop/hdfs/journal/
chown -R hdfs:hadoop /data/hadoop/hdfs/journal/clustername

Start the JournalNode and watch its log for the sync-of-transaction-range activity; after that, edits are applied as usual.
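Putting the steps above together as a dry-run sketch (it only prints the commands; the host journal2, the path, and clustername are example values, so substitute your own dfs.journalnode.edits.dir and nameservice, and remove the echos to actually run it on the broken JournalNode host):

```shell
# example values, not universal defaults
EDITS_DIR="${EDITS_DIR:-/data/hadoop/hdfs/journal}"  # dfs.journalnode.edits.dir
CLUSTER="${CLUSTER:-clustername}"                    # your nameservice id
SRC_JN="${SRC_JN:-journal2}"                         # a healthy JournalNode host

# dry run: print the recovery commands in order
echo "mkdir -p $EDITS_DIR"
echo "scp -r $SRC_JN:$EDITS_DIR/$CLUSTER $EDITS_DIR/"
echo "chown -R hdfs:hadoop $EDITS_DIR/$CLUSTER"
# then start the JournalNode and watch its log for the transaction-range sync
```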
05-31-2018
01:44 AM
Please let us know how you resolved it. We are facing the same exception in our ZooKeeper.
12-25-2017
12:17 AM
Hi @SandyCT, well, this system is broken a bit more than I expected, since the ownership of groups is also damaged. What did you run, exactly? If I had to guess, a recursive chmod on / or /etc?

Before you try the last-resort option, try switching to a console (Ctrl+Alt+F1 on a normal PC; not sure about the VM) and logging in as root with the password "cloudera".

If that does not work, for whatever reason, here is a way to reboot CentOS 6 in "safe mode" (I suggest you make a backup of the whole VM file/directory first): https://lintut.com/reset-forgotten-root-password-in-centos/ If that still fails (I cannot test now, since I don't have my VM around), replace " 1 " in the tutorial with "rw init=/bin/bash".

Either way, this will grant you root, but fixing your VM might take a while. For example, your sudo binary should be "---s--x--x" or something to that effect, /etc/sudoers "-r--r-----", and /etc/group "-rw-r--r--". Have fun & good luck! 🙂
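As a rough sketch, the symbolic modes above map to these octal chmod values. It is rehearsed here on a scratch directory tree so it is safe to run anywhere; on the real VM you would apply the same chmods (as root) to the real paths, and also chown root:root all three files:

```shell
# rehearsal tree (on the real VM: skip this and target the real paths)
ROOTFS=$(mktemp -d)
mkdir -p "$ROOTFS/usr/bin" "$ROOTFS/etc"
touch "$ROOTFS/usr/bin/sudo" "$ROOTFS/etc/sudoers" "$ROOTFS/etc/group"

chmod 4111 "$ROOTFS/usr/bin/sudo"   # ---s--x--x : setuid root, execute-only
chmod 0440 "$ROOTFS/etc/sudoers"    # -r--r-----
chmod 0644 "$ROOTFS/etc/group"      # -rw-r--r--

# show the resulting symbolic modes
stat -c '%A %n' "$ROOTFS/usr/bin/sudo" "$ROOTFS/etc/sudoers" "$ROOTFS/etc/group"
```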
11-09-2017
07:37 AM
I've learned by painful experience that this can be caused by having more than one Oracle Java installed, or, even worse, any version of OpenJDK alongside it. Make sure you have added the CA you used for SSL to the keystore of the Java version the service actually runs under (you can find that out from the process list). Also make sure the keytool you are using belongs to that same Java version, so it's best to have only one version installed, or, if that is unavoidable, to invoke keytool by its full path. Hope it helps.
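A small sketch of the "full path to keytool" point (the JVM path is an assumption; take the real one from the java binary you see in your process list):

```shell
# the JVM your service actually runs under (take this from the process list)
JAVA_HOME="${JAVA_HOME:-/usr/lib/jvm/java-8-openjdk-amd64}"
KEYTOOL="$JAVA_HOME/bin/keytool"               # full path, never a bare "keytool"
CACERTS="$JAVA_HOME/jre/lib/security/cacerts"  # that JVM's default truststore
echo "using: $KEYTOOL"
# import your CA into THAT jvm's keystore (run where ca.crt exists;
# alias and cert file name are placeholders):
#   "$KEYTOOL" -importcert -alias my-ca -file ca.crt \
#       -keystore "$CACERTS" -storepass changeit
```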
10-25-2017
01:29 AM
It does help! Thank you for the information 🙂 Have a nice day 🙂 Anna
03-09-2017
09:32 AM
2 Kudos
Hi @aawasthi, I know it has been a while since you asked this question, but I ran into a similar issue, and it can be caused by many things. In your case I would first check whether there is a firewall between the two datanodes (you can test with telnet), and if there isn't, check the number of *_WAIT connections on the source datanodes. I found that some of the replicas of what I was trying to copy were placed on a datanode which was technically working but had a lot of connections stuck in the CLOSE_WAIT state, which were exhausting the overall connection limit. Feel free to take a look at the answer below:
https://community.hortonworks.com/questions/38822/hdfs-exception.html#answer-38817
and the one below that, if you need more details.
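For example, a quick way to check for the CLOSE_WAIT pile-up on the source datanode (ss comes from iproute2; the telnet target is a placeholder, and 50010 is the classic default for dfs.datanode.address, so adjust to your config):

```shell
# count TCP sockets stuck in CLOSE_WAIT on this host
close_wait=$(ss -tan state close-wait 2>/dev/null | tail -n +2 | wc -l)
echo "CLOSE_WAIT sockets: $close_wait"

# and from the other datanode, check the transfer port is reachable:
#   telnet <source-datanode> 50010
```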
I hope it helps,
camypaj