Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
Views | Posted |
---|---|
2445 | 04-27-2020 03:48 AM |
4880 | 04-26-2020 06:18 PM |
3976 | 04-26-2020 06:05 PM |
3219 | 04-13-2020 08:53 PM |
4924 | 03-31-2020 02:10 AM |
02-18-2017
04:07 AM
@Aruna Sameera
Try using "stop-all.sh" first to stop all processes, and keep checking the logs (especially the DataNode .log/.out files). Then try starting them again using "start-all.sh". If the DataNode still does not start, please share the DataNode's .out file. You can also start the daemons individually with the following commands:
hadoop-daemon.sh start namenode
hadoop-daemon.sh start datanode
02-18-2017
03:25 AM
@Aruna Sameera In order to start/stop the HDFS daemons separately, you can try using the "sbin/start-dfs.sh" and "sbin/stop-dfs.sh" scripts. If the DataNode is still not starting, then check the DataNode log.
02-18-2017
02:48 AM
1 Kudo
@Aruna Sameera The "hadoop namenode -format" command formats the specified NameNode: it starts the NameNode, formats it, and then shuts it down.

dfs.namenode.name.dir: (default: file://${hadoop.tmp.dir}/dfs/name) Determines where on the local filesystem the DFS name node should store the name table (fsimage). If this is a comma-delimited list of directories, then the name table is replicated in all of the directories, for redundancy.

dfs.datanode.data.dir: (default: file://${hadoop.tmp.dir}/dfs/data) Determines where on the local filesystem a DFS data node should store its blocks.

hadoop.tmp.dir: (default: /tmp/hadoop-${user.name}) A base for other temporary directories.

Please check the values of the above parameters defined in your core-site.xml / hdfs-site.xml.
Ref: https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/core-default.xml
The default values are defined as shown in: https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml#L332-L339

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file://${hadoop.tmp.dir}/dfs/name</value>
  <description>Determines where on the local filesystem the DFS name node
  should store the name table(fsimage). If this is a comma-delimited list
  of directories then the name table is replicated in all of the
  directories, for redundancy. </description>
</property>
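As an illustration of how those defaults resolve, the `${...}` placeholders can be expanded recursively: `file://${hadoop.tmp.dir}/dfs/name` first pulls in `hadoop.tmp.dir`, which itself contains `${user.name}`. The following is a simplified sketch of that substitution (not Hadoop's actual resolver; the user name "aruna" is assumed for the example):

```python
import re

def expand(value, props):
    """Repeatedly substitute ${name} placeholders using the props dict.
    A simplified sketch of Hadoop's configuration variable expansion."""
    pattern = re.compile(r"\$\{([^}]+)\}")
    while True:
        m = pattern.search(value)
        if m is None:
            return value
        value = value[:m.start()] + props.get(m.group(1), "") + value[m.end():]

defaults = {
    "hadoop.tmp.dir": "/tmp/hadoop-${user.name}",  # core-default.xml value
    "user.name": "aruna",                          # assumed logged-in user
}
print(expand("file://${hadoop.tmp.dir}/dfs/name", defaults))
# -> file:///tmp/hadoop-aruna/dfs/name
```

So with everything left at the defaults, the fsimage ends up under /tmp, which is why explicitly setting these directories matters.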
02-17-2017
12:02 PM
@Aruna Sameera
You do not need to copy ssh keys from the "hadoop" user to "aruna". Which user do you want to use to start Hadoop, "aruna" or "hadoop"? When you run the "ssh-keygen" command, by default it puts the keys in the "/home/<username>/.ssh" directory. Also note that "$HOME/aruna/.ssh/authorized_keys" means "/home/aruna/aruna/.ssh/authorized_keys", because $HOME itself is "/home/<username>", where <username> is the logged-in user who is running the command.
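To see why "$HOME/aruna/..." produces a doubled path component, here is a small illustration using Python's shell-style variable expansion (the home directory /home/aruna is assumed for the example):

```python
import os

# Assume the logged-in user's home directory is /home/aruna (hypothetical).
os.environ["HOME"] = "/home/aruna"

# "$HOME/aruna/.ssh/..." is NOT the same path as "$HOME/.ssh/...":
print(os.path.expandvars("$HOME/aruna/.ssh/authorized_keys"))
# -> /home/aruna/aruna/.ssh/authorized_keys  (note the doubled "aruna")
print(os.path.expandvars("$HOME/.ssh/authorized_keys"))
# -> /home/aruna/.ssh/authorized_keys        (the default ssh-keygen location)
```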
02-17-2017
11:26 AM
1 Kudo
@Aruna Sameera
First you will have to generate the ssh keys. Example (accept all the defaults by pressing Enter at each "ssh-keygen" prompt, i.e. use an empty passphrase):
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/jay/.ssh/id_rsa):
Created directory '/home/jay/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/jay/.ssh/id_rsa.
Your public key has been saved in /home/jay/.ssh/id_rsa.pub.
The key fingerprint is:
a1:70:20:75:52:aa:c6:0c:66:1e:c5:e6:61:11:91:43 jay@erie1.example.com
The key's randomart image is:
+--[ RSA 2048]----+
| ..o ...+E+ |
| . o=.o = |
| .+.A.o o |
| o..*.o |
| . Ao |
| |
| |
| |
| |
+-----------------+
The easiest approach will be to use the "ssh-copy-id" command. Example:
ssh-copy-id -i ~/.ssh/id_rsa.pub localhost
ssh-copy-id -i ~/.ssh/id_rsa.pub kerbambari1.example.com
ssh-copy-id -i ~/.ssh/id_rsa.pub kerbambari2.example.com
ssh-copy-id -i ~/.ssh/id_rsa.pub kerbambari3.example.com
ssh-copy-id -i ~/.ssh/id_rsa.pub kerbambari4.example.com
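In essence, ssh-copy-id appends your public key to the remote user's ~/.ssh/authorized_keys and makes sure the permissions are what sshd expects. A local sketch of that effect (the key material and paths below are made up for illustration):

```python
import os
import stat
import tempfile

def install_pubkey(pubkey_line, ssh_dir):
    """Append a public key to authorized_keys, creating the directory
    and file with the permissions sshd expects (700 / 600)."""
    os.makedirs(ssh_dir, exist_ok=True)
    os.chmod(ssh_dir, 0o700)
    auth = os.path.join(ssh_dir, "authorized_keys")
    with open(auth, "a") as f:
        f.write(pubkey_line.rstrip("\n") + "\n")
    os.chmod(auth, 0o600)
    return auth

# Demo in a temporary directory (fake key material):
with tempfile.TemporaryDirectory() as home:
    path = install_pubkey("ssh-rsa AAAA... jay@erie1.example.com",
                          os.path.join(home, ".ssh"))
    print(open(path).read().strip())
```

If passwordless ssh still fails after copying the key, wrong permissions on ~/.ssh or authorized_keys are a common cause, since sshd silently ignores keys in group/world-writable files.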
02-17-2017
09:32 AM
@Aruna Sameera Have you configured passwordless ssh? Are you able to ssh now? You can take a look at "Configuring passphraseless SSH":
https://learninghadoopblog.wordpress.com/2013/08/03/hadoop-0-23-9-single-node-setup-on-ubuntu-13-04/
http://stackoverflow.com/questions/3663895/ssh-the-authenticity-of-host-hostname-cant-be-established

Also, I would suggest opening separate threads for different issues. It keeps the forum/community better organized by pairing a specific query with a specific answer, which helps users more than having many issues discussed as part of one single thread.

Also, as the issue asked in the original thread query is resolved, please mark the correct answer as well.
02-17-2017
08:14 AM
@Aruna Sameera Then, as mentioned earlier, you should try installing the "openssh-server": http://linux-sys-adm.com/how-to-install-and-configure-ssh-on-ubuntu-server-14.04-lts-step-by-step/
02-17-2017
07:44 AM
1 Kudo
@Aruna Sameera Ports below 1024 are reserved (privileged) ports. Can you check whether port 22 is open, and whether user "aruna" has permission to connect to it? Is the ssh service running on port 22? Also please check whether "openssh-server" is installed. If not, please install it as described in: http://linux-sys-adm.com/how-to-install-and-configure-ssh-on-ubuntu-server-14.04-lts-step-by-step/
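One quick way to check whether port 22 is reachable is a plain TCP connect; a shell equivalent would be something like "nc -z localhost 22". A small Python sketch of the same check:

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("localhost", 22))  # True only if an ssh daemon is listening locally
```

If this returns False, either sshd is not running, openssh-server is not installed, or a firewall is blocking the port.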
02-17-2017
07:34 AM
1 Kudo
@Aruna Sameera The spelling of "property" is wrong in hdfs-site.xml, throughout the file: it has <propery> where it should be <property>, e.g.:
<configuration>
  <propery>
02-17-2017
07:03 AM
@Aruna Sameera Your error looks very similar to the following: http://androidyou.blogspot.in/2011/08/how-to-hadoop-error-to-start-jobtracker.html Can you please check your configuration files for tags that are missing or not properly balanced? In your case, it is "$HADOOP_CONF_DIR/core-site.xml" and "$HADOOP_CONF_DIR/hdfs-site.xml" that you will need to check first. As per the source code, you should get this error when the number of <property> opening tags and </property> closing tags are not equal/balanced: https://github.com/apache/hadoop/blob/release-2.7.3-RC1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L2587-L2588
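The balance check implied by that error can be mimicked in a few lines; this is a crude simplification of what Hadoop's Configuration parser effectively enforces, and the sample XML fragments are hypothetical:

```python
def property_tags_balanced(xml_text):
    """Compare the number of <property> opening tags with </property>
    closing tags -- a crude check for the unbalanced-tag problem."""
    return xml_text.count("<property>") == xml_text.count("</property>")

good = "<configuration><property><name>x</name></property></configuration>"
bad = "<configuration><propery><name>x</name></property></configuration>"  # typo: <propery>
print(property_tags_balanced(good), property_tags_balanced(bad))
# -> True False
```

Note how a single misspelled opening tag (the <propery> typo from the earlier post) is enough to leave the closing tags unmatched; a real XML parser such as Python's xml.etree.ElementTree would likewise reject the bad fragment.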