How to debug adding a DataNode to a Hadoop cluster

Explorer

Hello Team,

 

I have installed Hadoop 1.2.1 in Oracle VirtualBox, and it is working fine without any issues.

Additionally, I have added one DataNode, but it is not showing up in the Hadoop cluster.

When I start the services on the master node, the following messages are shown:

 

/*

student@master:~/Installations/hadoop-1.2.1/bin$ ./start-all.sh
starting namenode, logging to /home/student/Installations/hadoop-1.2.1/libexec/../logs/hadoop-student-namenode-master.out
node2: bash: line 0: cd: /home/student/Installations/hadoop-1.2.1/libexec/..: No such file or directory
node2: bash: /home/student/Installations/hadoop-1.2.1/bin/hadoop-daemon.sh: No such file or directory
master: starting datanode, logging to /home/student/Installations/hadoop-1.2.1/libexec/../logs/hadoop-student-datanode-master.out
master: starting secondarynamenode, logging to /home/student/Installations/hadoop-1.2.1/libexec/../logs/hadoop-student-secondarynamenode-master.out
starting jobtracker, logging to /home/student/Installations/hadoop-1.2.1/libexec/../logs/hadoop-student-jobtracker-master.out
node2: bash: line 0: cd: /home/student/Installations/hadoop-1.2.1/libexec/..: No such file or directory
node2: bash: /home/student/Installations/hadoop-1.2.1/bin/hadoop-daemon.sh: No such file or directory
master: starting tasktracker, logging to /home/student/Installations/hadoop-1.2.1/libexec/../logs/hadoop-student-tasktracker-master.out
student@master:~/Installations/hadoop-1.2.1/bin$ jps
8786 SecondaryNameNode
9073 Jps
8860 JobTracker
8656 DataNode
9000 TaskTracker
8522 NameNode

*/

 

How can I debug the exact issue?

 

Regards,

Yknev

11 REPLIES

Champion

Are you trying to configure a single-node cluster or a multi-node cluster?

Did you try starting the newly added DataNode on that node?

./hadoop-daemon.sh start datanode

Could you check on node2 to see if you have:

/home/student/Installations/hadoop-1.2.1/libexec/
/home/student/Installations/hadoop-1.2.1/bin/hadoop-daemon.sh
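
For example, a quick check from the master (assuming passwordless SSH to node2 is already set up, which start-all.sh needs anyway):

/*

# run from the master: verify the paths start-all.sh will look for on node2
ssh node2 'ls -ld /home/student/Installations/hadoop-1.2.1/libexec'
ssh node2 'ls -l /home/student/Installations/hadoop-1.2.1/bin/hadoop-daemon.sh'

*/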

Explorer

Hello,

 

Are you trying to configure a single-node cluster or a multi-node cluster?

Multi-node.

Did you try starting the newly added DataNode on that node?

Yes, I am able to start the newly added DataNode.

I have not found any issue in node2.

Regards,

Yknev

Champion

If you can start the DataNode on node2, then where exactly are you having the issue?

Explorer

Hello,

 

I am facing the issue when I start the services on the master node.

I have added the node2 entry in the slaves file on the master node. After that, when I try to start the services on the master node, I see the errors below:

 

/*

node2: bash: line 0: cd: /home/student/Installations/hadoop-1.2.1/libexec/..: No such file or directory
node2: bash: /home/student/Installations/hadoop-1.2.1/bin/hadoop-daemon.sh: No such file or directory

*/

 

Champion

If you go to the path below in your terminal:

/home/student/Installations/hadoop-1.2.1/bin/

and perform an ls -l command, could you let me know if you see this file:

hadoop-daemon.sh

Based on the error, it looks like it is missing.
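
For context, start-all.sh starts the slave daemons over SSH and reuses the master's own installation path on every host listed in conf/slaves. Roughly, a simplified sketch of what bin/slaves.sh does under the hood (not the exact script):

/*

# simplified sketch of the slaves-driven startup in Hadoop 1.x;
# the same HADOOP_HOME path must therefore exist on every slave
for slave in $(cat "$HADOOP_HOME/conf/slaves"); do
  ssh "$slave" "cd $HADOOP_HOME/libexec/.. && $HADOOP_HOME/bin/hadoop-daemon.sh start datanode"
done

*/

This would explain the node2 errors if the installation path on node2 differs from the one on the master.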

Explorer

Hello,

 

I am able to see the file in this path:

 

student@master:~/Installations/hadoop-1.2.1/bin$ pwd
/home/student/Installations/hadoop-1.2.1/bin

 

student@master:~/Installations/hadoop-1.2.1/bin$ ls -lrt *hadoop*
-rwxr-xr-x 1 student student  1329 Jul 23  2013 hadoop-daemons.sh
-rwxr-xr-x 1 student student  5064 Jul 23  2013 hadoop-daemon.sh
-rwxr-xr-x 1 student student  2643 Jul 23  2013 hadoop-config.sh
-rwxr-xr-x 1 student student 15147 Jul 23  2013 hadoop

 

Champion

Did you check the slave nodes to see whether all the necessary Hadoop libraries and Hadoop conf are in place?

Also, could you paste the NameNode and DataNode logs?
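
For example, something like this should grab the relevant tails. The log names follow the hadoop-<user>-<daemon>-<host>.log pattern visible in the start-all.sh output; the node2 path and file name are assumptions, so adjust them to your install:

/*

# on the master: tail the NameNode and DataNode logs
tail -n 100 /home/student/Installations/hadoop-1.2.1/logs/hadoop-student-namenode-master.log
tail -n 100 /home/student/Installations/hadoop-1.2.1/logs/hadoop-student-datanode-master.log

# on node2 (adjust the path to node2's install):
tail -n 100 ~/hadoop-1.2.1/logs/hadoop-node2user-datanode-node2.log

*/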

Explorer

Hello,

 

Please find below the conf files from node2 (which I need to configure), followed by the log files...

 

/*

Core-site file:

node2user@node2:~/hadoop-1.2.1/conf$ cat core-site.xml
<configuration>
<property>
    <name>fs.default.name</name>
    <value>hadoop://node2:9000</value>
</property>
<property>
    <name>hdfs.tmp.dir</name>
    <value>/home/node2user/hadoop_tmp_dir</value>
</property>
</configuration>
node2user@node2:~/hadoop-1.2.1/conf$

mapred-site file:

node2user@node2:~/hadoop-1.2.1/conf$ cat mapred-site.xml
<configuration>
<property>
    <name>mapred.job.tracker</name>
    <value>node2:9001</value>
</property>
</configuration>
node2user@node2:~/hadoop-1.2.1/conf$

hdfs-site file:

node2user@node2:~/hadoop-1.2.1/conf$ cat hdfs-site.xml
<configuration>
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
</configuration>
node2user@node2:~/hadoop-1.2.1/conf$

slaves file:

node2user@node2:~/hadoop-1.2.1/conf$ cat slaves
node2

node2user@node2:~/hadoop-1.2.1/bin$ jps
10908 JobTracker
11864 Jps
11043 TaskTracker
11741 DataNode
11828 NameNode

*/

 

NameNode and DataNode log files from the master. NameNode log:

 

/*

2017-06-24 02:12:03,291 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* heartbeatCheck: lost heartbeat from 127.0.0.1:50010
2017-06-24 02:12:03,307 INFO org.apache.hadoop.net.NetworkTopology: Removing a node: /default-rack/127.0.0.1:50010
2017-06-24 02:12:03,948 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: node registration from 127.0.0.1:50010 storage DS-1235995543-10.0.0.75-50010-1497411172448
2017-06-24 02:12:03,951 INFO org.apache.hadoop.net.NetworkTopology: Removing a node: /default-rack/127.0.0.1:50010
2017-06-24 02:12:03,951 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010
2017-06-24 02:12:03,966 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* NameNode.blocksBeingWrittenReport: from 127.0.0.1:50010 0 blocks
2017-06-24 02:12:06,931 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_-6809286271732960763_1011 size 4
2017-06-24 02:12:06,934 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* processReport: from 127.0.0.1:50010, blocks: 1, processing time: 3 msecs
2017-06-24 02:15:54,044 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2017-06-24 02:15:54,044 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2017-06-24 02:15:54,047 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits
2017-06-24 02:15:54,047 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits
2017-06-24 02:15:54,400 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Opening connection to http://0.0.0.0:50090/getimage?getimage=1
2017-06-24 02:15:54,411 INFO org.apache.hadoop.hdfs.server.namenode.GetImageServlet: Downloaded new fsimage with checksum: d99a4e8edd1253584c2e822cbd71d685
2017-06-24 02:15:54,417 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll FSImage from 127.0.0.1
2017-06-24 02:15:54,418 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 113
2017-06-24 02:15:54,422 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits.new
2017-06-24 02:15:54,422 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits.new
2017-06-24 02:42:47,123 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* processReport: from 127.0.0.1:50010, blocks: 1, processing time: 1 msecs
2017-06-24 02:43:13,211 INFO org.apache.hadoop.util.HostsFileReader: Setting the includes file to
2017-06-24 02:43:13,212 INFO org.apache.hadoop.util.HostsFileReader: Setting the excludes file to
2017-06-24 02:43:13,212 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2017-06-24 02:46:05,011 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/10.0.0.75
************************************************************/
2017-06-24 02:47:35,303 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/10.0.0.75
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.7.0_80
************************************************************/
2017-06-24 02:47:44,743 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-06-24 02:47:45,526 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2017-06-24 02:47:45,621 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-06-24 02:47:45,622 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2017-06-24 02:47:51,387 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2017-06-24 02:47:51,452 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2017-06-24 02:47:51,546 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2017-06-24 02:47:51,567 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2017-06-24 02:47:52,022 INFO org.apache.hadoop.hdfs.util.GSet: Computing capacity for map BlocksMap
2017-06-24 02:47:52,022 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 64-bit
2017-06-24 02:47:52,023 INFO org.apache.hadoop.hdfs.util.GSet: 2.0% max memory = 1013645312
2017-06-24 02:47:52,023 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^21 = 2097152 entries
2017-06-24 02:47:52,023 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2017-06-24 02:47:52,532 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=student
2017-06-24 02:47:52,534 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2017-06-24 02:47:52,538 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2017-06-24 02:47:52,842 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2017-06-24 02:47:52,842 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2017-06-24 02:47:55,772 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2017-06-24 02:47:56,510 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
2017-06-24 02:47:56,511 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2017-06-24 02:47:56,827 INFO org.apache.hadoop.hdfs.server.common.Storage: Start loading image file /home/student/Installations/hadoop-1.2.1/data/dfs/name/current/fsimage
2017-06-24 02:47:56,828 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 9
2017-06-24 02:47:57,238 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2017-06-24 02:47:57,239 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file /home/student/Installations/hadoop-1.2.1/data/dfs/name/current/fsimage of size 989 bytes loaded in 0 seconds.
2017-06-24 02:47:57,268 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Start loading edits file /home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits
2017-06-24 02:47:57,275 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: EOF of /home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits, reached end of edit log Number of transactions found: 0.  Bytes read: 4
2017-06-24 02:47:57,276 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Start checking end of edit log (/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits) ...
2017-06-24 02:47:57,277 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Checked the bytes after the end of edit log (/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits):
2017-06-24 02:47:57,278 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Padding position  = -1 (-1 means padding not found)
2017-06-24 02:47:57,278 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Edit log length   = 4
2017-06-24 02:47:57,279 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Read length       = 4
2017-06-24 02:47:57,279 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Corruption length = 0
2017-06-24 02:47:57,279 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Toleration length = 0 (= dfs.namenode.edits.toleration.length)
2017-06-24 02:47:57,497 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Summary: |---------- Read=4 ----------|-- Corrupt=0 --|-- Pad=0 --|
2017-06-24 02:47:57,514 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edits file /home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2017-06-24 02:47:57,567 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file /home/student/Installations/hadoop-1.2.1/data/dfs/name/current/fsimage of size 989 bytes saved in 0 seconds.
2017-06-24 02:47:58,030 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits
2017-06-24 02:47:58,031 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits
2017-06-24 02:47:58,595 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2017-06-24 02:47:58,595 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 6399 msecs
2017-06-24 02:47:58,627 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.threshold.pct          = 0.9990000128746033
2017-06-24 02:47:58,627 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2017-06-24 02:47:58,628 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.extension              = 30000
2017-06-24 02:47:58,628 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks excluded by safe block count: 0 total blocks: 1 and thus the safe blocks: 1
2017-06-24 02:47:58,674 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON
The reported blocks is only 0 but the threshold is 0.9990 and the total blocks 1. Safe mode will be turned off automatically.
2017-06-24 02:47:58,966 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2017-06-24 02:47:59,102 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2017-06-24 02:47:59,510 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2017-06-24 02:47:59,596 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9000 registered.
2017-06-24 02:47:59,602 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9000 registered.
2017-06-24 02:47:59,656 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: 127.0.0.1/127.0.0.1:9000
2017-06-24 02:48:02,547 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-06-24 02:48:03,814 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2017-06-24 02:48:04,106 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
2017-06-24 02:48:04,401 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2017-06-24 02:48:04,633 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2017-06-24 02:48:04,633 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2017-06-24 02:48:04,638 INFO org.mortbay.log: jetty-6.1.26
2017-06-24 02:48:12,038 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
2017-06-24 02:48:12,040 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
2017-06-24 02:48:12,079 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2017-06-24 02:48:12,043 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2017-06-24 02:48:12,150 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000: starting
2017-06-24 02:48:12,154 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000: starting
2017-06-24 02:48:12,155 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000: starting
2017-06-24 02:48:12,156 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000: starting
2017-06-24 02:48:12,157 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000: starting
2017-06-24 02:48:12,150 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9000: starting
2017-06-24 02:48:12,167 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000: starting
2017-06-24 02:48:12,163 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000: starting
2017-06-24 02:48:12,159 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000: starting
2017-06-24 02:48:12,190 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000: starting
2017-06-24 02:48:36,186 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: node registration from 127.0.0.1:50010 storage DS-1235995543-10.0.0.75-50010-1497411172448
2017-06-24 02:48:36,280 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010
2017-06-24 02:48:36,285 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON
The reported blocks is only 0 but the threshold is 0.9990 and the total blocks 1. Safe mode will be turned off automatically.
2017-06-24 02:48:36,339 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* NameNode.blocksBeingWrittenReport: from 127.0.0.1:50010 0 blocks
2017-06-24 02:48:36,760 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode extension entered
The reported blocks 1 has reached the threshold 0.9990 of total blocks 1. Safe mode will be turned off automatically in 29 seconds.
2017-06-24 02:48:36,774 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* processReport: from 127.0.0.1:50010, blocks: 1, processing time: 167 msecs
2017-06-24 02:48:56,801 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON
The reported blocks 1 has reached the threshold 0.9990 of total blocks 1. Safe mode will be turned off automatically in 9 seconds.
2017-06-24 02:49:06,904 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 1
2017-06-24 02:49:06,907 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2017-06-24 02:49:06,907 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 1
2017-06-24 02:49:06,908 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of  over-replicated blocks = 0
2017-06-24 02:49:06,908 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 86 msec
2017-06-24 02:49:06,908 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 74 secs
2017-06-24 02:49:06,908 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode is OFF
2017-06-24 02:49:06,909 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 1 racks and 1 datanodes
2017-06-24 02:49:06,910 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 1 blocks
2017-06-24 02:49:08,084 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 68 msec
2017-06-24 02:49:08,085 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 68 msec processing time, 68 msec clock time, 1 cycles
2017-06-24 02:49:08,094 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 3 msec
2017-06-24 02:49:08,095 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 3 msec processing time, 3 msec clock time, 1 cycles
2017-06-24 02:49:08,089 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 1 Total time for transactions(ms): 12 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2017-06-24 02:49:08,364 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* addToInvalidates: blk_-6809286271732960763 to 127.0.0.1:50010
2017-06-24 02:49:09,919 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /home/student/Installations/hadoop-1.2.1/data/mapred/system/jobtracker.info. blk_8855125853968466302_1012
2017-06-24 02:49:11,098 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* ask 127.0.0.1:50010 to delete  blk_-6809286271732960763_1011
2017-06-24 02:49:12,074 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_8855125853968466302_1012 size 4
2017-06-24 02:49:12,169 INFO org.apache.hadoop.hdfs.StateChange: Removing lease on  /home/student/Installations/hadoop-1.2.1/data/mapred/system/jobtracker.info from client DFSClient_NONMAPREDUCE_894096851_1
2017-06-24 02:49:12,186 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /home/student/Installations/hadoop-1.2.1/data/mapred/system/jobtracker.info is closed by DFSClient_NONMAPREDUCE_894096851_1
2017-06-24 02:53:31,418 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2017-06-24 02:53:31,419 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 8 Total time for transactions(ms): 40 Number of transactions batched in Syncs: 0 Number of syncs: 6 SyncTimes(ms): 408
2017-06-24 02:53:31,438 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=935, editlog=/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits
2017-06-24 02:53:31,440 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 935, editlog=/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits
2017-06-24 02:53:39,031 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Opening connection to http://0.0.0.0:50090/getimage?getimage=1
2017-06-24 02:53:39,920 INFO org.apache.hadoop.hdfs.server.namenode.GetImageServlet: Downloaded new fsimage with checksum: 84c44e778370c4d66eb1b7d375d903a9
2017-06-24 02:53:39,931 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll FSImage from 127.0.0.1
2017-06-24 02:53:39,931 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 186
2017-06-24 02:53:40,302 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits.new
2017-06-24 02:53:40,303 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits.new
2017-06-24 03:09:25,226 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* processReport: from 127.0.0.1:50010, blocks: 1, processing time: 1 msecs
2017-06-24 03:53:41,032 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2017-06-24 03:53:41,035 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2017-06-24 03:53:41,038 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits
2017-06-24 03:53:41,039 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits
2017-06-24 03:53:41,431 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Opening connection to http://0.0.0.0:50090/getimage?getimage=1
2017-06-24 03:53:41,477 INFO org.apache.hadoop.hdfs.server.namenode.GetImageServlet: Downloaded new fsimage with checksum: 240c280aad3c4f6c39dd9cb54479bf53
2017-06-24 03:53:41,482 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll FSImage from 127.0.0.1
2017-06-24 03:53:41,483 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 124
2017-06-24 03:53:41,500 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits.new
2017-06-24 03:53:41,501 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits.new
2017-06-24 09:16:48,860 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* heartbeatCheck: lost heartbeat from 127.0.0.1:50010
2017-06-24 09:16:48,862 INFO org.apache.hadoop.net.NetworkTopology: Removing a node: /default-rack/127.0.0.1:50010
2017-06-24 09:16:49,090 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: node registration from 127.0.0.1:50010 storage DS-1235995543-10.0.0.75-50010-1497411172448
2017-06-24 09:16:49,091 INFO org.apache.hadoop.net.NetworkTopology: Removing a node: /default-rack/127.0.0.1:50010
2017-06-24 09:16:49,092 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010
2017-06-24 09:16:49,130 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* NameNode.blocksBeingWrittenReport: from 127.0.0.1:50010 0 blocks
2017-06-24 09:16:52,020 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_8855125853968466302_1012 size 4
2017-06-24 09:16:52,021 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* processReport: from 127.0.0.1:50010, blocks: 1, processing time: 1 msecs
2017-06-24 09:18:54,734 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1
2017-06-24 09:18:54,735 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2017-06-24 09:18:54,738 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits
2017-06-24 09:18:54,738 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits
2017-06-24 09:18:55,086 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Opening connection to http://0.0.0.0:50090/getimage?getimage=1
2017-06-24 09:18:55,101 INFO org.apache.hadoop.hdfs.server.namenode.GetImageServlet: Downloaded new fsimage with checksum: f35ad94fa30fe3d4569a1539fa604496
2017-06-24 09:18:55,106 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll FSImage from 127.0.0.1
2017-06-24 09:18:55,106 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 18
2017-06-24 09:18:55,110 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits.new
2017-06-24 09:18:55,110 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/home/student/Installations/hadoop-1.2.1/data/dfs/name/current/edits.new
2017-06-24 09:52:08,511 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* processReport: from 127.0.0.1:50010, blocks: 1, processing time: 0 msecs

*/

 

DataNode log:

 

/*

2017-06-24 02:12:03,936 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand action: DNA_REGISTER
2017-06-24 02:12:03,959 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Finished generating blocks being written report for 1 volumes in 0 seconds
2017-06-24 02:12:03,972 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Finished asynchronous block report scan in 0ms
2017-06-24 02:12:06,954 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 1 blocks took 0 msec to generate and 25 msecs for RPC and NN processing
2017-06-24 02:42:44,114 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Finished asynchronous block report scan in 1ms
2017-06-24 02:42:47,124 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 1 blocks took 1 msec to generate and 5 msecs for RPC and NN processing
2017-06-24 02:46:05,438 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to localhost/127.0.0.1:9000 failed on local exception: java.io.IOException: Connection reset by peer
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1150)
    at org.apache.hadoop.ipc.Client.call(Client.java:1118)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy3.sendHeartbeat(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:1031)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1588)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:197)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:384)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.FilterInputStream.read(FilterInputStream.java:133)
    at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:364)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:845)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:790)

2017-06-24 02:46:09,301 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:46:10,308 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:46:10,939 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at master/10.0.0.75
************************************************************/
2017-06-24 02:47:17,628 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = master/10.0.0.75
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.7.0_80
************************************************************/
2017-06-24 02:47:24,863 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-06-24 02:47:25,596 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2017-06-24 02:47:25,682 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-06-24 02:47:25,698 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2017-06-24 02:47:31,803 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2017-06-24 02:47:32,308 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2017-06-24 02:47:40,830 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:47:41,838 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:47:42,843 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:47:43,848 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:47:44,862 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:47:45,872 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:47:46,879 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:47:47,891 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:47:48,901 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:47:49,910 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:47:49,967 INFO org.apache.hadoop.ipc.RPC: Server at localhost/127.0.0.1:9000 not available yet, Zzzzz...
2017-06-24 02:47:51,979 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:47:52,988 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:47:53,994 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:47:55,004 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:47:56,011 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:47:57,014 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:47:58,024 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:47:59,027 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:48:00,035 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-06-24 02:48:13,938 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
2017-06-24 02:48:14,241 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened data transfer server at 50010
2017-06-24 02:48:14,358 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2017-06-24 02:48:14,625 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2017-06-24 02:48:16,551 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-06-24 02:48:18,914 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2017-06-24 02:48:19,408 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
2017-06-24 02:48:19,408 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
2017-06-24 02:48:19,409 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
2017-06-24 02:48:19,414 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2017-06-24 02:48:19,415 INFO org.mortbay.log: jetty-6.1.26
2017-06-24 02:48:31,395 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2017-06-24 02:48:31,607 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2017-06-24 02:48:31,624 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source DataNode registered.
2017-06-24 02:48:35,894 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2017-06-24 02:48:36,019 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort50020 registered.
2017-06-24 02:48:36,022 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort50020 registered.
2017-06-24 02:48:36,042 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(master:50010, storageID=DS-1235995543-10.0.0.75-50010-1497411172448, infoPort=50075, ipcPort=50020)
2017-06-24 02:48:36,299 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Finished generating blocks being written report for 1 volumes in 0 seconds
2017-06-24 02:48:36,359 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-1235995543-10.0.0.75-50010-1497411172448, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/home/student/Installations/hadoop-1.2.1/data/dfs/data/current'}
2017-06-24 02:48:36,374 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2017-06-24 02:48:36,383 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Finished asynchronous block report scan in 30ms
2017-06-24 02:48:36,448 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2017-06-24 02:48:36,476 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
2017-06-24 02:48:36,482 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
2017-06-24 02:48:36,485 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
2017-06-24 02:48:36,491 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
2017-06-24 02:48:37,062 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 1 blocks took 65 msec to generate and 473 msecs for RPC and NN processing
2017-06-24 02:48:37,095 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner
2017-06-24 02:48:37,115 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated rough (lockless) block report in 17 ms
2017-06-24 02:49:11,467 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving blk_8855125853968466302_1012 src: /127.0.0.1:37938 dest: /127.0.0.1:50010
2017-06-24 02:49:12,069 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:37938, dest: /127.0.0.1:50010, bytes: 4, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_894096851_1, offset: 0, srvID: DS-1235995543-10.0.0.75-50010-1497411172448, blockid: blk_8855125853968466302_1012, duration: 68396632
2017-06-24 02:49:12,134 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for blk_8855125853968466302_1012 terminating
2017-06-24 02:49:12,623 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling blk_-6809286271732960763_1011 file /home/student/Installations/hadoop-1.2.1/data/dfs/data/current/blk_-6809286271732960763 for deletion
2017-06-24 02:49:12,656 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_-6809286271732960763_1011 at file /home/student/Installations/hadoop-1.2.1/data/dfs/data/current/blk_-6809286271732960763
2017-06-24 02:53:55,377 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded blk_8855125853968466302_1012
2017-06-24 03:09:22,225 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Finished asynchronous block report scan in 1ms
2017-06-24 03:09:25,231 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 1 blocks took 1 msec to generate and 8 msecs for RPC and NN processing
2017-06-24 09:16:49,076 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeCommand action: DNA_REGISTER
2017-06-24 09:16:49,121 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Finished generating blocks being written report for 1 volumes in 0 seconds
2017-06-24 09:16:49,184 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Finished asynchronous block report scan in 13ms
2017-06-24 09:16:52,022 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 1 blocks took 1 msec to generate and 7 msecs for RPC and NN processing
2017-06-24 09:52:05,507 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Finished asynchronous block report scan in 1ms
2017-06-24 09:52:08,514 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 1 blocks took 0 msec to generate and 5 msecs for RPC and NN processing

*/

 

Thanks in advance for checking the log files...

Champion

How many NameNodes do you have in your cluster?

I see one NameNode on the master and one NameNode on the slave.

Are you trying to configure High Availability, that is, to make one NameNode active while the other is on standby?

If not, remove the NameNode from node2.
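
For example, a minimal sketch, run from node2's bin directory, to stop the extra master daemons while leaving the DataNode and TaskTracker running:

/*

node2user@node2:~/hadoop-1.2.1/bin$ ./hadoop-daemon.sh stop namenode
node2user@node2:~/hadoop-1.2.1/bin$ ./hadoop-daemon.sh stop jobtracker

*/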

 

 

Is the configuration below the same on the master and the other slave node? Why mention it as "node2"? Change it to the node where the master resides, i.e., the NameNode.

Or, to avoid further confusion, just copy the whole conf directory from the master to the newly added slave:

/etc/hadoop/conf/* to node2's /etc/hadoop/conf/

<configuration>
<property>
    <name>fs.default.name</name>
    <value>hadoop://node2:9000</value>
</property>
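
As a hedged sketch (not verified against this setup): after copying the conf over, node2's fs.default.name should use the hdfs:// scheme and point at the master, not at node2, and mapred.job.tracker likewise. Note also that the master's log shows the NameNode bound to 127.0.0.1:9000, so the master's own fs.default.name may need the master hostname rather than localhost for remote DataNodes to register. For example, with paths taken from the prompts in this thread:

/*

# copy the conf from the master's tarball install to node2's
scp /home/student/Installations/hadoop-1.2.1/conf/* node2user@node2:/home/node2user/hadoop-1.2.1/conf/

# then on every node, core-site.xml / mapred-site.xml should point at the master:
#   <name>fs.default.name</name>    <value>hdfs://master:9000</value>
#   <name>mapred.job.tracker</name>  <value>master:9001</value>

*/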