
Failed to start Hadoop namenode and datanode. Return value: 1

Explorer

Unable to start the NameNode service; the HDFS format completed successfully.

 

[root@master conf]# for x in `cd /etc/init.d ; ls hadoop-hdfs-*` ; do service $x start ; done
starting namenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-namenode-master.cluster.com.out
Failed to start Hadoop namenode. Return value: 1 [FAILED]
[root@master conf]# cat /var/log/hadoop-hdfs/hadoop-hdfs-namenode-master.cluster.com.out
ulimit -a for user hdfs
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 7336
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32768
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
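
Note that the .out file captures only the ulimit settings printed at startup; the actual failure reason is normally written to the matching .log file in the same directory. A quick check, assuming the log naming follows the paths above:

[root@master conf]# tail -n 50 /var/log/hadoop-hdfs/hadoop-hdfs-namenode-master.cluster.com.log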

1 ACCEPTED SOLUTION

Explorer

I found the issue: it was in my core-site.xml. The property name below was not correct.

<name>fs.defaultFS</name>

Also, check the log file /var/log/hadoop-hdfs/hadoop-hdfs-namenode-hadoop.cluster.com.log rather than the /var/log/hadoop-hdfs/hadoop-hdfs-namenode-hadoop.cluster.com.out file.

The NameNode now starts.
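
For reference, a minimal core-site.xml with the property spelled correctly might look like the sketch below; the hostname and port are placeholders (8020 is a common default NameNode RPC port), so substitute your own NameNode address:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master.cluster.com:8020</value>
  </property>
</configuration>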

 


2 REPLIES


New Contributor

Hi,

I'm getting the error below while trying to start the NameNode:

Failed to start Hadoop namenode. Return value: 1 [FAILED]

I have checked the log file:

[cloudera@quickstart hadoop-hdfs]$ cat hadoop-hdfs-namenode-quickstart.cloudera.out
ulimit -a for user hdfs
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 15211
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32768
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

 

Can you please tell me where I can check the core-site.xml file properties and how to change them? I'm new to big data.
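
On a CDH quickstart VM the active Hadoop configuration normally lives under /etc/hadoop/conf, so a first look might be the sketch below (paths assumed from the quickstart defaults; editing the file needs root). The last line checks the .log file rather than the .out file, as suggested in the accepted solution:

[cloudera@quickstart ~]$ cat /etc/hadoop/conf/core-site.xml
[cloudera@quickstart ~]$ sudo vi /etc/hadoop/conf/core-site.xml
[cloudera@quickstart ~]$ tail -n 100 /var/log/hadoop-hdfs/hadoop-hdfs-namenode-quickstart.cloudera.log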