Created on 05-05-2018 08:36 AM - edited 09-16-2022 06:10 AM
Unable to start the NameNode service. The HDFS format completed successfully.
[root@master conf]# for x in `cd /etc/init.d ; ls hadoop-hdfs-*` ; do service $x start ; done
starting namenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-namenode-master.cluster.com.out
Failed to start Hadoop namenode. Return value: 1 [FAILED]
[root@master conf]# cat /var/log/hadoop-hdfs/hadoop-hdfs-namenode-master.cluster.com.out
ulimit -a for user hdfs
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 7336
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32768
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
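The `.out` file above only records the ulimit settings; the actual failure reason is usually in the matching `.log` file. A sketch of how to pull the error out (the log path is taken from this post; the fake log written to `/tmp` is only there to make the example self-contained):

```shell
# On the real cluster you would run something like:
#   tail -n 50 /var/log/hadoop-hdfs/hadoop-hdfs-namenode-master.cluster.com.log | grep -E 'ERROR|FATAL'
# Self-contained illustration with a fake log file:
printf '%s\n' 'INFO startup' 'FATAL namenode.NameNode: java.net.BindException' > /tmp/nn.log
grep -E 'ERROR|FATAL' /tmp/nn.log
```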
Created 05-07-2018 02:53 AM
I found the issue; it was in my core-site.xml. The property name below was not correct:
<name>fs.defaultFS</name>
Please check the log file /var/log/hadoop-hdfs/hadoop-hdfs-namenode-hadoop.cluster.com.log rather than the /var/log/hadoop-hdfs/hadoop-hdfs-namenode-hadoop.cluster.com.out file.
Now I am able to start the NameNode.
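For reference, a minimal core-site.xml with fs.defaultFS set correctly might look like the sketch below (the hostname and port are placeholders; substitute your own NameNode address):

```xml
<configuration>
  <property>
    <!-- URI of the default filesystem; hostname/port here are placeholders -->
    <name>fs.defaultFS</name>
    <value>hdfs://master.cluster.com:8020</value>
  </property>
</configuration>
```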
Created 12-28-2018 11:24 AM
Hi,
I'm getting the below error while connecting to the NameNode:
Failed to start Hadoop namenode. Return value: 1 [FAILED]
I have checked the log file:
[cloudera@quickstart hadoop-hdfs]$ cat hadoop-hdfs-namenode-quickstart.cloudera.out
ulimit -a for user hdfs
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 15211
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32768
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Can you please tell me where I can check the core-site.xml file properties and how to change them, as I'm new to big data?
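In case it helps: on CDH-style installs the client configuration typically lives under /etc/hadoop/conf (an assumption; your layout may differ). A sketch of inspecting the fs.defaultFS property — the file is written to /tmp here only so the example is self-contained:

```shell
# On a real quickstart VM you would inspect /etc/hadoop/conf/core-site.xml directly.
# Create a sample core-site.xml to illustrate (hostname/port are placeholders):
cat > /tmp/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://quickstart.cloudera:8020</value>
  </property>
</configuration>
EOF
# Show the property name and its value line:
grep -A1 'fs.defaultFS' /tmp/core-site.xml | grep '<value>'
```

Edit the `<value>` element with any text editor, then restart the NameNode service for the change to take effect.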