
Cloudera namenode not starting due to ulimit error

I am trying to start the NameNode using the Cloudera package, but it fails. Checking the log, I found it is failing at ulimit. Can anyone tell me what this error is exactly? I also gave 777 permissions to the /data directories, but it still did not work. I am trying to set up a single-node cluster on CentOS 7 using Google Cloud IaaS.


[root@hadoop admin]# sudo service hadoop-hdfs-namenode start
starting namenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-namenode-hadoop.out
Failed to start Hadoop namenode. Return value: 1           [FAILED]

[root@hadoop admin]# cat /var/log/hadoop-hdfs/hadoop-hdfs-namenode-hadoop.out
ulimit -a for user hdfs
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 14103
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32768
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
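For what it's worth, here is the kind of check I ran to compare the limit against what I believe Cloudera recommends (the 32768 threshold is my assumption from their documentation; the dump above already shows open files at exactly 32768, so the limit itself may not be the real problem):

```shell
# Hypothetical sanity check: compare the current shell's open-files limit
# against 32768, the minimum I understand Cloudera recommends for HDFS
# daemons. Run it as the hdfs user (e.g. via sudo -u hdfs) to check the
# limit the NameNode actually sees.
limit=$(ulimit -n)
if [ "$limit" -ge 32768 ]; then
  echo "open files limit OK: $limit"
else
  echo "open files limit low: $limit (recommended >= 32768)"
fi
```

Since the .out file only ever shows this ulimit dump, I suspect the actual failure reason is elsewhere (the .log file in the same directory), but I could not make sense of it.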