
Cloudera namenode not starting due to ulimit error

Contributor

Hi,
I am trying to start the NameNode from the Cloudera package, but it fails. Checking the log, I found it is failing on ulimit. Can anyone tell me what this error means exactly? I also set 777 permissions on the /data directories, but it still did not work. I am trying to set up a single-node cluster on CentOS 7 using Google Cloud IaaS.

 

[root@hadoop admin]# sudo service hadoop-hdfs-namenode start
starting namenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-namenode-hadoop.out
Failed to start Hadoop namenode. Return value: 1           [FAILED]


[root@hadoop admin]# cat /var/log/hadoop-hdfs/hadoop-hdfs-namenode-hadoop.out
ulimit -a for user hdfs
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 14103
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32768
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
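
Since chmod 777 on /data was already tried: the packaged HDFS services expect the NameNode metadata directory to be owned by hdfs:hdfs, and a brand-new NameNode must be formatted once before its first start. A hedged sketch of those two checks, using an illustrative /data/dfs/nn path (substitute whatever dfs.namenode.name.dir in hdfs-site.xml actually points to):

# Illustrative path; replace with your configured dfs.namenode.name.dir
ls -ld /data/dfs/nn
# The directory should be owned by hdfs:hdfs, not just world-writable
sudo chown -R hdfs:hdfs /data/dfs/nn
# A NameNode that was never formatted will also refuse to start
sudo -u hdfs hdfs namenode -format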
 
 
1 ACCEPTED SOLUTION

Contributor

I did not see the reply. It was my problem: I was looking into the .out file instead of the .log. It's solved.


@csguna wrote:

Not sure if you are still looking for the solution; I am kind of late, though.


 

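To make the fix concrete: with the packaged init scripts, the .out file only receives what gets printed to stdout at startup, which is essentially the ulimit dump shown above, while the actual stack trace goes to the matching .log file. A quick way to see the real error, assuming the same default log naming shown earlier in the thread:

# The .out file holds only the startup ulimit dump; the real error is here
sudo tail -n 100 /var/log/hadoop-hdfs/hadoop-hdfs-namenode-hadoop.log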

2 REPLIES

Champion

Not sure if you are still looking for the solution; I am kind of late, though.

Contributor

I did not see the reply. It was my problem: I was looking into the .out file instead of the .log. It's solved.


@csguna wrote:

Not sure if you are still looking for the solution; I am kind of late, though.