Created on 07-24-2017 10:02 AM - edited 09-16-2022 04:58 AM
Hi,
I am trying to start the namenode using the Cloudera package, but it fails. Checking the log, I found it is failing on ulimit. Can anyone tell me what this error means exactly? I also gave 777 permissions to the /data directories, but it still did not work. I am trying to set up a single-node cluster on CentOS 7 using Google Cloud IaaS.
[root@hadoop admin]# sudo service hadoop-hdfs-namenode start
starting namenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-namenode-hadoop.out
Failed to start Hadoop namenode. Return value: 1 [FAILED]
[root@hadoop admin]# cat /var/log/hadoop-hdfs/hadoop-hdfs-namenode-hadoop.out
ulimit -a for user hdfs
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 14103
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32768
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Created 08-28-2017 10:38 PM
Not sure if you are still looking for the solution; I am kind of late, though.
Created 09-09-2017 08:19 AM
I did not see the reply earlier; it was my problem. I was looking into the .out file instead of the .log file. It's solved.
@csguna wrote: Not sure if you are still looking for the solution; I am kind of late, though.
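For anyone else who lands on this thread: the ulimit dump in the .out file is just standard startup information that Hadoop prints on every launch, not the error itself. The actual stack trace goes to the corresponding .log file in the same directory. A minimal way to check it, assuming the default log location shown in the output above (the .log filename mirrors the .out filename):

sudo tail -n 100 /var/log/hadoop-hdfs/hadoop-hdfs-namenode-hadoop.log
# or jump straight to the first serious errors:
sudo grep -iE 'ERROR|FATAL' /var/log/hadoop-hdfs/hadoop-hdfs-namenode-hadoop.log | head

The lines flagged ERROR or FATAL there will usually point at the real cause, such as a missing or unformatted NameNode directory or a permissions problem on the dfs.namenode.name.dir path.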