
Need help: DataNode failed to start

Explorer

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 167, in <module>
    DataNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 62, in start
    datanode(action="start")
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py", line 72, in datanode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 267, in service
    Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'' returned 1. starting datanode, logging to /data1/var/log/hadoop/hdfs/hadoop-hdfs-datanode-genome-dev16.axs.out

Usage: hdfs [-a ALIAS] [--info] [-j] [-d DEPTH] [RPATH]
       hdfs [-a ALIAS] --read RPATH
       hdfs [-a ALIAS] --write [-o] RPATH
       hdfs [-a ALIAS] --download [-o] [-t THREADS] RPATH LPATH
       hdfs -h | --help | -l | --log | -v | --version

ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 127629
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
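One thing worth noting about the output above: that "Usage: hdfs ..." text is not the usage of the Apache Hadoop hdfs command. It resembles the CLI of the HdfsCLI Python package, whose older releases install a script literally named hdfs; if such a script sits earlier on the PATH than Hadoop's own hdfs, hadoop-daemon.sh would invoke the wrong program and the DataNode start would fail. A minimal check, assuming a standard HDP layout (the exact paths below are assumptions and may differ on your hosts):

# Which 'hdfs' does the hdfs user actually pick up first on its PATH?
ambari-sudo.sh su hdfs -l -s /bin/bash -c 'type -a hdfs'

# Hadoop's own client script under a typical HDP install (assumed layout)
ls -l /usr/hdp/current/hadoop-client/bin/hdfs

# A pip-installed HdfsCLI script would usually land in one of these instead
ls -l /usr/local/bin/hdfs /usr/bin/hdfs 2>/dev/null
pip show hdfs 2>/dev/null   # reports the HdfsCLI package if it is installed

If the first hit from 'type -a hdfs' is a Python script, taking it off the hdfs user's PATH (or renaming it) should let hadoop-daemon.sh find the right binary.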

Thanks & Regards
Aggy
1 REPLY

Super Mentor

@Vinod Thorwat

Do you see any error in the "/data1/var/log/hadoop/hdfs/hadoop-hdfs-datanode-genome-dev16.axs.out" file? (Commands to pull this, and the version info below, are sketched after these questions.)

Which HDP and Ambari versions are you using?

Is this error occurring on only one DataNode, or are the other DataNodes facing the same issue?

Have you recently performed an Ambari or HDP upgrade?

Is this a fresh DataNode (I mean, was this DataNode starting fine earlier)?
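If it helps, here is a minimal sketch of commands to collect the details above. The .log file name is inferred from the .out name shown in the error and may differ on your host:

# Tail the .out file named in the error, plus the matching .log file (name assumed)
tail -n 100 /data1/var/log/hadoop/hdfs/hadoop-hdfs-datanode-genome-dev16.axs.out
tail -n 200 /data1/var/log/hadoop/hdfs/hadoop-hdfs-datanode-genome-dev16.axs.log

# Versions: run the first on the DataNode host, the second on the Ambari server
hdp-select versions
ambari-server --version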

