DataNode does not start

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 155, in <module>
    DataNode().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 62, in start
    datanode(action="start")
  File "/usr/lib/ambari-agent/lib/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py", line 68, in datanode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 276, in service
    Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 262, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/2.6.5.1175-1/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.5.1175-1/hadoop/conf start datanode'' returned 1. starting datanode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker2.sip.com.out
Error: could not find libjava.so
Error: Could not find Java SE Runtime Environment.

stdout: /var/lib/ambari-agent/data/output-310.txt

2019-07-12 04:26:04,927 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.1175-1 -> 2.6.5.1175-1
2019-07-12 04:26:04,965 - Using hadoop conf dir: /usr/hdp/2.6.5.1175-1/hadoop/conf
2019-07-12 04:26:05,332 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.1175-1 -> 2.6.5.1175-1
2019-07-12 04:26:05,344 - Using hadoop conf dir: /usr/hdp/2.6.5.1175-1/hadoop/conf
2019-07-12 04:26:05,346 - Group['hdfs'] {}
2019-07-12 04:26:05,348 - Group['hadoop'] {}
2019-07-12 04:26:05,349 - Group['users'] {}
2019-07-12 04:26:05,350 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-07-12 04:26:05,352 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-07-12 04:26:05,354 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2019-07-12 04:26:05,356 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2019-07-12 04:26:05,358 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-07-12 04:26:05,361 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-07-12 04:26:05,373 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-07-12 04:26:05,374 - Group['hdfs'] {}
2019-07-12 04:26:05,375 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2019-07-12 04:26:05,376 - FS Type: 
2019-07-12 04:26:05,376 - Directory['/etc/hadoop'] {'mode': 0755}
2019-07-12 04:26:05,407 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-07-12 04:26:05,408 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2019-07-12 04:26:05,438 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2019-07-12 04:26:05,453 - Skipping Execute[('setenforce', '0')] due to not_if
2019-07-12 04:26:05,454 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2019-07-12 04:26:05,459 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2019-07-12 04:26:05,460 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2019-07-12 04:26:05,469 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2019-07-12 04:26:05,473 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2019-07-12 04:26:05,484 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2019-07-12 04:26:05,507 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-07-12 04:26:05,509 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2019-07-12 04:26:05,511 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2019-07-12 04:26:05,521 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2019-07-12 04:26:05,530 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2019-07-12 04:26:06,195 - Using hadoop conf dir: /usr/hdp/2.6.5.1175-1/hadoop/conf
2019-07-12 04:26:06,210 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.1175-1 -> 2.6.5.1175-1
2019-07-12 04:26:06,258 - Using hadoop conf dir: /usr/hdp/2.6.5.1175-1/hadoop/conf
2019-07-12 04:26:06,292 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2019-07-12 04:26:06,311 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2019-07-12 04:26:06,312 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.1175-1/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2019-07-12 04:26:06,333 - Generating config: /usr/hdp/2.6.5.1175-1/hadoop/conf/hadoop-policy.xml
2019-07-12 04:26:06,334 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-07-12 04:26:06,359 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.1175-1/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2019-07-12 04:26:06,377 - Generating config: /usr/hdp/2.6.5.1175-1/hadoop/conf/ssl-client.xml
2019-07-12 04:26:06,377 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-07-12 04:26:06,394 - Directory['/usr/hdp/2.6.5.1175-1/hadoop/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2019-07-12 04:26:06,396 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.1175-1/hadoop/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2019-07-12 04:26:06,414 - Generating config: /usr/hdp/2.6.5.1175-1/hadoop/conf/secure/ssl-client.xml
2019-07-12 04:26:06,414 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-07-12 04:26:06,430 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.1175-1/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2019-07-12 04:26:06,444 - Generating config: /usr/hdp/2.6.5.1175-1/hadoop/conf/ssl-server.xml
2019-07-12 04:26:06,445 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-07-12 04:26:06,459 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.1175-1/hadoop/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2019-07-12 04:26:06,474 - Generating config: /usr/hdp/2.6.5.1175-1/hadoop/conf/hdfs-site.xml
2019-07-12 04:26:06,475 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-07-12 04:26:06,559 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.1175-1/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2019-07-12 04:26:06,572 - Generating config: /usr/hdp/2.6.5.1175-1/hadoop/conf/core-site.xml
2019-07-12 04:26:06,572 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-07-12 04:26:06,620 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2019-07-12 04:26:06,621 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.1175-1 -> 2.6.5.1175-1
2019-07-12 04:26:06,627 - Directory['/var/lib/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'group': 'hadoop', 'mode': 0751}
2019-07-12 04:26:06,628 - Directory['/var/lib/ambari-agent/data/datanode'] {'create_parents': True, 'mode': 0755}
2019-07-12 04:26:06,639 - Host contains mounts: ['/sys', '/proc', '/dev', '/sys/kernel/security', '/dev/shm', '/dev/pts', '/run', '/sys/fs/cgroup', '/sys/fs/cgroup/systemd', '/sys/fs/pstore', '/sys/fs/cgroup/cpu,cpuacct', '/sys/fs/cgroup/blkio', '/sys/fs/cgroup/net_cls,net_prio', '/sys/fs/cgroup/devices', '/sys/fs/cgroup/freezer', '/sys/fs/cgroup/perf_event', '/sys/fs/cgroup/hugetlb', '/sys/fs/cgroup/pids', '/sys/fs/cgroup/memory', '/sys/fs/cgroup/cpuset', '/sys/kernel/config', '/', '/proc/sys/fs/binfmt_misc', '/dev/hugepages', '/dev/mqueue', '/sys/kernel/debug', '/boot', '/var/lib/nfs/rpc_pipefs', '/run/user/0', '/run/user/1002'].
2019-07-12 04:26:06,640 - Mount point for directory /hadoop/hdfs/data is /
2019-07-12 04:26:06,640 - Mount point for directory /hadoop/hdfs/data is /
2019-07-12 04:26:06,640 - Forcefully ensuring existence and permissions of the directory: /hadoop/hdfs/data
2019-07-12 04:26:06,641 - Directory['/hadoop/hdfs/data'] {'group': 'hadoop', 'cd_access': 'a', 'create_parents': True, 'ignore_failures': True, 'mode': 0750, 'owner': 'hdfs'}
2019-07-12 04:26:06,643 - Changing permission for /hadoop/hdfs/data from 755 to 750
2019-07-12 04:26:06,653 - Host contains mounts: ['/sys', '/proc', '/dev', '/sys/kernel/security', '/dev/shm', '/dev/pts', '/run', '/sys/fs/cgroup', '/sys/fs/cgroup/systemd', '/sys/fs/pstore', '/sys/fs/cgroup/cpu,cpuacct', '/sys/fs/cgroup/blkio', '/sys/fs/cgroup/net_cls,net_prio', '/sys/fs/cgroup/devices', '/sys/fs/cgroup/freezer', '/sys/fs/cgroup/perf_event', '/sys/fs/cgroup/hugetlb', '/sys/fs/cgroup/pids', '/sys/fs/cgroup/memory', '/sys/fs/cgroup/cpuset', '/sys/kernel/config', '/', '/proc/sys/fs/binfmt_misc', '/dev/hugepages', '/dev/mqueue', '/sys/kernel/debug', '/boot', '/var/lib/nfs/rpc_pipefs', '/run/user/0', '/run/user/1002'].
2019-07-12 04:26:06,654 - Mount point for directory /hadoop/hdfs/data is /
2019-07-12 04:26:06,654 - File['/var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist'] {'content': '\n# This file keeps track of the last known mount-point for each dir.\n# It is safe to delete, since it will get regenerated the next time that the component of the service starts.\n# However, it is not advised to delete this file since Ambari may\n# re-create a dir that used to be mounted on a drive but is now mounted on the root.\n# Comments begin with a hash (#) symbol\n# dir,mount_point\n/hadoop/hdfs/data,/\n', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2019-07-12 04:26:06,658 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2019-07-12 04:26:06,659 - Changing owner for /var/run/hadoop from 0 to hdfs
2019-07-12 04:26:06,659 - Changing group for /var/run/hadoop from 0 to hadoop
2019-07-12 04:26:06,660 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2019-07-12 04:26:06,661 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2019-07-12 04:26:06,662 - File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'}
2019-07-12 04:26:06,693 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid']
2019-07-12 04:26:06,694 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/2.6.5.1175-1/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.5.1175-1/hadoop/conf start datanode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/2.6.5.1175-1/hadoop/libexec'}, 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'}
2019-07-12 04:26:11,141 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'hdfs'}
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker2.sip.com.out.5 <==
Error: could not find libjava.so
Error: Could not find Java SE Runtime Environment.
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 97256
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32768
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/jsvc.out <==
tail: cannot open ‘/var/log/hadoop/hdfs/jsvc.out’ for reading: Permission denied
==> /var/log/hadoop/hdfs/jsvc.err <==
tail: cannot open ‘/var/log/hadoop/hdfs/jsvc.err’ for reading: Permission denied
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker2.sip.com.out.1 <==
Error: could not find libjava.so
Error: Could not find Java SE Runtime Environment.
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 97256
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32768
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker2.sip.com.out <==
Error: could not find libjava.so
Error: Could not find Java SE Runtime Environment.
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 97256
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32768
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker2.sip.com.out.4 <==
Error: could not find libjava.so
Error: Could not find Java SE Runtime Environment.
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 97256
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32768
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker2.sip.com.out.3 <==
Error: could not find libjava.so
Error: Could not find Java SE Runtime Environment.
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 97256
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32768
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker2.sip.com.out.2 <==
Error: could not find libjava.so
Error: Could not find Java SE Runtime Environment.
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 97256
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32768
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Command failed after 1 tries

Master Mentor

@Habtamu Wubneh

Can you check your Java environment?

$ which -a java
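
If java resolves, also confirm that JAVA_HOME in hadoop-env.sh points at a complete JDK; the "could not find libjava.so" error usually means that path is wrong or the install is incomplete. A quick check (the jre/lib/amd64 layout below assumes a Linux x64 JDK 8, adjust for your JDK):

grep JAVA_HOME /usr/hdp/2.6.5.1175-1/hadoop/conf/hadoop-env.sh
ls -l $JAVA_HOME/bin/java
ls -l $JAVA_HOME/jre/lib/amd64/libjava.so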

Are you executing the java command as a non-root user? The log above also shows "tail: cannot open '/var/log/hadoop/hdfs/jsvc.out' for reading: Permission denied". What are the permissions and ownership on those files?
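
For example, to see who owns them (paths taken from the log output above):

ls -l /var/log/hadoop/hdfs/jsvc.out /var/log/hadoop/hdfs/jsvc.err
ls -ld /var/log/hadoop/hdfs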

If java is present but JAVA_HOME points at a missing or incomplete JDK, modify the JAVA_HOME value in the hadoop-env.sh file:

export JAVA_HOME=/usr/java/default
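
/usr/java/default is only an example; point JAVA_HOME at whatever JDK is actually installed on that node. One way to find it (the jdk1.8.0 path below is purely illustrative):

readlink -f $(which java)
# e.g. /usr/jdk64/jdk1.8.0_112/bin/java, so JAVA_HOME would be /usr/jdk64/jdk1.8.0_112

Keep in mind that Ambari regenerates hadoop-env.sh from the hadoop-env template, so a manual edit on the host may be overwritten on the next restart; if that happens, change it in Ambari under HDFS > Configs > Advanced hadoop-env (or re-run ambari-server setup with the correct JDK) and let Ambari push the config out.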


Then start the DataNode:

su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-datanode/../hadoop/sbin/hadoop-daemon.sh start datanode"
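
Once it comes up, you can verify the process is running and that the libjava.so error is gone from the .out file:

ps -ef | grep datanode | grep -v grep
tail -n 20 /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker2.sip.com.out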

HTH