I have a problem with new nodes added to an HDP 3.0.1 cluster: the HDFS service is OK, but the NodeManager service does not start, with these errors:
/var/lib/ambari-agent/data/errors-26141.txt
resource_management.core.exceptions.ExecutionFailed: Execution of 'ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/hdp/3.0.1.0-187/hadoop/libexec && /usr/hdp/3.0.1.0-187/hadoop-yarn/bin/yarn --config /usr/hdp/3.0.1.0-187/hadoop/conf --daemon start nodemanager' returned 1.
Usage: grep [OPTION]... PATTERN [FILE]...
Try 'grep --help' for more information.
Command line is not complete. Try option "help"
TERM environment variable not set.
ERROR: Cannot set priority of nodemanager process 34389
The TERM env variable is set.
NodeManager.log
STARTUP_MSG: java = 1.8.0_112
************************************************************/
2020-03-04 15:18:35,735 INFO nodemanager.NodeManager (LogAdapter.java:info(51)) - registered UNIX signal handlers for [TERM, HUP, INT]
2020-03-04 15:18:36,133 INFO recovery.NMLeveldbStateStoreService (NMLeveldbStateStoreService.java:openDatabase(1540)) - Using state database at /var/log/hadoop-yarn/nodemanager/recovery-state/yarn-nm-state for recovery
2020-03-04 15:18:36,143 ERROR nodemanager.NodeManager (NodeManager.java:initAndStartNodeManager(936)) - Error starting NodeManager
java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no leveldbjni64-1.8 in java.library.path, no leveldbjni-1.8 in java.library.path, no leveldbjni in java.library.path, Permission denied]
at org.fusesource.hawtjni.runtime.Library.doLoad(Library.java:182)
at org.fusesource.hawtjni.runtime.Library.load(Library.java:140)
at org.fusesource.leveldbjni.JniDBFactory.<clinit>(JniDBFactory.java:48)
at org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.openDatabase(NMLeveldbStateStoreService.java:1543)
at org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.initStorage(NMLeveldbStateStoreService.java:1531)
at org.apache.hadoop.yarn.server.nodemanager.recovery.NMStateStoreService.serviceInit(NMStateStoreService.java:353)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartRecoveryStore(NodeManager.java:285)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:358)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:933)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:1013)
2020-03-04 15:18:36,149 INFO service.AbstractService (AbstractService.java:noteFailure(267)) - Service NodeManager failed in state STOPPED
I have reviewed the execution permissions of the temporary directories.
The java.library.path files were also copied from an old node.
The NodeManager service starts as the root user, but it does not start as the yarn user.
Created 03-04-2020 03:21 PM
@san_t_o
Can you please check a few things:
1). Please verify the value set for the "" property in the NodeManager options (if it starts even for a few seconds):
# ps -ef | grep NodeManager
Things to look for:
2). If the above does not start due to the "" error, then please check the permissions set for this directory:
Example:
# ls -ld /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
drwxrwxrwt. 8 hdfs hadoop 4096 Feb 25 07:23 /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
# ls -ld /var/lib/ambari-agent/tmp/
drwxrwxrwt. 12 ambari hadoop 4096 Mar 4 01:48 /var/lib/ambari-agent/tmp/
We wanted to check the permissions on "/var/lib/ambari-agent/tmp/" and "/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir" because the "" is usually set to this directory, so the yarn user should have proper read/write access to the directories listed here.
Example:
# grep 'JAVA_LIBRARY_PATH' /etc/hadoop/conf/yarn-env.sh
export JAVA_LIBRARY_PATH="${JAVA_LIBRARY_PATH}:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir"
3). Also, please check whether you see any ""-related files here. Ideally those should be owned by the "yarn" user, as "yarn:hadoop" (hadoop is the group). This directory and its contents should be writable by the yarn user.
Example:
# ls -lart /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/libleveldbjni*
-rwxr-xr-x. 1 yarn hadoop 752803 Dec 2 06:33 /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/libleveldbjni-64-1-2529926063314066012.8
Possible Cause:
If you have ever restarted the YARN NodeManager/ResourceManager as the "root" user by mistake, the permissions on those directories/files might have changed so that they are no longer writable by the yarn user. So please check whether the directory permissions allow writing.
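If that turns out to be the cause, a minimal clean-up sketch could look like the following. The `fix_owner` helper name is mine, and the two paths in the comments are the ones shown in this thread; adjust them to your cluster before running as root.

```shell
# Hypothetical clean-up after an accidental root start: re-own anything
# under a directory that is no longer owned by the service user.
fix_owner() {
  dir=$1; owner=$2; group=$3
  find "$dir" ! -user "$owner" -exec chown "$owner:$group" {} +
}

# On the NodeManager host (paths taken from this thread):
# fix_owner /var/log/hadoop-yarn/nodemanager/recovery-state yarn hadoop
# fix_owner /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir yarn hadoop
```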
Created on 03-04-2020 03:28 PM - edited 03-04-2020 03:29 PM
In addition to my previous comment:
Which path do you see when you run the following on the failing NodeManager node?
# source /etc/hadoop/conf/yarn-env.sh
# echo $JAVA_LIBRARY_PATH
:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
(OR)
# echo $HADOOP_OPTS
-Dyarn.id.str= -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
Is that path writable for the yarn user? And does that user belong to the correct group?
# id yarn
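One way to answer both questions at once is a small probe, run as the yarn user, that tries to create and then execute a file in the directory. This is a sketch of mine (the `probe_exec` name and the usage line are assumptions, with the tmpdir path taken from this thread):

```shell
# Hypothetical probe: can this user create AND execute a file in a
# directory? This catches both ownership problems and noexec mounts.
probe_exec() {
  d=${1:-/tmp}
  f=$(mktemp "$d/probe.XXXXXX") || { echo CREATE_FAIL; return 1; }
  printf '#!/bin/sh\nexit 0\n' > "$f"
  chmod +x "$f"
  if "$f"; then echo EXEC_OK; else echo EXEC_FAIL; fi
  rm -f "$f"
}

# On the failing node, as the yarn user:
# sudo -u yarn bash -c "$(declare -f probe_exec); probe_exec /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir"
```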
Created 03-04-2020 10:57 PM
This usually happens if the directory configured for "-Djava.io.tmpdir" is mounted with the noexec option. Removing noexec from the mount options should fix the issue.
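To check this without eyeballing the full mount table, a small helper can report the options of the mount backing a given path. This is a sketch of mine (the `mount_opts` name is an assumption); it walks /proc/mounts for the longest matching mount point:

```shell
# Print the mount options covering a path, using the longest mount-point
# prefix match from /proc/mounts.
mount_opts() {
  awk -v p="$1/" '
    { mp = ($2 == "/") ? "/" : $2 "/"
      if (index(p, mp) == 1 && length(mp) > best) { best = length(mp); opts = $4 } }
    END { print opts }' /proc/mounts
}

# Example: is the leveldbjni tmpdir on a noexec filesystem?
# mount_opts /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir | grep noexec
```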
Created 03-05-2020 08:25 AM
Hi,
The directory configured for "-Djava.io.tmpdir" is mounted with exec.
The NodeManager runs as the root user, but it does not run as the yarn user from Ambari.
Regards.
Created 03-05-2020 08:14 AM
Thanks for your answers. I have checked the appropriate permissions and apparently everything is fine.
I share the command outputs.
1). The output is the same as on the old nodes; the problem is with four new nodes that were installed with the root user.
# ps -ef | grep NodeManager
/usr/jdk64/jdk1.8.0_112/bin/java -Dproc_nodemanager -Dhdp.version=3.0.1.0-187 -Djava.net.preferIPv4Stack=true -Dhdp.version=3.0.1.0-187 -Dyarn.id.str= -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir -Djavax.security.auth.useSubjectCredsOnly=false -Djava.security.auth.login.config=/etc/hadoop/3.0.1.0-187/0/yarn_nm_jaas.conf -Dsun.security.krb5.rcache=none -Dnm.audit.logger=INFO,NMAUDIT -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.file=hadoop-yarn-nodemanager-pgdacl-hdpdat13.log -Dyarn.home.dir=/usr/hdp/3.0.1.0-187/hadoop-yarn -Dyarn.root.logger=INFO,console -Djava.library.path=:/usr/hdp/3.0.1.0-187/hadoop/lib/native/Linux-amd64-64:/usr/hdp/3.0.1.0-187/hadoop/lib/native/Linux-amd64-64:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/3.0.1.0-187/hadoop/lib/native -Xmx4096m -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=hadoop-yarn-nodemanager-NewHost.log -Dhadoop.home.dir=/usr/hdp/3.0.1.0-187/hadoop -Dhadoop.id.str=yarn -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.yarn.server.nodemanager.NodeManager
2) Permissions for temp directories
# grep 'JAVA_LIBRARY_PATH' /etc/hadoop/conf/yarn-env.sh
export JAVA_LIBRARY_PATH="${JAVA_LIBRARY_PATH}:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir"
# ls -ld /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
drwxrwxrwt+ 36 hdfs hadoop 8192 Mar 4 09:55 /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
# ls -ld /var/lib/ambari-agent/tmp/
drwxrwxrwt+ 6 root root 4096 Mar 3 10:49 /var/lib/ambari-agent/tmp/
3) Owner for content in "hadoop_java_io_tmpdir"
# ls -lart /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/libleveldbjni*
-rwxrwxr-x+ 1 yarn hadoop 752803 Mar 3 12:33 /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/libleveldbjni-64-1-1195312416327962249.8
I have compared the directory permissions with an old node and they appear to be the same. Do you have any additional suggestions?
Created 03-05-2020 08:44 AM
Could you please share the results of the commands below from one problematic node and one working node:
1. namei -l /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
2. mount
Created 03-05-2020 09:01 AM
Hi @venkatsambath,
I share the outputs:
Problematic Node:
# namei -l /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
f: /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
dr-xr-xr-x root root /
drwxr-xr-x root root var
drwxr-xr-x root root lib
drwxr-xr-x root root ambari-agent
drwxrwxrwt root root tmp
drwxrwxrwt hdfs hadoop hadoop_java_io_tmpdir
# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=131760652k,nr_inodes=32940163,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/mapper/system-root on / type xfs (rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=22,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=78969)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
/dev/sdf1 on /grid/2 type ext3 (rw,relatime,stripe=64,data=ordered)
/dev/sdg1 on /grid/3 type ext3 (rw,relatime,stripe=64,data=ordered)
/dev/sdc1 on /grid/11 type ext3 (rw,relatime,stripe=64,data=ordered)
/dev/sde1 on /grid/1 type ext3 (rw,relatime,stripe=64,data=ordered)
/dev/sdm1 on /grid/9 type ext3 (rw,relatime,stripe=64,data=ordered)
/dev/sda2 on /boot type xfs (rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota)
/dev/sdd1 on /grid/12 type ext3 (rw,relatime,stripe=64,data=ordered)
/dev/sdb1 on /grid/10 type ext3 (rw,relatime,stripe=64,data=ordered)
/dev/sdl1 on /grid/8 type ext3 (rw,relatime,stripe=64,data=ordered)
/dev/sdk1 on /grid/7 type ext3 (rw,relatime,stripe=64,data=ordered)
/dev/sdi1 on /grid/5 type ext3 (rw,relatime,stripe=64,data=ordered)
/dev/sdj1 on /grid/6 type ext3 (rw,relatime,stripe=64,data=ordered)
/dev/sdh1 on /grid/4 type ext3 (rw,relatime,stripe=64,data=ordered)
/dev/sdn1 on /grid/0 type ext3 (rw,relatime,stripe=64,data=ordered)
/dev/sda1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro)
/dev/mapper/system-tmp on /tmp type xfs (rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota)
/dev/mapper/system-LVusuarios on /usuarios type xfs (rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota)
/dev/mapper/system-opt on /opt type xfs (rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota)
/dev/mapper/system-var on /var type xfs (rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota)
/dev/mapper/system-home on /home type xfs (rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota)
/dev/mapper/system-usr_hdp on /usr/hdp type xfs (rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota)
/dev/mapper/system-var_log on /var/log type xfs (rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
tmpfs on /run/user/1051 type tmpfs (rw,nosuid,nodev,relatime,size=26354564k,mode=700,uid=1051,gid=1051)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=26354564k,mode=700)
Working Node:
# namei -l /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
f: /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
dr-xr-xr-x root root /
drwxr-xr-x root root var
drwxr-xr-x root root lib
drwxr-xr-x root root ambari-agent
drwxrwxrwt root root tmp
drwxrwxrwt hdfs hadoop hadoop_java_io_tmpdir
# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=96883504k,nr_inodes=24220876,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/mapper/system-root on / type xfs (rw,relatime,attr2,inode64,noquota)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=32,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=16126)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
/dev/sdf1 on /grid/2 type ext3 (rw,relatime,data=ordered)
/dev/sdj1 on /grid/6 type ext3 (rw,relatime,data=ordered)
/dev/sdl1 on /grid/8 type ext3 (rw,relatime,data=ordered)
/dev/sdd1 on /grid/0 type ext3 (rw,relatime,data=ordered)
/dev/sdh1 on /grid/4 type ext3 (rw,relatime,data=ordered)
/dev/sde1 on /grid/1 type ext3 (rw,relatime,data=ordered)
/dev/sdg1 on /grid/3 type ext3 (rw,relatime,data=ordered)
/dev/sdk1 on /grid/7 type ext3 (rw,relatime,data=ordered)
/dev/sdi1 on /grid/5 type ext3 (rw,relatime,data=ordered)
/dev/mapper/system-opt on /opt type xfs (rw,relatime,attr2,inode64,noquota)
/dev/mapper/system-home on /home type xfs (rw,relatime,attr2,inode64,noquota)
/dev/mapper/system-var on /var type xfs (rw,relatime,attr2,inode64,noquota)
/dev/mapper/system-tmp on /tmp type xfs (rw,relatime,attr2,inode64,noquota)
/dev/mapper/system-var--log on /var/log type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sda1 on /boot type xfs (rw,relatime,attr2,inode64,noquota)
/dev/mapper/hadoop-hdp on /usr/hdp type xfs (rw,relatime,attr2,inode64,noquota)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=19379104k,mode=700)
tmpfs on /run/user/1062 type tmpfs (rw,nosuid,nodev,relatime,size=19379104k,mode=700,uid=1062,gid=1062)
Any suggestions?
Created on 03-05-2020 07:55 PM - edited 03-05-2020 08:24 PM
Hi @san_t_o, I wanted to validate whether the mounts and permissions are the same. They look identical except for the additional "sunit=512,swidth=512" on the /var mount, but that can't be the issue.
At this point it's unclear exactly what is being denied permission.
What is the SELinux status? Is it disabled on both the working and non-working nodes? Please run the command below on both:
getenforce
If it's the same on both nodes, can you clear the entries under these directories:
/var/log/hadoop-yarn/nodemanager/recovery-state/yarn-nm-state/* and also /var/lib/ambari-agent/tmp/
If that doesn't work, can you try pointing JAVA_LIBRARY_PATH in yarn-env.sh to a different directory?
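A sketch of that redirect, under stated assumptions: the new directory path below is hypothetical (pick any yarn-writable, exec-allowed location), and it must exist with the right ownership before the NodeManager is restarted.

```shell
# Hypothetical: prepare a fresh, service-writable directory for the JNI
# library extraction, then point yarn-env.sh at it.
prepare_jni_tmp() {
  d=$1; owner=$2; group=$3
  mkdir -p "$d" && chown "$owner:$group" "$d" && chmod 1777 "$d"
}

# prepare_jni_tmp /var/lib/hadoop-yarn/jni-tmp yarn hadoop   # path is hypothetical
# Then in /etc/hadoop/conf/yarn-env.sh:
# export JAVA_LIBRARY_PATH="${JAVA_LIBRARY_PATH}:/var/lib/hadoop-yarn/jni-tmp"
```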
How exactly are you starting the NodeManager? Is it by running commands manually? If yes, can you try running the command with strace -f -s 2000 <command>? (strace captures all syscalls, so we can get more debug info.)
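Once you have an strace capture, the interesting lines are usually the failed syscalls. A small filter (the `denied_calls` helper name is mine) narrows them down:

```shell
# Hypothetical filter: show the last permission-related failures in an
# strace log (EACCES = permission denied, EPERM = operation not permitted).
denied_calls() {
  grep -E '= -1 E(ACCES|PERM)' "$1" | tail -20
}

# Example, assuming the manual start command from this thread:
# strace -f -s 2000 -o /tmp/nm.strace \
#   /usr/hdp/3.0.1.0-187/hadoop-yarn/bin/yarn --config /usr/hdp/3.0.1.0-187/hadoop/conf nodemanager
# denied_calls /tmp/nm.strace
```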
Created 03-06-2020 10:04 AM
Hi @venkatsambath.
I think the same: the mount options are not the problem, because the NM service starts as the root user.
SELinux is disabled on both the old and the new nodes.
Old Node
# getenforce
Disabled
New Node
# getenforce
Disabled
I can't find out for which directory permission is denied. I have recreated the directory structure; however, the error is the same.
I am trying to start the NM service from Ambari and also from the command line as the yarn user. I ran it with the --debug option and the error is:
# export HADOOP_LIBEXEC_DIR=/usr/hdp/3.0.1.0-187/hadoop/libexec && /usr/hdp/3.0.1.0-187/hadoop-yarn/bin/yarn --debug --config /usr/hdp/3.0.1.0-187/hadoop/conf --daemon start nodemanager
DEBUG: hadoop_parse_args: processing --config
DEBUG: hadoop_parse_args: processing --daemon
DEBUG: hadoop_parse_args: processing nodemanager
DEBUG: hadoop_parse: asking caller to skip 5
DEBUG: HADOOP_CONF_DIR=/usr/hdp/3.0.1.0-187/hadoop/conf
DEBUG: shellprofiles: /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-aliyun.sh /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-archive-logs.sh /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-archives.sh /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-aws.sh /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-azure-datalake.sh /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-azure.sh /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-distcp.sh /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-extras.sh /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-gridmix.sh /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-hdfs.sh /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-httpfs.sh /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-kafka.sh /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-kms.sh /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-mapreduce.sh /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-openstack.sh /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-rumen.sh /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-s3guard.sh /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-streaming.sh /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-yarn.sh
DEBUG: Profiles: importing /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-aliyun.sh
DEBUG: Profiles: importing /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-archive-logs.sh
DEBUG: Profiles: importing /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-archives.sh
DEBUG: Profiles: importing /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-aws.sh
DEBUG: Profiles: importing /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-azure-datalake.sh
DEBUG: Profiles: importing /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-azure.sh
DEBUG: Profiles: importing /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-distcp.sh
DEBUG: Profiles: importing /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-extras.sh
DEBUG: Profiles: importing /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-gridmix.sh
DEBUG: Profiles: importing /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-hdfs.sh
DEBUG: HADOOP_SHELL_PROFILES accepted hdfs
DEBUG: Profiles: importing /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-httpfs.sh
DEBUG: Profiles: importing /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-kafka.sh
DEBUG: Profiles: importing /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-kms.sh
DEBUG: Profiles: importing /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-mapreduce.sh
DEBUG: HADOOP_SHELL_PROFILES accepted mapred
DEBUG: Profiles: importing /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-openstack.sh
DEBUG: Profiles: importing /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-rumen.sh
DEBUG: Profiles: importing /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-s3guard.sh
DEBUG: Profiles: importing /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-streaming.sh
DEBUG: Profiles: importing /usr/hdp/3.0.1.0-187/hadoop/libexec/shellprofile.d/hadoop-yarn.sh
DEBUG: HADOOP_SHELL_PROFILES accepted yarn
DEBUG: Initialize CLASSPATH
DEBUG: Rejected colonpath(JAVA_LIBRARY_PATH): /usr/hdp/3.0.1.0-187/hadoop/build/native
DEBUG: Append colonpath(JAVA_LIBRARY_PATH): /usr/hdp/3.0.1.0-187/hadoop/lib/native
DEBUG: Initial CLASSPATH=/usr/hdp/3.0.1.0-187/hadoop/lib/*
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/hadoop/.//*
DEBUG: Profiles: hdfs classpath
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/hadoop-hdfs/./
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/hadoop-hdfs/lib/*
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/hadoop-hdfs/.//*
DEBUG: Profiles: mapred classpath
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/hadoop-mapreduce/lib/*
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/hadoop-mapreduce/.//*
DEBUG: Profiles: yarn classpath
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/hadoop-yarn/./
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/hadoop-yarn/lib/*
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/hadoop-yarn/.//*
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/hadoop-yarn/.//timelineservice/*
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/hadoop-yarn/.//timelineservice/lib/*
DEBUG: Appending YARN_NODEMANAGER_OPTS onto HADOOP_OPTS
DEBUG: No secure classname defined.
DEBUG: Profiles: yarn finalize
DEBUG: HADOOP_OPTS accepted -Dyarn.log.dir=/var/log/hadoop-yarn/yarn
DEBUG: HADOOP_OPTS accepted -Dyarn.log.file=hadoop-root-nodemanager-pgdacl-hdpdat13.log
DEBUG: HADOOP_OPTS accepted -Dyarn.home.dir=/usr/hdp/3.0.1.0-187/hadoop-yarn
DEBUG: HADOOP_OPTS accepted -Dyarn.root.logger=INFO,console
DEBUG: Prepend CLASSPATH: /usr/hdp/3.0.1.0-187/hadoop/conf
DEBUG: Dupe CLASSPATH: /usr/hdp/3.0.1.0-187/tez/conf
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/conf_llap
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/doc
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/hadoop-shim-0.9.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/hadoop-shim-2.8-0.9.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/man
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/tez-api-0.9.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/tez-common-0.9.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/tez-dag-0.9.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/tez-examples-0.9.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/tez-history-parser-0.9.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/tez-javadoc-tools-0.9.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/tez-job-analyzer-0.9.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/tez-mapreduce-0.9.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/tez-protobuf-history-plugin-0.9.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/tez-runtime-internals-0.9.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/tez-runtime-library-0.9.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/tez-tests-0.9.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/tez-yarn-timeline-cache-plugin-0.9.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/tez-yarn-timeline-history-0.9.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/tez-yarn-timeline-history-with-acls-0.9.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/tez-yarn-timeline-history-with-fs-0.9.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/ui
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/async-http-client-1.9.40.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/commons-cli-1.2.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/commons-codec-1.4.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/commons-collections-3.2.2.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/commons-collections4-4.1.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/commons-io-2.4.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/commons-lang-2.6.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/commons-math3-3.1.1.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/gcs-connector-1.9.0.3.0.1.0-187-shaded.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/guava-11.0.2.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/hadoop-aws-3.1.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/hadoop-azure-3.1.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/hadoop-azure-datalake-3.1.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/hadoop-hdfs-client-3.1.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.0.1.0-187.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/jersey-client-1.19.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/jersey-json-1.19.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/jettison-1.3.4.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/jetty-server-9.3.22.v20171030.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/jetty-util-9.3.22.v20171030.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/jsr305-3.0.0.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/metrics-core-3.1.0.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/protobuf-java-2.5.0.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/RoaringBitmap-0.4.9.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/servlet-api-2.5.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/slf4j-api-1.7.10.jar
DEBUG: Append CLASSPATH: /usr/hdp/3.0.1.0-187/tez/lib/tez.tar.gz
DEBUG: Dupe CLASSPATH: /usr/hdp/3.0.1.0-187/tez/conf
DEBUG: HADOOP_OPTS accepted -Djava.library.path=:/usr/hdp/3.0.1.0-187/hadoop/lib/native/Linux-amd64-64:/usr/hdp/3.0.1.0-187/hadoop/lib/native/Linux-amd64-64:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/3.0.1.0-187/hadoop/lib/native
DEBUG: HADOOP_OPTS accepted -Xmx4096m
DEBUG: HADOOP_OPTS declined -Xmx5120m
DEBUG: HADOOP_OPTS accepted -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn
DEBUG: HADOOP_OPTS accepted -Dhadoop.log.file=hadoop-root-nodemanager-pgdacl-hdpdat13.log
DEBUG: HADOOP_OPTS accepted -Dhadoop.home.dir=/usr/hdp/3.0.1.0-187/hadoop
DEBUG: HADOOP_OPTS accepted -Dhadoop.id.str=root
DEBUG: HADOOP_OPTS accepted -Dhadoop.root.logger=INFO,RFA
DEBUG: HADOOP_OPTS accepted -Dhadoop.policy.file=hadoop-policy.xml
DEBUG: HADOOP_OPTS accepted -Dhadoop.security.logger=INFO,NullAppender
ERROR: Cannot set priority of nodemanager process 9633
When clearing the indicated directories (/var/log/hadoop-yarn/nodemanager/recovery-state/yarn-nm-state/* and /var/lib/ambari-agent/tmp/), are the libraries copied back automatically, or is it necessary to copy them manually?
What would be the recommended procedure for changing the JAVA_LIBRARY_PATH directory, considering that I have NodeManagers in production processing applications?