Created 12-15-2015 04:32 PM
Ambari sets up set-hdfs-plugin-env.sh with root:hadoop ownership and 700 permissions when it restarts HDFS. This breaks Ranger integration, because the hdfs user cannot execute the script when the NameNode starts.
I can fix the problem by restarting the NameNode manually, but that means deploying a config change requires a restart from Ambari, then correcting the permissions and restarting manually again.
How can I fix this properly?
Created 12-17-2015 08:45 PM
You can use the following work-around:
On the namenode box,
# cp /etc/hadoop/conf/set-hdfs-plugin-env.sh /etc/hadoop/conf/set-hdfs-plugin-env-permfix.sh
# chown hdfs:hadoop /etc/hadoop/conf/set-hdfs-plugin-env-permfix.sh
This workaround should let you start the NameNode from Ambari without having to correct the permissions manually every time.
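A sketch of why the copy-then-chown works, using stand-in paths under /tmp rather than the real files in /etc/hadoop/conf: cp (without -p) creates the destination with the source's mode filtered through the umask, so a 700 script stays 700, and the subsequent chown makes hdfs the owner, which is what grants it execute permission.

```shell
# Hypothetical stand-in paths; the real workaround targets
# /etc/hadoop/conf and requires root for the chown.
umask 0027
rm -f /tmp/set-hdfs-plugin-env.sh /tmp/set-hdfs-plugin-env-permfix.sh
touch /tmp/set-hdfs-plugin-env.sh
chmod 700 /tmp/set-hdfs-plugin-env.sh            # mimic what Ambari leaves behind
cp /tmp/set-hdfs-plugin-env.sh /tmp/set-hdfs-plugin-env-permfix.sh
stat -c '%a' /tmp/set-hdfs-plugin-env-permfix.sh # 700: the owner keeps rwx
# chown hdfs:hadoop /tmp/set-hdfs-plugin-env-permfix.sh  # owner change is what lets hdfs execute it
```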
Created 12-15-2015 05:44 PM
What HDP version is this?
Created 12-15-2015 07:46 PM
This is HDP 2.2.6, Ambari 2.1.2.1
Created 12-17-2015 08:29 PM
What is the umask value for the root user? It may have caused the file to be created without execute permission for the group.
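As a quick illustration of that suspicion (generic /tmp path, nothing Ambari-specific): a file created via open()/touch starts from mode 0666 and is filtered through the umask, so under umask 0027 it comes out 0640, with no execute bit for anyone and no write bit for the group.

```shell
# Demonstrate how umask 0027 shapes newly created files: 0666 & ~0027 = 0640.
umask 0027
rm -f /tmp/umask-demo
touch /tmp/umask-demo
stat -c '%a' /tmp/umask-demo   # 640: no execute for anyone, group loses write
```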
Created 12-17-2015 09:13 PM
The root user's umask is 0027
Created 12-17-2015 09:09 PM
Yeah, I've thought about that, but I don't like it: it's one more thing hacked together manually that has to be tracked and maintained. I'd much prefer a fix to Ambari's configuration handling.