Archives of Support Questions (Read Only)

This is an archived board for historical reference. Information and links may no longer be available or relevant.
Announcements
This board is archived and read-only for historical reference. To ask a new question, please post a new topic on the appropriate active board.

How can I fix incorrect file permissions for set-hdfs-plugin-env.sh when restarting from Ambari?

Rising Star

When Ambari restarts HDFS, it sets up set-hdfs-plugin-env.sh with owner root:hadoop and mode 700. This breaks the Ranger integration, because the hdfs user cannot execute the script when the NameNode starts.

I can fix the problem by restarting the NameNode manually, but that means that to deploy a config change I have to restart from Ambari, then correct the permissions, then restart manually again.

How can I fix this?

1 ACCEPTED SOLUTION

New Member

You can use the following workaround:

On the NameNode box:

  • log in as root
  • execute:
# cp /etc/hadoop/conf/set-hdfs-plugin-env.sh /etc/hadoop/conf/set-hdfs-plugin-env-permfix.sh
# chown hdfs:hadoop /etc/hadoop/conf/set-hdfs-plugin-env-permfix.sh

  • Then edit /usr/hdp/current/hadoop-client/libexec/hadoop-config.sh and change the references to set-hdfs-plugin-env.sh to "set-hdfs-plugin-env-permfix.sh".

This workaround should let you start the NameNode from Ambari without having to correct the permissions manually every time.
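The steps above can be collected into one script. This is a sketch, not a tested procedure: the paths are the ones given in this post, the `sed` edit is just one possible way to swap the references (review hadoop-config.sh before and after running it), and the `chmod 750` is my addition to make the group-execute bit explicit rather than relying on `cp` preserving the source mode:

```shell
#!/bin/sh
# Sketch of the workaround above. Run as root on the NameNode host.
# Paths are as given in this post; verify them on your cluster first.
fix_plugin_env() {
    conf=$1         # e.g. /etc/hadoop/conf
    libexec=$2      # e.g. /usr/hdp/current/hadoop-client/libexec
    owner=${3:-hdfs}
    group=${4:-hadoop}

    # 1. Copy the script under a name Ambari will not rewrite.
    cp "$conf/set-hdfs-plugin-env.sh" "$conf/set-hdfs-plugin-env-permfix.sh"

    # 2. Make the copy owned and executable by the hdfs user.
    #    (The chmod is my addition; the post only does the chown.)
    chown "$owner:$group" "$conf/set-hdfs-plugin-env-permfix.sh"
    chmod 750 "$conf/set-hdfs-plugin-env-permfix.sh"

    # 3. Repoint hadoop-config.sh at the copy, keeping a backup.
    cp "$libexec/hadoop-config.sh" "$libexec/hadoop-config.sh.bak"
    sed -i 's/set-hdfs-plugin-env\.sh/set-hdfs-plugin-env-permfix.sh/g' \
        "$libexec/hadoop-config.sh"
}
```

Invoked as, e.g., `fix_plugin_env /etc/hadoop/conf /usr/hdp/current/hadoop-client/libexec`; the owner and group arguments default to hdfs:hadoop but can be overridden.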


6 REPLIES


What HDP version is this?

Rising Star

This is HDP 2.2.6 with Ambari 2.1.2.1.

New Member

What is the umask value for the root user? It may have caused the file to be created without execute permission for the group.

Rising Star

The root user's umask is 0027.
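For context, here is a quick way to see what a 0027 umask does to newly created paths (using generic temp files, not the actual Ambari-managed script): plain files lose group write and all access for others, while anything created with a full 777 request, such as a directory, keeps group execute:

```shell
#!/bin/sh
# Illustration of a 0027 umask on newly created paths
# (generic temp files, not the Ambari-managed script).
umask 0027
tmp=$(mktemp -d)

# Plain file creation requests mode 666: 666 & ~027 = 640.
touch "$tmp/f"
stat -c '%a' "$tmp/f"    # prints 640

# Directory creation requests mode 777: 777 & ~027 = 750.
mkdir "$tmp/d"
stat -c '%a' "$tmp/d"    # prints 750
```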

New Member

You can use the following workaround:

On the NameNode box:

  • log in as root
  • execute:
# cp /etc/hadoop/conf/set-hdfs-plugin-env.sh /etc/hadoop/conf/set-hdfs-plugin-env-permfix.sh
# chown hdfs:hadoop /etc/hadoop/conf/set-hdfs-plugin-env-permfix.sh

  • Then edit /usr/hdp/current/hadoop-client/libexec/hadoop-config.sh and change the references to set-hdfs-plugin-env.sh to "set-hdfs-plugin-env-permfix.sh".

This workaround should let you start the NameNode from Ambari without having to correct the permissions manually every time.

Rising Star

Yeah, I've thought about that, but I don't like it... it's one more thing that's hacked together manually and has to be tracked and maintained. I'd much prefer a fix in Ambari's configuration.