
Installing HDP via Ambari without having Sudo/Root privileges


I have a locked-down environment where I can't have sudo/root privileges. Am I still able to install Ambari and HDP? If so, how?

I was able to create a local repo by following the docs. I have also created a user called 'ambari' to run the Ambari service as.

1 ACCEPTED SOLUTION


The ambari user must be configured for sudo access, but the required access can be restricted.

Take careful note of the ambari user's PATH (run echo $PATH as ambari); you may need to change the full paths in the sudo entries below (mkdir, cat, etc.), as Ambari doesn't use fully qualified paths when executing commands.
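To see where the ambari user's PATH actually resolves those commands, something like this can be run in a shell as ambari (e.g. via sudo -iu ambari sh); the command list here is just a sample of the entries below:

```shell
# Show the PATH the shell sees, then resolve a few of the commands
# Ambari invokes without full paths; the sudoers entries should use
# the paths printed here.
echo "$PATH"
for cmd in mkdir cat chown chmod ln; do
  command -v "$cmd"
done
```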

Note that the Defaults lines need to appear after any other Defaults lines in /etc/sudoers (it's easiest to just put them at the end).

# Add Sudo Rules
ambari ALL=(ALL) NOPASSWD:SETENV: /bin/su hdfs *, /usr/bin/su hdfs *, /bin/su ambari-qa *, /usr/bin/su ambari-qa *, /bin/su ranger *, /usr/bin/su ranger *, /bin/su zookeeper *, /usr/bin/su zookeeper *, /bin/su knox *, /usr/bin/su knox *, /bin/su falcon *, /usr/bin/su falcon *, /bin/su ams *, /usr/bin/su ams *, /bin/su flume *, /usr/bin/su flume *, /bin/su hbase *, /usr/bin/su hbase *, /bin/su spark *, /usr/bin/su spark *, /bin/su accumulo *, /usr/bin/su accumulo *, /bin/su hive *, /usr/bin/su hive *, /bin/su hcat *, /usr/bin/su hcat *, /bin/su kafka *, /usr/bin/su kafka *, /bin/su mapred *, /usr/bin/su mapred *, /bin/su oozie *, /usr/bin/su oozie *, /bin/su sqoop *, /usr/bin/su sqoop *, /bin/su storm *, /usr/bin/su storm *, /bin/su tez *, /usr/bin/su tez *, /bin/su atlas *, /usr/bin/su atlas *, /bin/su yarn *, /usr/bin/su yarn *, /bin/su kms *, /usr/bin/su kms *, /bin/su mysql *, /usr/bin/su mysql *, /usr/bin/yum, /usr/bin/zypper, /usr/bin/apt-get, /bin/mkdir, /usr/bin/mkdir, /usr/bin/test, /bin/ln, /usr/bin/ln, /bin/chown, /usr/bin/chown, /bin/chmod, /usr/bin/chmod, /bin/chgrp, /usr/bin/chgrp, /usr/sbin/groupadd, /usr/sbin/groupmod, /usr/sbin/useradd, /usr/sbin/usermod, /bin/cp, /usr/bin/cp, /usr/sbin/setenforce, /usr/bin/test, /usr/bin/stat, /bin/mv, /usr/bin/mv, /bin/sed, /usr/bin/sed, /bin/rm, /usr/bin/rm, /bin/kill, /usr/bin/kill, /bin/readlink, /usr/bin/readlink, /usr/bin/pgrep, /bin/cat, /usr/bin/cat, /usr/bin/unzip, /bin/tar, /usr/bin/tar, /usr/bin/tee, /bin/touch, /usr/bin/touch, /usr/bin/hdp-select, /usr/bin/conf-select, /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh, /usr/lib/hadoop/bin/hadoop-daemon.sh, /usr/lib/hadoop/sbin/hadoop-daemon.sh, /sbin/chkconfig gmond off, /sbin/chkconfig gmetad off, /etc/init.d/httpd *, /sbin/service hdp-gmetad start, /sbin/service hdp-gmond start, /usr/sbin/gmond, /usr/sbin/update-rc.d ganglia-monitor *, /usr/sbin/update-rc.d gmetad *, /etc/init.d/apache2 *, /usr/sbin/service hdp-gmond *, /usr/sbin/service hdp-gmetad *, /sbin/service mysqld *, /usr/bin/python2.6 /var/lib/ambari-agent/data/tmp/validateKnoxStatus.py *, /usr/hdp/current/knox-server/bin/knoxcli.sh *, /usr/hdp/*/ranger-usersync/setup.sh, /usr/bin/ranger-usersync-stop, /usr/bin/ranger-usersync-start, /usr/hdp/*/ranger-admin/setup.sh *, /usr/hdp/*/ranger-knox-plugin/disable-knox-plugin.sh *, /usr/hdp/*/ranger-storm-plugin/disable-storm-plugin.sh *, /usr/hdp/*/ranger-hbase-plugin/disable-hbase-plugin.sh *, /usr/hdp/*/ranger-hdfs-plugin/disable-hdfs-plugin.sh *, /usr/hdp/current/ranger-admin/ranger_credential_helper.py, /usr/hdp/current/ranger-kms/ranger_credential_helper.py
Defaults exempt_group = ambari
Defaults !env_reset,env_delete-=PATH
Defaults: ambari !requiretty
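A broken /etc/sudoers can lock you out of sudo entirely, so it's worth validating the additions before installing them. A sketch (the /tmp staging path and the sudoers.d file name are illustrative; the final copy step requires root on the host):

```shell
# Stage just the Defaults overrides in a separate file so the
# syntax can be checked before touching /etc/sudoers.
cat > /tmp/ambari-sudoers <<'EOF'
Defaults exempt_group = ambari
Defaults !env_reset,env_delete-=PATH
Defaults: ambari !requiretty
EOF

# Confirm all three Defaults lines made it into the staging file.
grep -c '^Defaults' /tmp/ambari-sudoers

# On the cluster host (as root), validate and install:
#   visudo -cf /tmp/ambari-sudoers && cp /tmp/ambari-sudoers /etc/sudoers.d/ambari
```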

You will also want to manually install Ambari Agent on all nodes and modify the agent configuration BEFORE starting Ambari Agent for the first time. Otherwise, the agent will start as root and you will need to manually fix the ownership of several directories (don't let Ambari Server install the agents via SSH). In /etc/ambari-agent/conf/ambari-agent.ini, modify the run_as_user and server properties, then start the agents.
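For reference, the relevant ambari-agent.ini edits look something like the following; the hostname is a placeholder, and the exact section/property placement is from memory of Ambari 2.x-era agents, so verify it against your version's shipped file:

```ini
[server]
; point the agent at your Ambari Server host (placeholder FQDN)
hostname=ambari-server.example.com

[agent]
; run the agent as the non-root ambari user instead of root
run_as_user=ambari
```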

Use the manual registration option when going through the cluster install wizard.

I've used the above several times now with success.


3 REPLIES


Andrew, you need at least sudo access to install the RPMs, and the same goes for Ambari Server setup. Once Ambari has been configured to run as a non-root user, the agents have been configured as well, and the sudo configuration has been deployed to the agent hosts, you won't need root access.
