
Unable to query for supported packages using /usr/bin/hdp-select

I am installing a 6-node cluster, with one node as the ambari-server and the rest as agents.

Ambari version: 2.7.0.0

HDP version: 3.0

I am getting an error at the installation stage, where the agent is unable to query hdp-select for the installed versions.

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/hook.py", line 37, in <module>
    BeforeInstallHook().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 363, in execute
    self.save_component_version_to_structured_out(self.command_name)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 222, in save_component_version_to_structured_out
    stack_select_package_name = stack_select.get_package_name()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 109, in get_package_name
    package = get_packages(PACKAGE_SCOPE_STACK_SELECT, service_name, component_name)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 223, in get_packages
    supported_packages = get_supported_packages()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 147, in get_supported_packages
    raise Fail("Unable to query for supported packages using {0}".format(stack_selector_path))
resource_management.core.exceptions.Fail: Unable to query for supported packages using /usr/bin/hdp-select

Hi @Shraddha Singh,

Are you running the ambari-agent as the root user? And has it been given sudo permission for:

/usr/bin/ambari-python-wrap *
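A quick way to verify (a hedged sketch, assuming the rules live in the usual /etc/sudoers and /etc/sudoers.d locations):

# grep -r 'ambari-python-wrap\|hdp-select' /etc/sudoers /etc/sudoers.d/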

And does the command below run correctly when executed manually?

/usr/bin/hdp-select versions

Hi @Akhil S Naik,

I am getting a 'permission denied' error on two hosts, and on the other hosts it is 'command not found'.
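Those two symptoms usually point at different problems: 'command not found' suggests the hdp-select package was never installed on that host, while 'permission denied' suggests the binary exists but is not executable by the calling user. A hedged sketch for checking both, assuming an RPM-based OS:

# rpm -q hdp-select          # is the package installed at all?
# ls -l /usr/bin/hdp-select  # does the binary exist, and with what permissions?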

Super Mentor

@Shraddha Singh

If you are running the ambari-agent as a non-root user, then you must give that user execute permissions for certain commands, plus the sudoer entries described in:

https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.2.2/bk_ambari-security/content/commands_agent.h...

The sudo entries should be placed in /etc/sudoers by running the visudo command.

Example snippet (notice hdp-select in the command list below):

# Ambari: Hadoop and Configuration Commands
ambari ALL=(ALL) NOPASSWD:SETENV: /usr/bin/hdp-select, /usr/bin/conf-select, /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh, /usr/lib/hadoop/bin/hadoop-daemon.sh, /usr/lib/hadoop/sbin/hadoop-daemon.sh, /usr/bin/ambari-python-wrap *


You can find the full list of required commands in the linked documentation.

You must do this sudo configuration on every node in the cluster. To verify that the configuration has been applied properly, you can su to the ambari user and run sudo -l; double-check that there are no warnings and that the output matches what was just applied.
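For example (an illustrative session, assuming the non-root agent user is named ambari):

# su - ambari
$ sudo -l
User ambari may run the following commands on this host:
    (ALL) SETENV: NOPASSWD: /usr/bin/hdp-select, /usr/bin/conf-select, ...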

Hi @Jay Kumar SenSharma,

I am running the ambari-agent as the root user.

Super Mentor

@Shraddha Singh

Can you please share the "/etc/ambari-agent/conf/ambari-agent.ini" file, or check and share the output of the following commands:

# grep 'run_as_user' /etc/ambari-agent/conf/ambari-agent.ini

# ps -ef | grep main.py


Super Mentor

@Shraddha Singh

You can find a similar issue to yours here:

http://knowledge.teradata.com/KCS/id/KCS015527

run_as_user = root

nxautom+ 6789 1 0 Feb02 ? 00:01:09 python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/worker/main.py /var/opt/microsoft/omsagent/state/automationworker/oms.conf rworkspace:2cdafa97-1d38-4ad7-afc8-68a74145a77c 1.6.3.0
root 97419 97415 9 08:48 pts/0 00:02:02 /usr/bin/python /usr/lib/ambari-agent/lib/ambari_agent/main.py start
root 109563 96746 0 09:11 pts/0 00:00:00 grep --color=auto main.py

This is the output I am getting after running the above commands.

Super Mentor

@Shraddha Singh

Please try this on the problematic host and then check whether it works:

# yum reinstall hdp-select -y
# yum clean all


Also, please share the output of the following commands:

# hdp-select versions
# hdp-select | grep hadoop
# ls -lart /usr/hdp
# ls -lart /usr/hdp/current/


@Jay Kumar SenSharma

While checking the /usr/hdp/current folder I found there are no files available. Can you suggest a way to install HDP correctly so that the files are created and the error is resolved? I tried scp'ing the contents from one of the five nodes that is working correctly, but they arrive as plain directories rather than symlinks.

Mentor

@Shraddha Singh

In your case the current directory contains symlinks to /usr/hdp/3.0.x/{hdp_component}; the listing below is from my HDP 2.6.5 cluster. You would have to copy those directories to /usr/hdp/3.0.x/ and then do the tedious work of recreating the symlinks under /usr/hdp/current as seen below, quite an exercise (a rough sketch of that manual route follows the listing).

# tree /usr/hdp/current/
/usr/hdp/current/
├── atlas-client -> /usr/hdp/2.6.5.0-292/atlas
├── atlas-server -> /usr/hdp/2.6.5.0-292/atlas
├── falcon-client -> /usr/hdp/2.6.5.0-292/falcon
├── falcon-server -> /usr/hdp/2.6.5.0-292/falcon
├── hadoop-client -> /usr/hdp/2.6.5.0-292/hadoop
├── hadoop-hdfs-client -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-datanode -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-journalnode -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-namenode -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-nfs3 -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-portmap -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-secondarynamenode -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-zkfc -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-httpfs -> /usr/hdp/2.6.5.0-292/hadoop-httpfs
├── hadoop-mapreduce-client -> /usr/hdp/2.6.5.0-292/hadoop-mapreduce
├── hadoop-mapreduce-historyserver -> /usr/hdp/2.6.5.0-292/hadoop-mapreduce
├── hadoop-yarn-client -> /usr/hdp/2.6.5.0-292/hadoop-yarn
├── hadoop-yarn-nodemanager -> /usr/hdp/2.6.5.0-292/hadoop-yarn
├── hadoop-yarn-resourcemanager -> /usr/hdp/2.6.5.0-292/hadoop-yarn
├── hadoop-yarn-timelineserver -> /usr/hdp/2.6.5.0-292/hadoop-yarn
├── hbase-client -> /usr/hdp/2.6.5.0-292/hbase
├── hbase-master -> /usr/hdp/2.6.5.0-292/hbase
├── hbase-regionserver -> /usr/hdp/2.6.5.0-292/hbase
├── hive-client -> /usr/hdp/2.6.5.0-292/hive
├── hive-metastore -> /usr/hdp/2.6.5.0-292/hive
├── hive-server2 -> /usr/hdp/2.6.5.0-292/hive
├── hive-server2-hive2 -> /usr/hdp/2.6.5.0-292/hive2
├── hive-webhcat -> /usr/hdp/2.6.5.0-292/hive-hcatalog
├── kafka-broker -> /usr/hdp/2.6.5.0-292/kafka
├── knox-server -> /usr/hdp/2.6.5.0-292/knox
├── livy2-client -> /usr/hdp/2.6.5.0-292/livy2
├── livy2-server -> /usr/hdp/2.6.5.0-292/livy2
├── livy-client -> /usr/hdp/2.6.5.0-292/livy
├── oozie-client -> /usr/hdp/2.6.5.0-292/oozie
├── oozie-server -> /usr/hdp/2.6.5.0-292/oozie
├── phoenix-client -> /usr/hdp/2.6.5.0-292/phoenix
├── phoenix-server -> /usr/hdp/2.6.5.0-292/phoenix
├── pig-client -> /usr/hdp/2.6.5.0-292/pig
├── ranger-admin -> /usr/hdp/2.6.5.0-292/ranger-admin
├── ranger-tagsync -> /usr/hdp/2.6.5.0-292/ranger-tagsync
├── ranger-usersync -> /usr/hdp/2.6.5.0-292/ranger-usersync
├── shc -> /usr/hdp/2.6.5.0-292/shc
├── slider-client -> /usr/hdp/2.6.5.0-292/slider
├── spark2-client -> /usr/hdp/2.6.5.0-292/spark2
├── spark2-historyserver -> /usr/hdp/2.6.5.0-292/spark2
├── spark2-thriftserver -> /usr/hdp/2.6.5.0-292/spark2
├── spark-client -> /usr/hdp/2.6.5.0-292/spark
├── spark-historyserver -> /usr/hdp/2.6.5.0-292/spark
├── spark_llap -> /usr/hdp/2.6.5.0-292/spark_llap
├── spark-thriftserver -> /usr/hdp/2.6.5.0-292/spark
├── sqoop-client -> /usr/hdp/2.6.5.0-292/sqoop
├── sqoop-server -> /usr/hdp/2.6.5.0-292/sqoop
├── storm-slider-client -> /usr/hdp/2.6.5.0-292/storm-slider-client
├── tez-client -> /usr/hdp/2.6.5.0-292/tez
├── zeppelin-server -> /usr/hdp/2.6.5.0-292/zeppelin
├── zookeeper-client -> /usr/hdp/2.6.5.0-292/zookeeper
└── zookeeper-server -> /usr/hdp/2.6.5.0-292/zookeeper
55 directories, 2 files
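If you did attempt that manual route, a rough sketch (the hostname workingnode is hypothetical; rsync -a preserves symlinks, unlike plain scp -r, and hdp-select set all rebuilds the /usr/hdp/current links for a given build, e.g. the 3.0.1.0-187 mentioned later in this thread):

# rsync -a workingnode:/usr/hdp/3.0.1.0-187/ /usr/hdp/3.0.1.0-187/   # copy the build, keeping symlinks intact
# hdp-select set all 3.0.1.0-187                                     # point every /usr/hdp/current link at that build
# ls -lart /usr/hdp/current/                                         # verify the links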

That said, if this is a test cluster, and most probably a single node, I would advise you, if possible, to re-install it completely and start from a clean environment.

HTH

I have reinstalled hdp-select and run yum clean all; below is the output I am getting:

# hdp-select versions

Traceback (most recent call last):
  File "/bin/hdp-select", line 456, in <module>
    printVersions()
  File "/bin/hdp-select", line 295, in printVersions
    for f in os.listdir(root):
OSError: [Errno 2] No such file or directory: '/usr/hdp'

# hdp-select | grep hadoop

Traceback (most recent call last):
  File "/bin/hdp-select", line 456, in <module>
    printVersions()
  File "/bin/hdp-select", line 295, in printVersions
    for f in os.listdir(root):
OSError: [Errno 2] No such file or directory: '/usr/hdp'
[root@devaz01 ~]# hdp-select | grep hadoop
hadoop-client - None
hadoop-hdfs-client - None
hadoop-hdfs-datanode - None
hadoop-hdfs-journalnode - None
hadoop-hdfs-namenode - None
hadoop-hdfs-nfs3 - None
hadoop-hdfs-portmap - None
hadoop-hdfs-secondarynamenode - None
hadoop-hdfs-zkfc - None
hadoop-httpfs - None
hadoop-mapreduce-client - None
hadoop-mapreduce-historyserver - None
hadoop-yarn-client - None
hadoop-yarn-nodemanager - None
hadoop-yarn-registrydns - None
hadoop-yarn-resourcemanager - None
hadoop-yarn-timelinereader - None
hadoop-yarn-timelineserver - None

# ls -lart /usr/hdp

ls: cannot access /usr/hdp: No such file or directory

# ls -lart /usr/hdp/current/

ls: cannot access /usr/hdp/current/: No such file or directory

Super Mentor

@Shraddha Singh

While checking the /usr/hdp/current folder I found there are no files available.


This is normal. If your cluster installation has not completed successfully via Ambari, then the HDP binaries will not have been installed on the cluster hosts, and hence you will not see the directories inside "/usr/hdp".

Once the HDP installation has started/completed, you will see the hdp-select command returning results by reading the contents of the "/usr/hdp" directory.
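At that point you would expect something like this (illustrative output from a healthy host, assuming the 3.0.1.0-187 build mentioned below):

# ls /usr/hdp
3.0.1.0-187  current
# hdp-select versions
3.0.1.0-187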

I deleted the conf file under /usr/hdp/3.0.1.0-187/hadoop/, reinstalled the services using the Ambari UI, and it worked.

Explorer

For me, the solution to this issue was simply reinstalling the ambari-agent. Maybe it had been only partially installed somehow.
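For reference, a hedged sketch of that fix on an RPM-based host:

# yum reinstall ambari-agent -y
# ambari-agent restart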