
Unable to query for supported packages using /usr/bin/hdp-select

Contributor

I am installing a 6-node cluster with one node as the ambari-server and the rest as agents.

Ambari version: 2.7.0.0

HDP version: 3.0

I am getting an error at the installation stage, where the agent is unable to query hdp-select for the supported package versions.

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/hook.py", line 37, in <module>
    BeforeInstallHook().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 363, in execute
    self.save_component_version_to_structured_out(self.command_name)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 222, in save_component_version_to_structured_out
    stack_select_package_name = stack_select.get_package_name()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 109, in get_package_name
    package = get_packages(PACKAGE_SCOPE_STACK_SELECT, service_name, component_name)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 223, in get_packages
    supported_packages = get_supported_packages()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 147, in get_supported_packages
    raise Fail("Unable to query for supported packages using {0}".format(stack_selector_path))
resource_management.core.exceptions.Fail: Unable to query for supported packages using /usr/bin/hdp-select
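
For context, the agent here is just shelling out to the stack selector script, so the failure can be reproduced by hand on any affected host. A quick sanity check, assuming the default package layout:

# ls -l /usr/bin/hdp-select      # does the selector script exist at the path the agent uses?
# rpm -q hdp-select              # is the hdp-select package installed? (RHEL/CentOS)
# /usr/bin/hdp-select versions   # the exact query that raises the Fail above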
14 REPLIES

Master Mentor

@Shraddha Singh

In your case, the /usr/hdp/current directory should hold symlinks to /usr/hdp/3.0.x/{hdp_component}; the listing below is from my HDP 2.6.5 cluster. You would have to copy those directories to /usr/hdp/3.0.x/ and then do the tedious work of recreating the symlinks under /usr/hdp/current as shown below, quite an exercise. If this is a test cluster, and most probably a single node, see my advice further down.

# tree /usr/hdp/current/
/usr/hdp/current/
├── atlas-client -> /usr/hdp/2.6.5.0-292/atlas
├── atlas-server -> /usr/hdp/2.6.5.0-292/atlas
├── falcon-client -> /usr/hdp/2.6.5.0-292/falcon
├── falcon-server -> /usr/hdp/2.6.5.0-292/falcon
├── hadoop-client -> /usr/hdp/2.6.5.0-292/hadoop
├── hadoop-hdfs-client -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-datanode -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-journalnode -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-namenode -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-nfs3 -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-portmap -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-secondarynamenode -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-hdfs-zkfc -> /usr/hdp/2.6.5.0-292/hadoop-hdfs
├── hadoop-httpfs -> /usr/hdp/2.6.5.0-292/hadoop-httpfs
├── hadoop-mapreduce-client -> /usr/hdp/2.6.5.0-292/hadoop-mapreduce
├── hadoop-mapreduce-historyserver -> /usr/hdp/2.6.5.0-292/hadoop-mapreduce
├── hadoop-yarn-client -> /usr/hdp/2.6.5.0-292/hadoop-yarn
├── hadoop-yarn-nodemanager -> /usr/hdp/2.6.5.0-292/hadoop-yarn
├── hadoop-yarn-resourcemanager -> /usr/hdp/2.6.5.0-292/hadoop-yarn
├── hadoop-yarn-timelineserver -> /usr/hdp/2.6.5.0-292/hadoop-yarn
├── hbase-client -> /usr/hdp/2.6.5.0-292/hbase
├── hbase-master -> /usr/hdp/2.6.5.0-292/hbase
├── hbase-regionserver -> /usr/hdp/2.6.5.0-292/hbase
├── hive-client -> /usr/hdp/2.6.5.0-292/hive
├── hive-metastore -> /usr/hdp/2.6.5.0-292/hive
├── hive-server2 -> /usr/hdp/2.6.5.0-292/hive
├── hive-server2-hive2 -> /usr/hdp/2.6.5.0-292/hive2
├── hive-webhcat -> /usr/hdp/2.6.5.0-292/hive-hcatalog
├── kafka-broker -> /usr/hdp/2.6.5.0-292/kafka
├── knox-server -> /usr/hdp/2.6.5.0-292/knox
├── livy2-client -> /usr/hdp/2.6.5.0-292/livy2
├── livy2-server -> /usr/hdp/2.6.5.0-292/livy2
├── livy-client -> /usr/hdp/2.6.5.0-292/livy
├── oozie-client -> /usr/hdp/2.6.5.0-292/oozie
├── oozie-server -> /usr/hdp/2.6.5.0-292/oozie
├── phoenix-client -> /usr/hdp/2.6.5.0-292/phoenix
├── phoenix-server -> /usr/hdp/2.6.5.0-292/phoenix
├── pig-client -> /usr/hdp/2.6.5.0-292/pig
├── ranger-admin -> /usr/hdp/2.6.5.0-292/ranger-admin
├── ranger-tagsync -> /usr/hdp/2.6.5.0-292/ranger-tagsync
├── ranger-usersync -> /usr/hdp/2.6.5.0-292/ranger-usersync
├── shc -> /usr/hdp/2.6.5.0-292/shc
├── slider-client -> /usr/hdp/2.6.5.0-292/slider
├── spark2-client -> /usr/hdp/2.6.5.0-292/spark2
├── spark2-historyserver -> /usr/hdp/2.6.5.0-292/spark2
├── spark2-thriftserver -> /usr/hdp/2.6.5.0-292/spark2
├── spark-client -> /usr/hdp/2.6.5.0-292/spark
├── spark-historyserver -> /usr/hdp/2.6.5.0-292/spark
├── spark_llap -> /usr/hdp/2.6.5.0-292/spark_llap
├── spark-thriftserver -> /usr/hdp/2.6.5.0-292/spark
├── sqoop-client -> /usr/hdp/2.6.5.0-292/sqoop
├── sqoop-server -> /usr/hdp/2.6.5.0-292/sqoop
├── storm-slider-client -> /usr/hdp/2.6.5.0-292/storm-slider-client
├── tez-client -> /usr/hdp/2.6.5.0-292/tez
├── zeppelin-server -> /usr/hdp/2.6.5.0-292/zeppelin
├── zookeeper-client -> /usr/hdp/2.6.5.0-292/zookeeper
└── zookeeper-server -> /usr/hdp/2.6.5.0-292/zookeeper
55 directories, 2 files
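
Rather than recreating every symlink by hand, note that hdp-select itself can rebuild the links once the versioned directory exists under /usr/hdp. A minimal sketch, assuming your build is 3.0.1.0-187 (substitute whatever ls /usr/hdp shows on your hosts):

# hdp-select versions              # confirm the installed build is registered
# hdp-select set all 3.0.1.0-187   # repoint every component link under /usr/hdp/current
# tree /usr/hdp/current/           # verify the links now resolve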

I would advise you, if possible, to re-install completely so that you have a clean environment.
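
If you do go that route, a rough sketch of the per-host cleanup before re-installing (destructive, and the package globs are only illustrative; extend them to every HDP service you had installed):

# ambari-agent stop
# yum remove -y hdp-select 'hadoop*' 'zookeeper*'   # plus any other HDP packages
# rm -rf /usr/hdp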

HTH

Contributor

I have reinstalled and run yum clean all; below is the output I am getting:

# hdp-select versions

Traceback (most recent call last):
  File "/bin/hdp-select", line 456, in <module>
    printVersions()
  File "/bin/hdp-select", line 295, in printVersions
    for f in os.listdir(root):
OSError: [Errno 2] No such file or directory: '/usr/hdp'

# hdp-select | grep hadoop

Traceback (most recent call last):
  File "/bin/hdp-select", line 456, in <module>
    printVersions()
  File "/bin/hdp-select", line 295, in printVersions
    for f in os.listdir(root):
OSError: [Errno 2] No such file or directory: '/usr/hdp'

[root@devaz01 ~]# hdp-select | grep hadoop
hadoop-client - None
hadoop-hdfs-client - None
hadoop-hdfs-datanode - None
hadoop-hdfs-journalnode - None
hadoop-hdfs-namenode - None
hadoop-hdfs-nfs3 - None
hadoop-hdfs-portmap - None
hadoop-hdfs-secondarynamenode - None
hadoop-hdfs-zkfc - None
hadoop-httpfs - None
hadoop-mapreduce-client - None
hadoop-mapreduce-historyserver - None
hadoop-yarn-client - None
hadoop-yarn-nodemanager - None
hadoop-yarn-registrydns - None
hadoop-yarn-resourcemanager - None
hadoop-yarn-timelinereader - None
hadoop-yarn-timelineserver - None

# ls -lart /usr/hdp

ls: cannot access /usr/hdp: No such file or directory

# ls -lart /usr/hdp/current/

ls: cannot access /usr/hdp/current/: No such file or directory
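
A useful next check here would be whether the HDP repository and packages are visible to yum at all (repo naming varies; look in /etc/yum.repos.d/ for the repo file Ambari generated):

# yum repolist enabled | grep -i hdp   # is the HDP-3.0 repo configured and enabled?
# yum list installed | grep -i hdp     # were any HDP packages actually installed?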

Master Mentor

@Shraddha Singh

While checking the /usr/hdp/current folder, there are no files available.

This is normal. If your cluster installation has not completed successfully via Ambari, then the HDP binaries will not have been installed on the cluster hosts, and hence you will not see the directories inside "/usr/hdp".

Once the HDP installation has started and completed, you will see the hdp-select command returning results by looking at the contents of the "/usr/hdp" directory.
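
In other words, on a host where the installation has completed you would expect something like the following (the build number is illustrative):

# ls /usr/hdp
3.0.1.0-187  current

# hdp-select versions
3.0.1.0-187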

Contributor

I deleted the conf file under /usr/hdp/3.0.1.0-187/hadoop/, reinstalled the services using the Ambari UI, and it worked.
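
For anyone repeating this, a slightly safer variant is to move the config aside instead of deleting it outright (same build path as above), then re-run the failed installs from the Ambari UI:

# mv /usr/hdp/3.0.1.0-187/hadoop/conf /usr/hdp/3.0.1.0-187/hadoop/conf.bak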

Explorer

For me, the solution to this issue was simply reinstalling the ambari-agent; maybe it had been only partially installed somehow.
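
If you want to try the same, the reinstall itself is short (RHEL/CentOS):

# ambari-agent stop
# yum reinstall -y ambari-agent
# ambari-agent start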