Services not starting on HDP sandbox 2.6.3

Super Collaborator

For the host machine configuration and other screenshots, please refer to the background thread.

The network is set to 'NAT'. Note that the 'Bridged Adapter' setting doesn't work: I can access neither Ambari nor the VM via PuTTY. With NAT, I am able to log in to Ambari at http://localhost:8080 and also connect via PuTTY:

[root@sandbox-hdp ~]#
[root@sandbox-hdp ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:741159 errors:0 dropped:0 overruns:0 frame:0
          TX packets:535534 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:288135779 (274.7 MiB)  TX bytes:351577113 (335.2 MiB)
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:40787371 errors:0 dropped:0 overruns:0 frame:0
          TX packets:40787371 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:27903918067 (25.9 GiB)  TX bytes:27903918067 (25.9 GiB)
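
For reference, with the default NAT setup the sandbox is reached through ports forwarded to the host; the exact port numbers below are assumptions based on the standard sandbox configuration rather than anything shown in this thread:

# Ambari UI forwarded to the host
curl -I http://localhost:8080
# SSH into the sandbox (the forwarded SSH port is assumed to be 2222)
ssh -p 2222 root@localhost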

On logging in to Ambari, all the services were stopped, including HDFS.

(Screenshot 42877-ambari-all-services-stopped.jpg: all Ambari services stopped)

I tried starting HDFS but received errors:

stderr: /var/lib/ambari-agent/data/errors-489.txt

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 73, in <module>
    HdfsClient().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 367, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 35, in install
    import params
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/params.py", line 25, in <module>
    from params_linux import *
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/params_linux.py", line 391, in <module>
    lzo_packages = get_lzo_packages(stack_version_unformatted)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_lzo_packages.py", line 45, in get_lzo_packages
    lzo_packages += [script_instance.format_package_name("hadooplzo_${stack_version}"),
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 538, in format_package_name
    raise Fail("Cannot match package for regexp name {0}. Available packages: {1}".format(name, self.available_packages_in_repos))
resource_management.core.exceptions.Fail: Cannot match package for regexp name hadooplzo_${stack_version}. Available packages: ['atlas-metadata_2_6_3_0_235', 'atlas-metadata_2_6_3_0_235-falcon-plugin', 'atlas-metadata_2_6_3_0_235-hive-plugin', 'atlas-metadata_2_6_3_0_235-sqoop-plugin', 'atlas-metadata_2_6_3_0_235-storm-plugin', 'bigtop-jsvc', 'bigtop-tomcat', 'datafu_2_6_3_0_235', 'falcon_2_6_3_0_235', 'flume_2_6_3_0_235', 'hadoop_2_6_3_0_235', 'hadoop_2_6_3_0_235-client', 'hadoop_2_6_3_0_235-hdfs', 'hadoop_2_6_3_0_235-libhdfs', 'hadoop_2_6_3_0_235-mapreduce', 'hadoop_2_6_3_0_235-yarn', 'hbase_2_6_3_0_235', 'hdp-select', 'hive2_2_6_3_0_235', 'hive2_2_6_3_0_235-jdbc', 'hive_2_6_3_0_235', 'hive_2_6_3_0_235-hcatalog', 'hive_2_6_3_0_235-jdbc', 'hive_2_6_3_0_235-webhcat', 'hue', 'hue-beeswax', 'hue-common', 'hue-hcatalog', 'hue-oozie', 'hue-pig', 'hue-server', 'kafka_2_6_3_0_235', 'knox_2_6_3_0_235', 'livy2_2_6_3_0_235', 'oozie_2_6_3_0_235', 'oozie_2_6_3_0_235-client', 'oozie_2_6_3_0_235-common', 'oozie_2_6_3_0_235-sharelib', 'oozie_2_6_3_0_235-sharelib-distcp', 'oozie_2_6_3_0_235-sharelib-hcatalog', 'oozie_2_6_3_0_235-sharelib-hive', 'oozie_2_6_3_0_235-sharelib-hive2', 'oozie_2_6_3_0_235-sharelib-mapreduce-streaming', 'oozie_2_6_3_0_235-sharelib-pig', 'oozie_2_6_3_0_235-sharelib-spark', 'oozie_2_6_3_0_235-sharelib-sqoop', 'oozie_2_6_3_0_235-webapp', 'phoenix_2_6_3_0_235', 'pig_2_6_3_0_235', 'ranger_2_6_3_0_235-admin', 'ranger_2_6_3_0_235-atlas-plugin', 'ranger_2_6_3_0_235-hbase-plugin', 'ranger_2_6_3_0_235-hdfs-plugin', 'ranger_2_6_3_0_235-hive-plugin', 'ranger_2_6_3_0_235-kafka-plugin', 'ranger_2_6_3_0_235-kms', 'ranger_2_6_3_0_235-knox-plugin', 'ranger_2_6_3_0_235-solr-plugin', 'ranger_2_6_3_0_235-storm-plugin', 'ranger_2_6_3_0_235-tagsync', 'ranger_2_6_3_0_235-usersync', 'ranger_2_6_3_0_235-yarn-plugin', 'shc_2_6_3_0_235', 'slider_2_6_3_0_235', 'spark2_2_6_3_0_235', 'spark2_2_6_3_0_235-python', 'spark2_2_6_3_0_235-yarn-shuffle', 'spark_2_6_3_0_235', 'spark_2_6_3_0_235-python', 'spark_2_6_3_0_235-yarn-shuffle', 'spark_llap_2_6_3_0_235', 'sqoop_2_6_3_0_235', 'storm_2_6_3_0_235', 'storm_2_6_3_0_235-slider-client', 'tez_2_6_3_0_235', 'tez_hive2_2_6_3_0_235', 'zeppelin_2_6_3_0_235', 'zookeeper_2_6_3_0_235', 'zookeeper_2_6_3_0_235-server', 'extjs']

stdout: /var/lib/ambari-agent/data/output-489.txt

2017-12-04 08:01:38,824 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2017-12-04 08:01:38,825 - Using hadoop conf dir: /usr/hdp/2.6.3.0-235/hadoop/conf
2017-12-04 08:01:38,826 - Group['livy'] {}
2017-12-04 08:01:38,829 - Group['spark'] {}
2017-12-04 08:01:38,830 - Group['ranger'] {}
2017-12-04 08:01:38,830 - Group['hdfs'] {}
2017-12-04 08:01:38,830 - Group['zeppelin'] {}
2017-12-04 08:01:38,830 - Group['hadoop'] {}
2017-12-04 08:01:38,830 - Group['users'] {}
2017-12-04 08:01:38,831 - Group['knox'] {}
2017-12-04 08:01:38,831 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,832 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,835 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,835 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,836 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2017-12-04 08:01:38,837 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,839 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2017-12-04 08:01:38,840 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger'], 'uid': None}
2017-12-04 08:01:38,841 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2017-12-04 08:01:38,842 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop'], 'uid': None}
2017-12-04 08:01:38,843 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,844 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,847 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2017-12-04 08:01:38,848 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,849 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,850 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2017-12-04 08:01:38,852 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,853 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,853 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,857 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,860 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,861 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,864 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-04 08:01:38,868 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-12-04 08:01:38,891 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2017-12-04 08:01:38,891 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-12-04 08:01:38,892 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-04 08:01:38,896 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-04 08:01:38,897 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2017-12-04 08:01:38,919 - call returned (0, '1002')
2017-12-04 08:01:38,920 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1002'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-12-04 08:01:38,940 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1002'] due to not_if
2017-12-04 08:01:38,940 - Group['hdfs'] {}
2017-12-04 08:01:38,941 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hdfs']}
2017-12-04 08:01:38,941 - FS Type: 
2017-12-04 08:01:38,941 - Directory['/etc/hadoop'] {'mode': 0755}
2017-12-04 08:01:38,963 - File['/usr/hdp/2.6.3.0-235/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-12-04 08:01:38,963 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-12-04 08:01:38,985 - Repository['HDP-2.6-repo-1'] {'append_to_file': False, 'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.3.0', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2017-12-04 08:01:38,997 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.3.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-12-04 08:01:38,999 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2017-12-04 08:01:39,000 - Repository['HDP-UTILS-1.1.0.21-repo-1'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2017-12-04 08:01:39,006 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.3.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.21-repo-1]\nname=HDP-UTILS-1.1.0.21-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-12-04 08:01:39,007 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2017-12-04 08:01:39,007 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-04 08:01:39,200 - Skipping installation of existing package unzip
2017-12-04 08:01:39,200 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-04 08:01:39,316 - Skipping installation of existing package curl
2017-12-04 08:01:39,316 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-04 08:01:39,420 - Skipping installation of existing package hdp-select
2017-12-04 08:01:39,422 - The repository with version 2.6.3.0-235 for this command has been marked as resolved. It will be used to report the version of the component which was installed
2017-12-04 08:01:39,739 - Using hadoop conf dir: /usr/hdp/2.6.3.0-235/hadoop/conf
2017-12-04 08:01:39,750 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2017-12-04 08:01:39,754 - Using hadoop conf dir: /usr/hdp/2.6.3.0-235/hadoop/conf
2017-12-04 08:01:39,760 - Command repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1
2017-12-04 08:01:39,760 - Applicable repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1
2017-12-04 08:01:39,769 - Looking for matching packages in the following repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1
2017-12-04 08:02:41,328 - No package found for hadooplzo_${stack_version}(hadooplzo_(\d|_)+$)
2017-12-04 08:02:41,329 - The repository with version 2.6.3.0-235 for this command has been marked as resolved. It will be used to report the version of the component which was installed

Command failed after 1 tries
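
The key failure above is the package lookup: Ambari expands hadooplzo_${stack_version} into a regular expression (hadooplzo_(\d|_)+$) and matches it against the packages advertised by the enabled repos, and no hadooplzo build is listed. A quick sanity check from the sandbox shell (a sketch, assuming the same yum repos that the log shows are enabled):

# list what the enabled repos advertise for LZO (empty if the package is missing)
yum list available 'hadooplzo*'
# confirm which repositories are actually enabled
yum repolist enabled
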
1 ACCEPTED SOLUTION

Super Guru

@Kaliyug Antagonist,

Can you try installing the hadooplzo package on all the nodes and then restarting the services?

yum install -y hadooplzo hadooplzo-native
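
After the packages install, a quick way to verify they are present before restarting the services from Ambari (the versioned names are an assumption based on the 2_6_3_0_235 builds listed in your log):

# confirm the LZO packages are now installed
rpm -qa | grep -i hadooplzo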

Thanks,

Aditya


3 REPLIES


Super Collaborator

That worked 🙂
I never faced this issue in previous versions of the sandbox. Is this a new post-installation step, a sporadic package-related error, or something else?

Explorer

@asirna 

I am facing a similar issue, but I am unable to install the hadoop-lzo package because it is not available in the RHEL/CentOS repos. Also, the HDP public repo is no longer available; it now returns a 401 Unauthorized error.

 

]# yum install -y hadooplzo hadooplzo-native
Loaded plugins: amazon-id, product-id, search-disabled-repos, subscription-manager

This system is not registered with an entitlement server. You can use subscription-manager to register.

No package hadooplzo available.
No package hadooplzo-native available.
Error: Nothing to do

 

Any suggestions to fix this?