
Unable to install Solr in HDP 2.6 Sandbox for Docker

New Contributor

I tried to install Solr, which fails. Please find the output of sandbox-version, 'uname -a', and my docker version below.

	[root@sandbox ~]# sandbox-version
	Sandbox information:
	Created on: 05_05_2017_13_16_20 for
	Hadoop stack version:  Hadoop 2.7.3.2.6.0.3-8
	Ambari Version: 2.5.0.5-1
	Ambari Hash: 0b5e975972e7a0b265e87b2e38eefde9039ef44c
	Ambari build:  Release : 1
	Java version:  1.8.0_131
	OS Version:  CentOS release 6.9 (Final)

[root@sandbox ~]# uname -a
Linux sandbox.hortonworks.com 4.9.27-moby #1 SMP Thu May 11 04:01:18 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

$ docker version
Client:
 Version:      17.03.1-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Tue Mar 28 00:40:02 2017
 OS/Arch:      darwin/amd64

Server:
 Version:      17.03.1-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Fri Mar 24 00:00:50 2017
 OS/Arch:      linux/amd64
 Experimental: true



I saw someone post a similar issue with HDP 2.5 earlier this year.

Is there a workaround available? Or is the conclusion that I cannot use Solr with the HDP 2.6 sandbox on Docker?

6 REPLIES

Re: Unable to install Solr in HDP 2.6 Sandbox for Docker

New Contributor

Hey, I have the same problem. Did you solve it?

I'd be thankful for any pointers.


Re: Unable to install Solr in HDP 2.6 Sandbox for Docker

New Contributor

No, I did not solve it. I tried to work around it by using the VM image, which did not work either. So I now use a pure Solr Docker image to get familiar with Solr.
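
In case it helps, here is roughly what I mean. This is a minimal sketch using the official Solr image from Docker Hub; the container name and core name are just examples:

    # Run the official Solr image and expose the admin UI on port 8983
    docker run --name my_solr -d -p 8983:8983 solr

    # Create a core to experiment with (runs inside the container)
    docker exec -it my_solr solr create_core -c gettingstarted

The admin UI is then reachable at http://localhost:8983/solr/.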


Re: Unable to install Solr in HDP 2.6 Sandbox for Docker

New Contributor

Really? :( Thanks. Maybe you have a tutorial about how to install Solr and how to configure it to store indexed images or files in HDFS?

Thanks a lot ;)
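
For anyone who lands here with the same question: the Solr Reference Guide has a section on running Solr on HDFS. A minimal sketch, assuming SolrCloud mode and a NameNode at sandbox.hortonworks.com:8020 (both are assumptions; adjust for your cluster):

    # Start Solr in cloud mode with indexes stored in HDFS
    # (directory factory and lock type per the "Running Solr on HDFS" guide;
    #  the HDFS path below is an example, not a sandbox default)
    bin/solr start -c \
      -Dsolr.directoryFactory=HdfsDirectoryFactory \
      -Dsolr.lock.type=hdfs \
      -Dsolr.hdfs.home=hdfs://sandbox.hortonworks.com:8020/solr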

Re: Unable to install Solr in HDP 2.6 Sandbox for Docker

Contributor

Same issue: the install of Solr on the Docker HDP sandbox fails.

Sandbox information:
Created on: 28_07_2017_14_21_49 for
Hadoop stack version:  Hadoop 2.7.3.2.6.1.0-129
Ambari Version: 2.5.1.0-159
Ambari Hash: 7a4fc4d6ab88f423fb6e6d960c26a961572d0507
Ambari build:  Release : 159
Java version:  1.8.0_141
OS Version:  CentOS release 6.9 (Final)

Errors:

stderr: 
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/SOLR/5.5.2.2.5/package/scripts/solr.py", line 101, in <module>
    Solr().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/SOLR/5.5.2.2.5/package/scripts/solr.py", line 17, in install
    self.install_packages(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 693, in install_packages
    retry_count=agent_stack_retry_count)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
    self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 51, in install_package
    self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 86, in checked_call_with_retries
    return self._call_with_retries(cmd, is_checked=True, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 98, in _call_with_retries
    code, out = func(cmd, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install lucidworks-hdpsearch' returned 1. Error: Cannot retrieve repository metadata (repomd.xml) for repository: sandbox. Please verify its path and try again
 stdout:
2017-11-01 14:33:57,361 - Stack Feature Version Info: stack_version=2.6, version=2.6.1.0-129, current_cluster_version=2.6.1.0-129 -> 2.6.1.0-129
2017-11-01 14:33:57,366 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-11-01 14:33:57,374 - Group['livy'] {}
2017-11-01 14:33:57,381 - Group['spark'] {}
2017-11-01 14:33:57,387 - Group['solr'] {}
2017-11-01 14:33:57,388 - Group['ranger'] {}
2017-11-01 14:33:57,388 - Group['zeppelin'] {}
2017-11-01 14:33:57,389 - Group['hadoop'] {}
2017-11-01 14:33:57,389 - Group['nifi'] {}
2017-11-01 14:33:57,389 - Group['users'] {}
2017-11-01 14:33:57,389 - Group['knox'] {}
2017-11-01 14:33:57,390 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-01 14:33:57,391 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-01 14:33:57,392 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-01 14:33:57,393 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-01 14:33:57,394 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-01 14:33:57,395 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-11-01 14:33:57,396 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-11-01 14:33:57,396 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger']}
2017-11-01 14:33:57,397 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-11-01 14:33:57,409 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop']}
2017-11-01 14:33:57,417 - User['nifi'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-01 14:33:57,423 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-01 14:33:57,425 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-01 14:33:57,426 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-11-01 14:33:57,428 - User['solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-01 14:33:57,429 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-01 14:33:57,430 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-01 14:33:57,431 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-01 14:33:57,433 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-01 14:33:57,436 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-01 14:33:57,439 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-01 14:33:57,441 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-01 14:33:57,442 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-01 14:33:57,443 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-11-01 14:33:57,444 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-01 14:33:57,457 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-11-01 14:33:57,745 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-11-01 14:33:57,745 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-11-01 14:33:57,747 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-01 14:33:57,750 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-11-01 14:33:57,967 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-11-01 14:33:57,967 - Group['hdfs'] {}
2017-11-01 14:33:57,968 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2017-11-01 14:33:57,969 - FS Type: 
2017-11-01 14:33:57,969 - Directory['/etc/hadoop'] {'mode': 0755}
2017-11-01 14:33:58,007 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-11-01 14:33:58,009 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-11-01 14:33:58,046 - Initializing 2 repositories
2017-11-01 14:33:58,048 - Repository['HDP-2.6'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.1.0', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-11-01 14:33:58,069 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.6]\nname=HDP-2.6\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.1.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-11-01 14:33:58,071 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-11-01 14:33:58,080 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-11-01 14:33:58,083 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-01 14:33:58,521 - Skipping installation of existing package unzip
2017-11-01 14:33:58,521 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-01 14:33:58,831 - Skipping installation of existing package curl
2017-11-01 14:33:58,832 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-01 14:33:59,144 - Skipping installation of existing package hdp-select
2017-11-01 14:33:59,863 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-01 14:33:59,866 - Version 2.6.1.0-129 was provided as effective cluster version.  Using package version 2_6_1_0_129
2017-11-01 14:33:59,869 - Package['lucidworks-hdpsearch'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-11-01 14:34:00,363 - Installing package lucidworks-hdpsearch ('/usr/bin/yum -d 0 -e 0 -y install lucidworks-hdpsearch')
2017-11-01 14:34:02,131 - Execution of '/usr/bin/yum -d 0 -e 0 -y install lucidworks-hdpsearch' returned 1. Error: Cannot retrieve repository metadata (repomd.xml) for repository: sandbox. Please verify its path and try again
2017-11-01 14:34:02,131 - Failed to install package lucidworks-hdpsearch. Executing '/usr/bin/yum clean metadata'
2017-11-01 14:34:03,119 - Retrying to install package lucidworks-hdpsearch after 30 seconds


Command failed after 1 tries
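
For what it's worth, the root cause in the log above is yum rather than Ambari itself: metadata for a repo named 'sandbox' cannot be retrieved, so every package install that refreshes repo metadata fails. A possible workaround, untested here and assuming the repo is defined in /etc/yum.repos.d/sandbox.repo as on earlier sandbox releases, is to disable that repo and retry the install:

    # See which repos are configured and which one is failing
    yum repolist all

    # Option 1: disable the broken repo for a single install
    yum --disablerepo=sandbox -y install lucidworks-hdpsearch

    # Option 2: disable it permanently, then retry the install from Ambari
    sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/sandbox.repo
    yum clean metadata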

Re: Unable to install Solr in HDP 2.6 Sandbox for Docker

I also have the same issue. Did anybody resolve it?

thanks


Re: Unable to install Solr in HDP 2.6 Sandbox for Docker

Explorer

Is anyone from the Hortonworks team reading this forum in the community and trying to help solve the problem?
