Member since: 05-16-2019
Posts: 25
Kudos Received: 1
Solutions: 0
09-18-2018
09:08 AM
Hi Akhil, I ran yum repolist before, and this was the result:
repo id repo name status
HDF-3.2-repo-1 HDF-3.2-repo-1 0
HDP-UTILS-1.1.0.22-repo-1 HDP-UTILS-1.1.0.22-repo-1 0
ambari-2.7.0.0 ambari Version - ambari-2.7.0.0 0
base/7/x86_64 CentOS-7 - Base 0
epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 0
extras/7/x86_64 CentOS-7 - Extras 0
updates/7/x86_64 CentOS-7 - Updates 0

These two repositories, HDF-3.2-repo-1 and HDP-UTILS-1.1.0.22-repo-1, were not added by me; I think they were installed by the mpack. Are you sure it's a good idea to remove them? I also tried yum clean and yum update before, and the result was the same.
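On the question of removing them: a less destructive option than deleting the mpack-created repo files is to disable them, e.g. by flipping enabled=1 to enabled=0 in the file. A minimal sketch, using a temporary copy (on the real host the file would be /etc/yum.repos.d/ambari-hdf-1.repo):

```shell
# Work on a temp copy; on the real host this file is /etc/yum.repos.d/ambari-hdf-1.repo
repo=$(mktemp)
cat > "$repo" <<'EOF'
[HDF-3.2-repo-1]
name=HDF-3.2-repo-1
baseurl=
enabled=1
gpgcheck=0
EOF

# Disable the repo in place instead of deleting the file
sed -i 's/^enabled=1$/enabled=0/' "$repo"
grep '^enabled=' "$repo"
```

For a one-off command, yum --disablerepo=HDF-3.2-repo-1 achieves the same thing without editing any file.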
09-17-2018
01:22 PM
Hello. I tried installing unzip and I get the same error:

yum install unzip
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=<repoid> ...
4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable <repoid>
or
subscription-manager repos --disable=<repoid>
5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot find a valid baseurl for repo: HDF-3.2-repo-1

I also performed the clean and tried the install again, with the same error. I've checked the yum.repos.d folder on all hosts, and the only difference is that the host with the error has one more repository file than the rest. ambari-hdf-1.repo contains:

[HDF-3.2-repo-1]
name=HDF-3.2-repo-1
baseurl=
path=/
enabled=1
gpgcheck=0
[HDP-UTILS-1.1.0.22-repo-1]
name=HDP-UTILS-1.1.0.22-repo-1
baseurl=

Running yum repolist gives me the following:

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
epel/x86_64/metalink | 21 kB 00:00:00
repo id repo name status
HDF-3.2-repo-1 HDF-3.2-repo-1 0
HDP-UTILS-1.1.0.22-repo-1 HDP-UTILS-1.1.0.22-repo-1 0
ambari-2.7.0.0 ambari Version - ambari-2.7.0.0 0
base/7/x86_64 CentOS-7 - Base 0
epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 0
extras/7/x86_64 CentOS-7 - Extras 0
updates/7/x86_64 CentOS-7 - Updates 0

Please note that in the installation wizard I requested installation from the public repositories. Greetings
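For what it's worth, the repo file pasted above has an empty baseurl=, which is exactly what yum's "Cannot find a valid baseurl for repo: HDF-3.2-repo-1" message points at. A sketch of what a populated ambari-hdf-1.repo might look like (the URLs are assumptions modeled on the public-repo-1.hortonworks.com paths used elsewhere in this thread, not verified values):

```ini
[HDF-3.2-repo-1]
name=HDF-3.2-repo-1
; Assumed public HDF 3.2 base URL; substitute the URL Ambari shows for your stack version
baseurl=http://public-repo-1.hortonworks.com/HDF/centos7/3.x/updates/3.2.0.0
path=/
enabled=1
gpgcheck=0

[HDP-UTILS-1.1.0.22-repo-1]
name=HDP-UTILS-1.1.0.22-repo-1
; Assumed HDP-UTILS location; also unverified
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7
path=/
enabled=1
gpgcheck=0
```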
09-17-2018
12:09 PM
I am installing on CentOS 7 with Ambari 2.7 and HDF 3.2, all compatible according to the support matrix. But I get the following error and I don't know how to fix it; can someone help me a little, please?

stderr: /var/lib/ambari-agent/data/errors-8.txt

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/hook.py", line 37, in <module>
BeforeInstallHook().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
method(env)
File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/hook.py", line 34, in hook
install_packages()
File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/shared_initialization.py", line 37, in install_packages
retry_count=params.agent_stack_retry_count)
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 125, in __new__
cls(names_list.pop(0), env, provider, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/packaging.py", line 30, in action_install
self._pkg_manager.install_package(package_name, self.__create_context())
File "/usr/lib/ambari-agent/lib/ambari_commons/repo_manager/yum_manager.py", line 219, in install_package
shell.repository_manager_executor(cmd, self.properties, context)
File "/usr/lib/ambari-agent/lib/ambari_commons/shell.py", line 749, in repository_manager_executor
raise RuntimeError(message)
RuntimeError: Failed to execute command '/usr/bin/yum -y install unzip', exited with code '1', message: '
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=<repoid> ...
4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable <repoid>
or
subscription-manager repos --disable=<repoid>
5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot find a valid baseurl for repo: HDF-3.2-repo-1
'

stdout: /var/lib/ambari-agent/data/output-8.txt

2018-09-17 13:51:15,473 - Stack Feature Version Info: Cluster Stack=3.2, Command Stack=None, Command Version=None -> 3.2
2018-09-17 13:51:15,476 - Group['hadoop'] {}
2018-09-17 13:51:15,477 - Adding group Group['hadoop']
2018-09-17 13:51:15,783 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-09-17 13:51:15,784 - Adding user User['zookeeper']
2018-09-17 13:51:17,287 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-09-17 13:51:17,288 - Adding user User['ams']
2018-09-17 13:51:18,016 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-09-17 13:51:18,016 - Adding user User['ambari-qa']
2018-09-17 13:51:18,866 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-09-17 13:51:18,871 - Writing File['/var/lib/ambari-agent/tmp/changeUid.sh'] because it doesn't exist
2018-09-17 13:51:18,871 - Changing permission for /var/lib/ambari-agent/tmp/changeUid.sh from 644 to 555
2018-09-17 13:51:18,871 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-09-17 13:51:18,874 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-09-17 13:51:18,884 - Repository['HDF-3.2-repo-1'] {'append_to_file': False, 'base_url': '', 'action': ['create'], 'components': [u'HDF', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdf-1', 'mirror_list': None}
2018-09-17 13:51:18,896 - File['/etc/yum.repos.d/ambari-hdf-1.repo'] {'content': InlineTemplate(...)}
2018-09-17 13:51:18,896 - Writing File['/etc/yum.repos.d/ambari-hdf-1.repo'] because it doesn't exist
2018-09-17 13:51:18,896 - Repository['HDP-UTILS-1.1.0.22-repo-1'] {'append_to_file': True, 'base_url': '', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdf-1', 'mirror_list': None}
2018-09-17 13:51:18,898 - File['/etc/yum.repos.d/ambari-hdf-1.repo'] {'content': '[HDF-3.2-repo-1]\nname=HDF-3.2-repo-1\nbaseurl=\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-1]\nname=HDP-UTILS-1.1.0.22-repo-1\nbaseurl=\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-09-17 13:51:18,898 - Writing File['/etc/yum.repos.d/ambari-hdf-1.repo'] because contents don't match
2018-09-17 13:51:18,899 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-09-17 13:51:18,946 - Installing package unzip ('/usr/bin/yum -y install unzip')
2018-09-17 13:51:19,087 - Skipping stack-select on AMBARI_METRICS because it does not exist in the stack-select package structure.
Command failed after 1 tries

Here is more information about the installation process:

wget -nv http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.7.0.0/ambari.repo -O /etc/yum.repos.d/ambari.repo
yum install ambari-server
wget http://public-repo-1.hortonworks.com/HDF/centos7/3.x/updates/3.2.0.0/tars/hdf_ambari_mp/hdf-ambari-mpack-3.2.0.0-520.tar.gz -P /tmp/
ambari-server install-mpack \
--mpack=/tmp/hdf-ambari-mpack-3.2.0.0-520.tar.gz \
--purge \
--verbose

Greetings and thanks
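Given the symptoms above, one quick check after running install-mpack is whether any repo file ended up with an empty baseurl, which is what the yum error complains about. A sketch, using a temporary directory to stand in for /etc/yum.repos.d:

```shell
# Temp dir stands in for /etc/yum.repos.d; file names mirror the ones in this thread
d=$(mktemp -d)
printf '[HDF-3.2-repo-1]\nname=HDF-3.2-repo-1\nbaseurl=\nenabled=1\n' > "$d/ambari-hdf-1.repo"
printf '[ambari-2.7.0.0]\nname=ambari\nbaseurl=http://example.com/ambari\nenabled=1\n' > "$d/ambari.repo"

# List repo files that declare an empty baseurl
grep -l '^baseurl=$' "$d"/*.repo
```

On a real host the same grep against /etc/yum.repos.d/*.repo would flag the broken file directly.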
Labels:
- Apache Ambari
- Cloudera DataFlow (CDF)
02-18-2018
05:21 PM
Hi. If I run a YARN job that uses HDFS data, I understand that YARN will look for hardware resources to run it. But how does YARN interact with the NameNode? In other words, YARN has to communicate with the NameNode at some point in order to know where the HDFS files that the job requires are located. When does it do that? Can someone please clarify this for me? Regards
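For context, the locality information in question (which datanodes hold which blocks) is queryable directly. Roughly speaking, it is the job client or ApplicationMaster, not the ResourceManager, that asks the NameNode for block locations when input splits are computed, and the scheduler then uses those locations as placement hints for containers. A sketch of inspecting that data, with an example path:

```shell
# Show which datanodes hold the blocks of an example file
hdfs fsck /data/input.txt -files -blocks -locations
```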
Labels:
- Apache Hadoop
- Apache YARN
12-20-2017
08:23 AM
Thank you very much for your help; I hadn't considered expression language support.
12-18-2017
09:33 AM
1 Kudo
Hello, good morning. I am using the PutTCP processor, and I want to set the port number to use from an attribute. The problem is that I get the error:

java.lang.NumberFormatException: For input string: ""

I think the problem is that the attribute arrives as a string while a number is expected. So in the PutTCP processor I changed the Port value as follows:

Before: ${port}
After: ${port:toNumber()}

But it still doesn't work. Can someone please help me with the problem? Greetings
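The 'For input string: ""' part of the exception suggests the property is resolving to an empty string, i.e. the port attribute is missing or empty on the flowfile, rather than a string-vs-number type problem; toNumber() on an empty string fails the same way. One hedged way to guard against a missing attribute with NiFi Expression Language (6000 is only an example fallback port):

```
${port:replaceEmpty('6000')}
```

Note that this only helps if the Port property supports expression language in the NiFi version in use; if it does not, the ${...} text is never evaluated at all.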
Labels:
- Apache NiFi
12-11-2017
03:00 PM
Thank you so much for the help, I appreciate it.
12-11-2017
01:04 PM
Hello, everybody. Can someone tell me whether it is possible to set the retention of Kafka topics individually rather than globally? In Ambari I have found the "log.retention.hours" option, but it applies to all topics, and I would like a different configuration for each of the topics I have. Greetings
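For reference, Kafka does support per-topic retention overrides via topic-level configs such as retention.ms, which take precedence over the broker-wide log.retention.hours. A sketch using the kafka-configs tool (host and topic names are placeholders; on HDP the scripts usually live under /usr/hdp/current/kafka-broker/bin):

```shell
# Override retention for one topic to 24 hours (86400000 ms);
# other topics keep the global default
kafka-configs.sh --zookeeper zk-host:2181 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config retention.ms=86400000

# Verify the override
kafka-configs.sh --zookeeper zk-host:2181 --describe \
  --entity-type topics --entity-name my-topic
```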
Labels:
- Apache Ambari
- Apache Kafka
11-22-2017
02:29 PM
I'll try to look at the dumps and see whether I can get anything clear from them. I wonder, in case the processor is hung, is there a way to restart it without having to reboot NiFi? I would like to be able to put NiFi into production with some alternative to restarting the entire service. Thanks for the help, Matt.
11-22-2017
01:21 PM
Hi,
I'm having trouble with Apache NiFi. Occasionally, NiFi is not able to empty the flowfiles from a queue. I stop the two processors before and after the queue, then right-click and select 'Empty queue' from the menu, but it doesn't work. This worries me a lot, because the only way I have found to fix it is to restart NiFi, and that is not viable in a production environment. Does anyone know a way to solve this problem? Starting the processor ahead of the queue is not a solution, as the queue stays the same. Images attached. Greetings
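As a possible alternative to the UI when 'Empty queue' misbehaves, NiFi's REST API exposes the same operation as a drop request on the connection. A sketch (connection id, host, and port are placeholders; a secured cluster would also need authentication):

```shell
# POST a drop request for the connection's queue; NiFi processes it asynchronously
curl -X POST http://localhost:8080/nifi-api/flowfile-queues/<connection-id>/drop-requests
```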
Labels:
- Apache NiFi