Member since: 05-21-2017
Posts: 29
Kudos Received: 0
Solutions: 1

My Accepted Solutions

Title | Views | Posted
---|---|---
| 3266 | 05-26-2017 05:51 AM
06-24-2017
07:34 AM
I have a lot of text files in a repository which I want to filter using Spark. After filtering, I want the same number of filtered files as output (for example, if I give 1000 files as input, I want the corresponding 1000 filtered files as output), and I want the output to retain the order of lines as in the input. I want to do this in the fastest way possible. From what I understand, if I break the files into lines and process each line in a mapper, then I run into the problem of combining, sorting, and clustering the lines in the reducer step. I am wondering if this is the right approach. I am new to Spark, so I am not sure of the best way to do this. Any ideas?
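A minimal sketch of the per-file approach in PySpark, assuming each file fits comfortably in an executor's memory; the input path, output directory, and keep() predicate below are placeholders, not details from this post:

from pyspark import SparkContext
import os

sc = SparkContext(appName="per-file-filter")

def keep(line):
    # hypothetical predicate; substitute the real filter condition
    return "ERROR" in line

# wholeTextFiles yields one (path, content) pair per file, so each file is
# processed as a unit and line order within a file is preserved; no reducer
# or re-sorting step is needed.
files = sc.wholeTextFiles("hdfs:///input/*.txt")

def filter_file(pair):
    path, content = pair
    kept = [line for line in content.splitlines() if keep(line)]
    return path, "\n".join(kept)

# Writing pair-by-pair from the driver keeps the strict 1:1 mapping between
# input and output files; saveAsTextFile would instead emit one part file
# per partition.
out_dir = "/tmp/filtered"
if not os.path.isdir(out_dir):
    os.makedirs(out_dir)
for path, content in files.map(filter_file).toLocalIterator():
    with open(os.path.join(out_dir, os.path.basename(path)), "w") as f:
        f.write(content)

The caveat with wholeTextFiles is that each file is materialized as a single string, so this suits many small-to-medium files better than a few very large ones.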
Labels:
- Apache Hadoop
- Apache Spark
06-20-2017
03:16 PM
Is Zookeeper required if HBase is NOT used?
Labels:
- Apache Hadoop
- Apache HBase
06-03-2017
03:55 AM
The reason for the failure of these services looks similar: "Bad Gateway" while transferring files to HDFS. I am able to put the tars/files (shown in the failure logs) into HDFS with an hdfs dfs -put src dest command, but not through curl. I am able to access the NameNode at port 50070 and can also see the stats of the DataNodes through the browser.

History Server:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 190, in <module>
HistoryServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 314, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 101, in start
skip=params.sysprep_skip_copy_tarballs_hdfs)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/copy_tarball.py", line 267, in copy_to_hdfs
replace_existing_files=replace_existing_files,
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 555, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 552, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 287, in action_delayed
self._create_resource()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 303, in _create_resource
self._create_file(self.main_resource.resource.target, source=self.main_resource.resource.source, mode=self.mode)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 418, in _create_file
self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, assertable_result=False, file_to_put=source, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 199, in run_command
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/2.6.0.3-8/hadoop/mapreduce.tar.gz -H 'Content-Type: application/octet-stream' 'http://xxx.net:50070/webhdfs/v1/hdp/apps/2.6.0.3-8/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444'' returned status_code=502.
<html><head><title>502 Bad Gateway</title></head>
<body><h1>DNS error</h1>
<p>DNS error (the host name of the page you are looking for does not exist)<br><br>Please check that the host name has been spelled correctly.<br></p>
<!--Zscaler/5.3--></body></html>

Falcon:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/FALCON/0.5.0.2.1/package/scripts/falcon_server.py", line 177, in <module>
FalconServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 314, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/FALCON/0.5.0.2.1/package/scripts/falcon_server.py", line 49, in start
self.configure(env, upgrade_type=upgrade_type)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 117, in locking_configure
original_configure(obj, *args, **kw)
File "/var/lib/ambari-agent/cache/common-services/FALCON/0.5.0.2.1/package/scripts/falcon_server.py", line 44, in configure
falcon('server', action='config', upgrade_type=upgrade_type)
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/FALCON/0.5.0.2.1/package/scripts/falcon.py", line 185, in falcon
source = params.falcon_extensions_source_dir)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 555, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 552, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 287, in action_delayed
self._create_resource()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 306, in _create_resource
self._copy_from_local_directory(self.main_resource.resource.target, self.main_resource.resource.source)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 315, in _copy_from_local_directory
self._copy_from_local_directory(new_target, new_source)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 315, in _copy_from_local_directory
self._copy_from_local_directory(new_target, new_source)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 317, in _copy_from_local_directory
self._create_file(new_target, new_source)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 418, in _create_file
self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, assertable_result=False, file_to_put=source, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 199, in run_command
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/current/falcon-server/extensions/hdfs-mirroring/META/hdfs-mirroring-properties.json -H 'Content-Type: application/octet-stream' 'http://xxx.net:50070/webhdfs/v1/apps/falcon/extensions/hdfs-mirroring/META/hdfs-mirroring-properties.json?op=CREATE&user.name=hdfs&overwrite=True'' returned status_code=502.
<html><head><title>502 Bad Gateway</title></head>
<body><h1>DNS error</h1>
<p>DNS error (the host name of the page you are looking for does not exist)<br><br>Please check that the host name has been spelled correctly.<br></p>
<!--Zscaler/5.3--></body></html>

HiveServer 2:
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 199, in run_command
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/2.6.0.3-8/tez/lib/tez.tar.gz -H 'Content-Type: application/octet-stream' 'http://xxx.net:50070/webhdfs/v1/hdp/apps/2.6.0.3-8/tez/tez.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444'' returned status_code=502.
<html><head><title>502 Bad Gateway</title></head>
<body><h1>DNS error</h1>
<p>DNS error (the host name of the page you are looking for does not exist)<br><br>Please check that the host name has been spelled correctly.<br></p>
<!--Zscaler/5.3--></body></html>
Spark HistoryServer:
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 199, in run_command
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/2.6.0.3-8/spark/lib/spark-hdp-assembly.jar -H 'Content-Type: application/octet-stream' 'http://xxx.net:50070/webhdfs/v1/hdp/apps/2.6.0.3-8/spark/spark-hdp-assembly.jar?op=CREATE&user.name=hdfs&overwrite=True&permission=444'' returned status_code=502.
<html><head><title>502 Bad Gateway</title></head>
<body><h1>DNS error</h1>
<p>DNS error (the host name of the page you are looking for does not exist)<br><br>Please check that the host name has been spelled correctly.<br></p>
<!--Zscaler/5.3--></body></html>
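All four failures are the same curl PUT being answered by a Zscaler proxy ("DNS error") rather than by the NameNode, which suggests the WebHDFS calls are being routed through the corporate proxy, while hdfs dfs -put uses the RPC port and bypasses it. As a hedged check, one can repeat a WebHDFS call with proxying explicitly disabled (Python 2, matching the cluster's interpreter; the probe path is made up):

import urllib2

# An empty ProxyHandler overrides any http_proxy environment variable, so
# the request goes straight to the NameNode on port 50070.
opener = urllib2.build_opener(urllib2.ProxyHandler({}))
url = "http://xxx.net:50070/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=hdfs"
print(opener.open(url).read())

If this succeeds while plain curl returns the 502, adding the cluster hosts to the no_proxy environment of the Ambari agents should let the tarball copy steps through.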
Labels:
- Apache Ambari
- Apache Falcon
- Apache Hive
05-30-2017
06:41 PM
Nope. No special characters.
05-30-2017
06:28 PM
HCat Client install is failing with Ambari 2.5 / HDP 2.6 on the slave nodes. I am using a local repository. yum repolist shows:

repo id | repo name | status
---|---|---
HDP-2.6 | HDP-2.6 | 232
HDP-UTILS-1.1.0.21 | HDP-UTILS-1.1.0.2 | 164
ambari-2.5.0.3 | ambari Version - ambari-2.5.0.3 | 12

I tried yum clean all; it didn't help. Log:

2017-05-30 19:50:41,748 - Repository['HDP-2.6'] {'base_url': 'http://xxx.net/hdp/centos7/HDP-2.6.0.3/', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0\nproxy=_none_', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-05-30 19:50:41,755 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.6]\nname=HDP-2.6\nbaseurl=http://xxx.net/hdp/centos7/HDP-2.6.0.3/\n\npath=/\nenabled=1\ngpgcheck=0\nproxy=_none_'}
2017-05-30 19:50:41,756 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://xxx.net/hdp/centos7/HDP-UTILS-1.1.0.21/', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0\nproxy=_none_', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-05-30 19:50:41,758 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://xxx.net/hdp/centos7/HDP-UTILS-1.1.0.21/\n\npath=/\nenabled=1\ngpgcheck=0\nproxy=_none_'}
2017-05-30 19:50:41,759 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-30 19:50:41,840 - Skipping installation of existing package unzip
2017-05-30 19:50:41,840 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-30 19:50:41,854 - Skipping installation of existing package curl
2017-05-30 19:50:41,854 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-30 19:50:41,868 - Skipping installation of existing package hdp-select
2017-05-30 19:50:42,099 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-05-30 19:50:42,102 - Stack Feature Version Info: stack_version=2.6, version=None, current_cluster_version=None -> 2.6
2017-05-30 19:50:42,122 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-05-30 19:50:42,136 - checked_call['rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g''] {'stderr': -1}
Command aborted. Reason: 'Server considered task failed and automatically aborted it'
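The last log line before the abort is the agent's checked_call of an rpm query, so a hedged first step is to replay that exact command by hand on a slave and see whether it stalls (a small Python 2.7+ sketch; the command string is copied from the log above):

import subprocess

# Same pipeline the agent runs; if this takes minutes, the hang is in
# rpm/yum on the slave node rather than in Ambari itself.
cmd = "rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\\.el[0-9]//g'"
print(subprocess.check_output(cmd, shell=True))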
Labels:
- Apache Ambari
- Apache Hadoop
05-30-2017
05:56 PM
I am getting an installation failure (HDP 2.6 / Ambari 2.5) for Activity Explorer.
Retrying doesn't help.
Log :
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/2.1/services/SMARTSENSE/package/scripts/activity_explorer.py", line 13, in <module>
Activity('explorer').execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 314, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/2.1/services/SMARTSENSE/package/scripts/activity.py", line 54, in install
self.deploy_component_specific_config(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/2.1/services/SMARTSENSE/package/scripts/activity.py", line 219, in deploy_component_specific_config
self.configure_activity_explorer(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/2.1/services/SMARTSENSE/package/scripts/activity.py", line 332, in configure_activity_explorer
self.deploy_ini_config(params.activity_conf_dir + "/shiro.ini", shiro_configs)
File "/var/lib/ambari-agent/cache/stacks/HDP/2.1/services/SMARTSENSE/package/scripts/activity.py", line 386, in deploy_ini_config
if (str(v).lower() == "null") and config.has_option(key[0], key[1]):
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-5: ordinal not in range(128)
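The failing line calls str(v) on a configuration value, and in Python 2 str() implicitly encodes through the ascii codec, so any non-ASCII character in a shiro.ini value raises exactly this exception. A minimal reproduction (the value is invented):

# Python 2 behaviour matching the traceback above.
v = u"p\xe4ssw\xf6rd"        # any unicode value containing non-ASCII characters
try:
    str(v)                   # implicit ascii encode, as in deploy_ini_config
except UnicodeEncodeError as e:
    print(e)                 # 'ascii' codec can't encode characters ...
print(v.encode("utf-8"))     # an explicit encoding avoids the error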
05-28-2017
05:40 PM
I tried the command explicitly. It had an effect on only a few of the component installs (probably the later ones in the list). I will have to try a clean install again. By the way, would the Red Hat Spacewalk option be of any help? I see a defect (https://issues.apache.org/jira/browse/AMBARI-20119) regarding persistence of configuration that is resolved in Ambari 2.5.
05-28-2017
05:27 PM
I queried "version1" of cluster-env (the original). It shows "fetch_nonlocal_groups" : "true". I modified cluster-env after step 8, which gave me tag "version1495990248470687046"; this one has "fetch_nonlocal_groups" : "false". Basically, the configuration appears to be all mixed up: some things are taken from the old version, some from the new. It would be great if there were some way of setting configuration that doesn't get overwritten.
05-28-2017
05:22 PM
Yes, but querying the current configuration gives me "fetch_nonlocal_groups" as false. It appears I have to do a clean install again. The problem is, since Ambari overwrites the configuration after step 8 (Review), I am not sure I will ever get out of this mess.
05-28-2017
05:13 PM
Do I have to do this for the other configuration types as well (e.g. hadoop-env)?
05-28-2017
05:10 PM
I checked. It seems to be already off.

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin get xxx.net MY_HDP cluster-env

USERID=admin
PASSWORD=admin
########## Performing 'GET' on (Site:cluster-env, Tag:version1495990248470687046)
"properties" : {
    "agent_mounts_ignore_list" : "",
    "alerts_repeat_tolerance" : "1",
    "enable_external_ranger" : "false",
    "fetch_nonlocal_groups" : "false",
    "hide_yarn_memory_widget" : "false",
    ...
05-28-2017
04:48 PM
It failed. (I think it timed out because each step is taking too much time.) However, the install is progressing on retry. Earlier it used to fail on the first component install; now it is getting past those steps. I verified that the repos transferred to the slaves have the correct proxy setting.
05-28-2017
04:21 PM
I tried a workaround which seems to work. As I mentioned, the installation failed after step 8. I corrected the configuration again using curl and did NOT restart services. Then I hit "Retry failed" in the Ambari UI. This time the modified repos got transferred and the installation proceeded further. This definitely appears to be a bug in ambari-server.
05-28-2017
04:14 PM
From the logs, it appears that each step is taking about 2-3 minutes. There is something wrong.

2017-05-28 16:34:06,847 - Stack Feature Version Info: stack_version=2.6, version=None, current_cluster_version=None -> 2.6
2017-05-28 16:34:06,854 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-05-28 16:34:06,855 - Group['hadoop'] {}
2017-05-28 16:34:06,856 - Group['users'] {}
2017-05-28 16:34:06,856 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-28 16:36:58,098 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-28 16:39:48,629 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-28 16:42:38,065 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-28 16:45:31,929 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-05-28 16:45:31,931 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-05-28 16:45:31,940 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-05-28 16:45:31,941 - Group['hdfs'] {}
2017-05-28 16:45:31,941 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-05-28 16:48:23,572 - FS Type:
2017-05-28 16:48:23,572 - Directory['/etc/hadoop'] {'mode': 0755}
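The roughly three-minute jumps between consecutive User[...] lines can be measured rather than eyeballed; a small hedged sketch that parses the timestamps and flags big gaps (the log path is hypothetical):

import re
from datetime import datetime

pattern = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}),\d{3}")
previous = None
for line in open("/var/lib/ambari-agent/data/output-1.txt"):  # hypothetical path
    match = pattern.match(line)
    if not match:
        continue
    stamp = datetime.strptime(match.group(1), "%Y-%m-%d %H:%M:%S")
    if previous is not None and (stamp - previous).total_seconds() > 60:
        # flag any step that took more than a minute
        print("%ds gap before: %s" % ((stamp - previous).total_seconds(), line.strip()))
    previous = stamp

In the excerpt above, the gaps fall exactly on the User[...] resources, i.e. on user/group lookups.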
Labels:
- Apache Ambari
- Apache Hadoop
05-28-2017
12:48 PM
OK, I did it. I verified that the settings were set as desired at the server. However, as I ran through the Ambari UI, after step 8 (Review) the settings were overwritten and the original template was reapplied. So all this setting through REST was useless, since it was overwritten again. Back to square one 😞 Any ideas?
05-28-2017
07:49 AM
This command gives me the properties:

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin get xxx.net MY_HDP cluster-env

USERID=admin
PASSWORD=admin
########## Performing 'GET' on (Site:cluster-env, Tag:version1)
"properties" : {
    "agent_mounts_ignore_list" : "",
    "alerts_repeat_tolerance" : "1",
    "enable_external_ranger" : "false",
    "fetch_nonlocal_groups" : "true",
    ...
    "repo_suse_rhel_template" : "[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0",
    ...
05-28-2017
07:39 AM
Here is what I got:

curl --user admin:admin -i -H 'X-Requested-By: ambari' -X GET http://xxx.net:8080/api/v1/clusters/MY_HDP/configurations?type=cluster-env

HTTP/1.1 200 OK
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Cache-Control: no-store
Pragma: no-cache
Set-Cookie: AMBARISESSIONID=1933cf5xbbrqwikunjhcc8otv;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
User: admin
Content-Type: text/plain
Vary: Accept-Encoding, User-Agent
Content-Length: 444
Server: Jetty(8.1.19.v20160209)

{
  "href" : "http://xxx.net:8080/api/v1/clusters/AT_HDP/configurations?type=cluster-env",
  "items" : [
    {
      "href" : "http://xxx.net:8080/api/v1/clusters/MY_HDP/configurations?type=cluster-env&tag=version1",
      "tag" : "version1",
      "type" : "cluster-env",
      "version" : 1,
      "Config" : {
        "cluster_name" : "MY_HDP",
        "stack_id" : "HDP-2.6"
      }
    }
  ]
}
05-28-2017
07:07 AM
Thanks! Is there a particular point at which I have to get the cluster configuration? I mean, should I do this right after the ambari-server installation, or after a particular step of the cluster installation through Ambari? I ask because I didn't get any properties at all when I fired the curl GET request.
05-28-2017
05:26 AM
I am trying to install HDP 2.6 using Ambari 2.5.3. I have set up a local repository. I have to access the internet through a proxy server, but I want to skip the proxy when accessing the local repository. I have done the "no_proxy" setting in the shell and the -Dhttp.nonProxyHosts setting for ambari-server. Similarly, for the yum repository, I have to include "proxy=_none_". However, doing this manually isn't working, since the repos are overwritten by Ambari. I tried firing REST APIs to set up a "desired_config" for "cluster-env" with a modified "repo_suse_rhel_template" property, but that does not seem to be working either. Logs:

2017-05-27 15:16:30,235 - Initializing 2 repositories
2017-05-27 15:16:30,236 - Repository['HDP-2.5'] {'base_url': 'http://xxx.net/hdp/centos7/HDP-2.5.3.0/', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-05-27 15:16:30,244 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://xxx.net/hdp/centos7/HDP-2.5.3.0/\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-05-27 15:16:30,244 - Writing File['/etc/yum.repos.d/HDP.repo'] because contents don't match
2017-05-27 15:16:30,245 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://xxx.net/hdp/centos7/HDP-UTILS-1.1.0.21/', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-05-27 15:16:30,248 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://xxx.net/hdp/centos7/HDP-UTILS-1.1.0.21/\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-05-27 15:16:30,248 - Writing File['/etc/yum.repos.d/HDP-UTILS.repo'] because contents don't match
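For reference, a hedged sketch of what the REST update could look like with urllib2 (Python 2): PUT a new desired_config for cluster-env with proxy=_none_ appended to the template. The tag is arbitrary but must be unused, and since desired_config replaces the whole config type, all existing cluster-env properties would need to be included alongside the one shown here:

import base64
import json
import urllib2

url = "http://xxx.net:8080/api/v1/clusters/MY_HDP"
payload = {"Clusters": {"desired_config": {
    "type": "cluster-env",
    "tag": "version2-noproxy",  # any previously unused tag
    "properties": {
        # ...all other existing cluster-env properties go here as well...
        "repo_suse_rhel_template":
            "[{{repo_id}}]\nname={{repo_id}}\n"
            "{% if mirror_list %}mirrorlist={{mirror_list}}"
            "{% else %}baseurl={{base_url}}{% endif %}\n\n"
            "path=/\nenabled=1\ngpgcheck=0\nproxy=_none_",
    },
}}}
request = urllib2.Request(url, json.dumps(payload))
request.add_header("X-Requested-By", "ambari")
request.add_header("Authorization", "Basic " + base64.b64encode("admin:admin"))
request.get_method = lambda: "PUT"  # urllib2 defaults to POST when data is set
print(urllib2.urlopen(request).read())

As the later replies in this thread show, the Ambari UI reapplies its own template after step 8 (Review), so a change like this may need to be pushed again right before hitting retry.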
Labels:
- Apache Ambari
- Apache Hadoop
05-26-2017
05:51 AM
Finally solved!! The problem was that even though SSH was set up without a password, the sudo user ('hdpuser') was set up with a password (meaning I was prompted for a password on sudo commands). I modified the sudoers entry to make it passwordless on all cluster machines. That did it!
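For anyone landing here with the same symptom: the fix amounts to a NOPASSWD rule in sudoers, along the lines of the hypothetical entry below for the 'hdpuser' account used in this thread (edit with visudo):

# /etc/sudoers.d/hdpuser -- assumed location; a line in /etc/sudoers works too
hdpuser ALL=(ALL) NOPASSWD: ALL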
05-25-2017
02:56 PM
I fixed the DNS & proxy settings and am getting exactly the same error: the bootstrap times out.
05-25-2017
02:55 PM
I fixed the proxy setting for curl, wget, ambari-server, and yum. Still, I am getting exactly the same problem.
05-24-2017
06:41 PM
If I do "curl <localrepository>", I get a DNS error. Just wondering if this is the cause of the problem.
05-24-2017
06:31 PM
Yes, I can manually install agents on all three systems. (As I said, I have added noproxy to the repo files; without that, it wasn't working.)
05-24-2017
06:22 PM
I am able to SSH from the Ambari server (system 1) to the data node systems (systems 2 and 3). I have added all three hosts to /etc/hosts on all three systems, and "hostname -f" on each gives me the FQDN. Just thinking: could this be a proxy server issue? I have to bypass the proxy in Firefox to access localhost, and I have set proxy=_none_ in the .repo files to skip the proxy for yum. Not sure if I have to do the same for curl?
05-24-2017
02:25 PM
Log attached: ambari-server.txt

I am trying to install a 3-node cluster using Ambari.

Machine 1: master node + local repository + Ambari server
Machines 2 and 3: data nodes

The local repository is on the same system as the master node and ambari-server. I am using Ambari Server 2.4.2 to install HDP 2.5. The repolist command lists: 1) HDP-2.5, 2) HDP-UTILS-1.1.0.21, 3) Updates-ambari-2.4.2.0. I am using passwordless SSH for user hdpuser; this is the logged-in user on the system and also a sudo user. I have confirmed that passwordless SSH works for this user. After a lot of install failures, I am trying to install ONLY the master node to understand the problem, but I cannot get past the auto-registration step itself. I am getting a lot of messages like the following; not sure whether they are "normal" or the cause of the problem:

24 May 2017 15:43:14,842 ERROR [alert-event-bus-2] AlertReceivedListener:480 - Unable to process alert ambari_agent_disk_usage for an invalid cluster named HDP
24 May 2017 15:17:17,424 INFO [ambari-client-thread-25] RepoUtil:156 - Found 0 service repos: []

Registration failure logs:

24 May 2017 15:22:33,895 INFO [ambari-client-thread-25] BootStrapImpl:108 - BootStrapping hosts machine1.domain.net:
24 May 2017 15:22:33,901 INFO [Thread-44] BSRunner:190 - Kicking off the scheduler for polling on logs in /var/run/ambari-server/bootstrap/1
24 May 2017 15:22:33,901 INFO [Thread-44] BSRunner:257 - Host= machine1.domain.net bs=/usr/lib/python2.6/site-packages/ambari_server/bootstrap.py requestDir=/var/run/ambari-server/bootstrap/1 user=hdpuser sshPort=22 keyfile=/var/run/ambari-server/bootstrap/1/sshKey passwordFile null server=machine1.domain.net version=2.4.2.0 serverPort=8080 userRunAs=root timeout=300
24 May 2017 15:22:33,903 INFO [pool-16-thread-1] BSHostStatusCollector:55 - Request directory /var/run/ambari-server/bootstrap/1
24 May 2017 15:22:33,903 INFO [pool-16-thread-1] BSHostStatusCollector:62 - HostList for polling on [machine1.domain.net]
24 May 2017 15:22:33,906 INFO [Thread-44] BSRunner:285 - Bootstrap output, log=/var/run/ambari-server/bootstrap/1/bootstrap.err /var/run/ambari-server/bootstrap/1/bootstrap.out at machine1.domain.net
24 May 2017 15:22:43,906 INFO [pool-16-thread-1] BSHostStatusCollector:55 - Request directory /var/run/ambari-server/bootstrap/1
24 May 2017 15:26:53,930 INFO [pool-16-thread-1] BSHostStatusCollector:62 - HostList for polling on [machine1.domain.net]
24 May 2017 15:27:34,092 WARN [Thread-44] BSRunner:292 - Bootstrap process timed out. It will be destroyed.
24 May 2017 15:27:34,093 INFO [Thread-44] BSRunner:309 - Script log Mesg
INFO:root:BootStrapping hosts ['machine1.domain.net'] using /usr/lib/python2.6/site-packages/ambari_server cluster primary OS: redhat7 with user 'hdpuser' with ssh Port '22' sshKey File /var/run/ambari-server/bootstrap/1/sshKey password File null using tmp dir /var/run/ambari-server/bootstrap/1 ambari: machine1.domain.net; server_port: 8080; ambari version: 2.4.2.0; user_run_as: root
INFO:root:Executing parallel bootstrap
Bootstrap process timed out. It was destroyed.
Labels:
- Apache Ambari
- Apache Hadoop
05-21-2017
05:59 PM
I am getting log entries (in ambari-server.log) like the following for all the stacks after I install and start ambari-server:

21 May 2017 19:52:13,766 INFO [main] StackServiceDirectory:116 - No repository information defined for , serviceName=SPARK, repoFolder=/var/lib/ambari-server/resources/stacks/HDP/2.5/services/SPARK/repos
21 May 2017 19:52:13,766 INFO [main] StackServiceDirectory:116 - No repository information defined for , serviceName=YARN, repoFolder=/var/lib/ambari-server/resources/stacks/HDP/2.5/services/YARN/repos
21 May 2017 19:52:13,767 INFO [main] StackServiceDirectory:116 - No repository information defined for , serviceName=KNOX, repoFolder=/var/lib/ambari-server/resources/stacks/HDP/2.5/services/KNOX/repos
21 May 2017 19:52:13,768 INFO [main] StackServiceDirectory:116 - No repository information defined for , serviceName=ZOOKEEPER, repoFolder=/var/lib/ambari-server/resources/stacks/HDP/2.5/services/ZOOKEEPER/repos

What do these indicate? I checked, and the folders (e.g. /var/lib/ambari-server/resources/stacks/HDP/2.5/services/ZOOKEEPER/repos) do not exist. Are these "normal" traces?
Labels:
- Apache Ambari