Member since: 03-17-2016
Posts: 8
Kudos Received: 6
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 5888 | 03-18-2016 02:32 AM |
03-22-2016 04:29 AM
Thank you, I got it by manually downloading hdp-select and hdfs-client.
03-21-2016 08:22 AM
@Scott Shaw @Neeraj Sabharwal @Artem Ervits
03-18-2016 07:38 AM
As the screenshots show, hadoop54.example.com is the NameNode and the others are DataNodes. I hit the problem at this step, and every error is the same:

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_client.py", line 80, in <module>
HbaseClient().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_client.py", line 35, in install
self.configure(env)
File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_client.py", line 40, in configure
hbase(name='client')
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase.py", line 105, in hbase
group=params.user_group
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/xml_config.py", line 67, in action_create
encoding = self.resource.encoding
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 87, in action_create
raise Fail("Applying %s failed, parent directory %s doesn't exist" % (self.resource, dirname))
resource_management.core.exceptions.Fail: Applying File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] failed, parent directory /usr/hdp/current/hadoop-client/conf doesn't exist

Then I downloaded the package HDP-2.4.0.0-centos7-rpm.tar.gz and found there is no conf directory in it either. So how do I fix it?
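The Fail above comes from the resource provider refusing to create a file whose parent directory is missing. The sketch below is not Ambari's actual code, just a minimal Python illustration of that check and of the usual remedy (make sure the parent directory exists before the file is written; in HDP, the conf path is normally a symlink that hdp-select creates):

```python
import os
import tempfile

def write_file(path, data):
    """Write data to path, failing like Ambari does if the parent is missing."""
    parent = os.path.dirname(path)
    if not os.path.isdir(parent):
        # Mirrors resource_management's behaviour: fail loudly rather
        # than silently creating missing parent directories.
        raise RuntimeError("parent directory %s doesn't exist" % parent)
    with open(path, "w") as f:
        f.write(data)

root = tempfile.mkdtemp()
target = os.path.join(root, "conf", "hdfs-site.xml")

try:
    write_file(target, "<configuration/>")
    failed = False
except RuntimeError:
    failed = True
assert failed  # conf/ does not exist yet, so the write fails

os.makedirs(os.path.dirname(target))  # the remedy: create the parent first
write_file(target, "<configuration/>")
assert os.path.exists(target)
```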
Labels:
- Hortonworks Data Platform (HDP)
03-18-2016 02:32 AM
2 Kudos
Hey, thanks to both of you. I got it: the Chinese characters are UTF-8, while Ambari assumes ASCII, and Python 2.6 has a bug handling UTF-8.
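The earlier UnicodeDecodeError can be reproduced in miniature. This is a Python 3 sketch of the same failure mode (Python 2.6's unicode() call in logger.py amounted to an implicit ASCII decode of the byte string); the Chinese name here is a made-up example, not taken from the cluster:

```python
# A UTF-8 encoded Chinese name, e.g. a mount-point label on the host.
# 0xe6 is the leading byte of many Chinese characters in UTF-8 -- the
# same byte reported in the UnicodeDecodeError above.
raw = "数据".encode("utf-8")

try:
    raw.decode("ascii")  # what the implicit ASCII decode amounts to
    ascii_failed = False
except UnicodeDecodeError:
    ascii_failed = True

assert ascii_failed                    # ASCII cannot decode byte 0xe6
assert raw[0] == 0xe6
assert raw.decode("utf-8") == "数据"   # the right codec works fine
```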
03-18-2016 12:44 AM
1 Kudo
Yes, that's correct.
03-17-2016 09:26 AM
1 Kudo
I really did confirm all the pre-checks. I don't understand a problem like this:

Exception in thread Thread-3:
Traceback (most recent call last):
File "/usr/lib64/python2.7/threading.py", line 811, in __bootstrap_inner
self.run()
File "/usr/lib/python2.6/site-packages/ambari_agent/Controller.py", line 377, in run
self.register = Register(self.config)
File "/usr/lib/python2.6/site-packages/ambari_agent/Register.py", line 34, in __init__
self.hardware = Hardware()
File "/usr/lib/python2.6/site-packages/ambari_agent/Hardware.py", line 43, in __init__
self.hardware['mounts'] = Hardware.osdisks()
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/ambari_agent/Hardware.py", line 96, in osdisks
if mountinfo != None and Hardware._chk_mount(mountinfo['mountpoint']):
File "/usr/lib/python2.6/site-packages/ambari_agent/Hardware.py", line 105, in _chk_mount
return call(['test', '-w', mountpoint], sudo=True, timeout=int(Hardware.CHECK_REMOTE_MOUNTS_TIMEOUT_DEFAULT)/2)[0] == 0
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 59, in inner
log_msg = Logger.get_function_repr("{0}['{1}']".format(function.__name__, command_alias), kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/logger.py", line 147, in get_function_repr
return unicode("{0} {{{1}}}").format(name, arguments_str)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe6 in position 15: ordinal not in range(128)
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/ambari_agent/main.py", line 306, in <module>
main(heartbeat_stop_callback)
File "/usr/lib/python2.6/site-packages/ambari_agent/main.py", line 297, in main
ExitHelper.execute_cleanup()
TypeError: unbound method execute_cleanup() must be called with ExitHelper instance as first argument (got nothing instead)
Do you know what the reason might be?
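The final TypeError is Python complaining that execute_cleanup() was called on the ExitHelper class itself instead of on an instance. A minimal sketch of the same failure pattern, using a hypothetical stand-in class rather than ambari_agent's real ExitHelper (Python 3 words the TypeError differently but the cause is identical):

```python
class ExitHelper:
    """Hypothetical stand-in for ambari_agent's ExitHelper."""
    def execute_cleanup(self):
        return "cleanup done"

try:
    ExitHelper.execute_cleanup()  # no instance supplied -> TypeError
    raised = False
except TypeError:
    raised = True

assert raised
# Calling the method on an instance works as intended.
assert ExitHelper().execute_cleanup() == "cleanup done"
```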
03-17-2016 04:18 AM
1 Kudo
Yes, I'm using CentOS 7, and the Python version is 2.6.
03-17-2016 03:55 AM
1 Kudo
hadoop54.example.com is my ambari-server; 55 and 56 are agents. Ambari itself is OK, but when I install HDP I hit a problem at the "Confirm Hosts" step. The log is:

==========================
Creating target directory...
==========================
Command start time 2016-03-17 11:48:06
Connection to hadoop54.example.com closed.
SSH command execution finished
host=hadoop54.example.com, exitcode=0
Command end time 2016-03-17 11:48:06
==========================
Copying common functions script...
==========================
Command start time 2016-03-17 11:48:06
scp /usr/lib/python2.6/site-packages/ambari_commons
host=hadoop54.example.com, exitcode=0
Command end time 2016-03-17 11:48:06
==========================
Copying OS type check script...
==========================
Command start time 2016-03-17 11:48:06
scp /usr/lib/python2.6/site-packages/ambari_server/os_check_type.py
host=hadoop54.example.com, exitcode=0
Command end time 2016-03-17 11:48:07
==========================
Running OS type check...
==========================
Command start time 2016-03-17 11:48:07
Cluster primary/cluster OS family is redhat7 and local/current OS family is redhat7
Connection to hadoop54.example.com closed.
SSH command execution finished
host=hadoop54.example.com, exitcode=0
Command end time 2016-03-17 11:48:07
==========================
Checking 'sudo' package on remote host...
==========================
Command start time 2016-03-17 11:48:07
sudo-1.8.6p7-16.el7.x86_64
Connection to hadoop54.example.com closed.
SSH command execution finished
host=hadoop54.example.com, exitcode=0
Command end time 2016-03-17 11:48:08
==========================
Copying repo file to 'tmp' folder...
==========================
Command start time 2016-03-17 11:48:08
scp /etc/yum.repos.d/ambari.repo
host=hadoop54.example.com, exitcode=0
Command end time 2016-03-17 11:48:08
==========================
Moving file to repo dir...
==========================
Command start time 2016-03-17 11:48:08
Connection to hadoop54.example.com closed.
SSH command execution finished
host=hadoop54.example.com, exitcode=0
Command end time 2016-03-17 11:48:08
==========================
Changing permissions for ambari.repo...
==========================
Command start time 2016-03-17 11:48:08
Connection to hadoop54.example.com closed.
SSH command execution finished
host=hadoop54.example.com, exitcode=0
Command end time 2016-03-17 11:48:08
==========================
Copying setup script file...
==========================
Command start time 2016-03-17 11:48:08
scp /usr/lib/python2.6/site-packages/ambari_server/setupAgent.py
host=hadoop54.example.com, exitcode=0
Command end time 2016-03-17 11:48:08
==========================
Running setup agent script...
==========================
Command start time 2016-03-17 11:48:08
('INFO 2016-03-17 11:48:16,908 ExitHelper.py:53 - Performing cleanup before exiting...
INFO 2016-03-17 11:48:17,338 main.py:71 - loglevel=logging.INFO
INFO 2016-03-17 11:48:17,338 main.py:71 - loglevel=logging.INFO
INFO 2016-03-17 11:48:17,340 DataCleaner.py:39 - Data cleanup thread started
INFO 2016-03-17 11:48:17,341 DataCleaner.py:120 - Data cleanup started
INFO 2016-03-17 11:48:17,342 DataCleaner.py:122 - Data cleanup finished
INFO 2016-03-17 11:48:17,401 PingPortListener.py:50 - Ping port listener started on port: 8670
INFO 2016-03-17 11:48:17,402 main.py:283 - Connecting to Ambari server at https://hadoop54.example.com:8440 (127.0.0.1)
INFO 2016-03-17 11:48:17,403 NetUtil.py:60 - Connecting to https://hadoop54.example.com:8440/ca
INFO 2016-03-17 11:48:17,516 threadpool.py:52 - Started thread pool with 3 core threads and 20 maximum threads
WARNING 2016-03-17 11:48:17,516 AlertSchedulerHandler.py:243 - [AlertScheduler] /var/lib/ambari-agent/cache/alerts/definitions.json not found or invalid. No alerts will be scheduled until registration occurs.
INFO 2016-03-17 11:48:17,516 AlertSchedulerHandler.py:139 - [AlertScheduler] Starting <ambari_agent.apscheduler.scheduler.Scheduler object at 0x283ea10>; currently running: False
INFO 2016-03-17 11:48:19,521 hostname.py:89 - Read public hostname 'hadoop54.example.com' using socket.getfqdn()
ERROR 2016-03-17 11:48:19,532 main.py:309 - Fatal exception occurred:
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/ambari_agent/main.py", line 306, in <module>
main(heartbeat_stop_callback)
File "/usr/lib/python2.6/site-packages/ambari_agent/main.py", line 297, in main
ExitHelper.execute_cleanup()
TypeError: unbound method execute_cleanup() must be called with ExitHelper instance as first argument (got nothing instead)
', None)
Connection to hadoop54.example.com closed.
SSH command execution finished
host=hadoop54.example.com, exitcode=0
Command end time 2016-03-17 11:48:20
Registering with the server...
Registration with the server failed.