Created 02-04-2016 11:49 PM
Below is the exception I am getting:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 433, in <module>
    NameNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 102, in start
    namenode(action="start", hdfs_binary=hdfs_binary, upgrade_type=upgrade_type, env=env)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 112, in namenode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 267, in service
    Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-pp-hdp-m.out
Created 02-08-2016 03:06 PM
@Prakash
Have you tried using the internal IP instead?
Please give it a shot if not already done.
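For reference, here is a quick way to see which internal (private) IP the VM has, and what the host's own name currently resolves to. This is a diagnostic sketch assuming a typical Linux host; the commands do not change anything:

```shell
# List the IPv4 addresses assigned to this host; the private-range
# address (10.x.x.x, 172.16-31.x.x, or 192.168.x.x) is the internal IP
hostname -I

# Check what this host's name resolves to right now -- this is the
# address the Hadoop daemons will bind to / advertise
getent hosts "$(hostname)"
```

If `getent` shows a public or stale address, that is usually where the NameNode bind/start trouble comes from.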
Created 02-09-2016 03:24 AM
Please look at Ambari Views instead of HUE 😉
Created 02-09-2016 03:15 AM
Thanks @Rahul Pathak, using the internal IP of the VM did the trick.
I am new to all this; the next step is to install HUE. Any recommendations?
Thanks
Created 02-09-2016 03:22 AM
@Prakash Punj I have accepted this answer. Here is my question:
You said that the IP changes dynamically, so how are you going to make sure that nothing breaks after an IP change?
I believe the answer is to use FQDNs in the cluster configs.
Look into Ambari Views instead of HUE.
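On the FQDN point: one common approach is to pin each node's FQDN to its internal IP in /etc/hosts (or internal DNS), so cluster configs never reference a raw, changeable address. A minimal sketch; the hostname `pp-hdp-m.example.internal` and the IP below are placeholders, not values from this thread:

```shell
# Example /etc/hosts line pinning a placeholder FQDN to a placeholder
# internal IP (edit /etc/hosts as root on each node):
#
#   10.0.0.11   pp-hdp-m.example.internal   pp-hdp-m

# Verify the node reports a fully qualified name...
hostname -f

# ...and that the name resolves locally (via /etc/hosts) to the internal IP
getent hosts "$(hostname -f)"
```

With that in place, Ambari and the Hadoop configs can use the FQDN everywhere, and an IP change only requires updating the /etc/hosts (or DNS) mapping.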
Created 06-09-2016 04:32 PM
Where can I change the IP to the internal IP?
Created 07-21-2016 10:00 PM
Did you ever figure out what they meant by this? I'm not sure how to configure this aspect.
Created 01-05-2017 02:21 AM
Hi Prakash, I am hitting the exact same problem. Did you ever solve it?