Created on 07-18-2018 05:27 PM - edited 08-17-2019 11:58 PM
Hi All,
I am a noob w.r.t Hadoop.
I started with Hortonworks HDP a week ago. I am using Oracle VirtualBox 5.2.14 and Hortonworks Sandbox HDP 2.6.5.
I open Ambari by going to http://localhost:1080 and launching the Dashboard from there. It worked fine the first day and I even created a few tables in Hive. I POWERED OFF my VirtualBox that day.
Since the next day, whenever I try to access the dashboard, it first throws the error below.
502 Bad Gateway nginx/1.15.0
Then after a few tries, it opens the Ambari Dashboard. This happens every time.
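From what I have read, the 502 seems to come from the nginx proxy inside the sandbox responding before Ambari itself has finished starting. A rough way to check this (a sketch only; I am assuming the default sandbox SSH port 2222 and root login, so adjust if your port mapping differs):

# SSH into the sandbox (2222 is the default forwarded SSH port on the HDP sandbox)
ssh root@localhost -p 2222

# Check whether the Ambari server has finished starting
ambari-server status

# If it is not running yet, start it and give it a few minutes before retrying the dashboard
ambari-server start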
But then there are errors on the main page (see the attached screenshot, Ambari_Main).
I have tried to STOP ALL and then START ALL the services, but it throws the error below while starting the App Timeline Server.
stderr
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 89, in <module>
    ApplicationTimelineServer().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 43, in start
    self.configure(env) # FOR SECURITY
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 120, in locking_configure
    original_configure(obj, *args, **kw)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 54, in configure
    yarn(name='apptimelineserver')
  File "/usr/lib/ambari-agent/lib/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py", line 359, in yarn
    mode=0755
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 606, in action_create_on_execute
    self.action_delayed("create")
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 603, in action_delayed
    self.get_hdfs_resource_executor().action_delayed(action_name, self)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 330, in action_delayed
    self._assert_valid()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 289, in _assert_valid
    self.target_status = self._get_file_status(target)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 432, in _get_file_status
    list_status = self.util.run_command(target, 'GETFILESTATUS', method='GET', ignore_status_codes=['404'], assertable_result=False)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 177, in run_command
    return self._run_command(*args, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 237, in _run_command
    _, out, err = get_user_call_output(cmd, user=self.run_user, logoutput=self.logoutput, quiet=False)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/get_user_call_output.py", line 61, in get_user_call_output
    raise ExecutionFailed(err_msg, code, files_output[0], files_output[1])
resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -sS -L -w '%{http_code}' -X GET 'http://sandbox-hdp.hortonworks.com:50070/webhdfs/v1/ats/done?op=GETFILESTATUS&user.name=hdfs' 1>/tmp/tmpRDJltk 2>/tmp/tmpK9kAhT' returned 7. curl: (7) Failed connect to sandbox-hdp.hortonworks.com:50070; Connection refused 000
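If I read the last line of the traceback correctly, the actual failure is the curl call: it cannot reach the NameNode's WebHDFS endpoint at sandbox-hdp.hortonworks.com:50070 (Connection refused), which I understand to mean the NameNode/HDFS is not running. I believe the same check can be reproduced by hand from inside the sandbox:

# Same WebHDFS call the Ambari script runs; "Connection refused" means the NameNode
# is not listening on its HTTP port 50070
curl -sS -L -X GET 'http://sandbox-hdp.hortonworks.com:50070/webhdfs/v1/ats/done?op=GETFILESTATUS&user.name=hdfs'

# Check whether anything is listening on port 50070 at all
netstat -tlnp | grep 50070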
Please let me know how to fix this issue. I have been stuck on it for the past week.
Thanks in advance,
M
Created 07-18-2018 09:18 PM
From your screenshot, the following components are in maintenance mode: HDFS, Hive, HBase, Falcon, etc. The App Timeline Server needs to connect to the NameNode, which is the master process of HDFS.
So I suggest you turn off maintenance mode for HDFS and any other components like Hive and HBase, restart the stale services, and retry.
It should start normally.
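If you find it easier to script this than to click through the UI, something along these lines should work against the Ambari REST API (a sketch only; I am assuming the sandbox cluster name "Sandbox", Ambari reachable on port 8080, and admin credentials, which you may first need to set with ambari-admin-password-reset on the sandbox):

# Turn maintenance mode OFF for HDFS (repeat for HIVE, HBASE, FALCON, ...)
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Turn off maintenance mode"},"Body":{"ServiceInfo":{"maintenance_state":"OFF"}}}' \
  http://localhost:8080/api/v1/clusters/Sandbox/services/HDFS

# Start HDFS; repeat for the other stopped services
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start HDFS"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://localhost:8080/api/v1/clusters/Sandbox/services/HDFS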
Please revert