
Communication issues (50070 connection refused) while launching a multi-node VM cluster using Vagrant and VirtualBox


Hello, I am asking for help troubleshooting my setup. I have provided a lot of detail below, but if you need more specifics please let me know.

Problem Introduction

In the past I have spun up a 3-node cluster on physical machines, but I have since wanted to experiment with a single machine running several VMs to approximate a cluster environment. To achieve this I followed this Hortonworks Community Connection guide: https://community.hortonworks.com/articles/39220/spinning-up-hadoop-hdp-cluster-on-local-machine-us....

I followed every step, with a few minor changes required for my CentOS 7.2 system (e.g., using systemctl enable ntpd instead of chkconfig ntpd on) and a more recent HDP release (2.6.3). My issue is that after following these steps and launching the cluster, many of the services are not running properly, which I believe is primarily due to communication problems. I have outlined some of the problems/errors below:
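
For reference, these are the systemd equivalents I ran in place of the guide's chkconfig/service commands (the same lines appear in my provisioning script further below):

    sudo yum -y install ntp
    sudo systemctl enable ntpd    # replaces "chkconfig ntpd on"
    sudo systemctl start ntpd     # replaces "service ntpd start"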

Error List

(1) Cannot connect to Files View

After starting services through Ambari (logged in as admin), I notice that although the services say they are running, I cannot see a lot of their information:

  • NameNode Started
  • SNameNode Started
  • DataNodes 2/2 Started
  • DataNodes Status n/a
  • JournalNodes 0/0 JN Live
  • NFSGateways 0/0 Started
  • NameNode Uptime Not Running
  • NameNode Heap n/a / n/a (0% used)
  • Disk Usage (DFS Used) n/a / n/a (0%)
  • Disk Usage (non DFS used) n/a / n/a (0%)
  • Disk Remaining n/a / n/a (0%)
  • Blocks (total) n/a
  • Block error n/a...

Then, when I try to open Files View, I receive the message shown below:

Failed to transition to undefined

Server status: 500

Server Message:

master1.datacluster1:50070: Connection refused (Connection refused)     

Error trace:
    
      java.net.ConnectException: master1.datacluster1:50070: Connection refused (Connection refused)
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
	at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
	at sun.net.www.http.HttpClient.New(HttpClient.java:308)
	at sun.net.www.http.HttpClient.New(HttpClient.java:326)
	at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1202)
	at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1138)
	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1032)
	at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:966)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:722)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:674)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:747)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:592)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:622)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:618)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1004)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1020)
	at org.apache.ambari.view.utils.hdfs.HdfsApi$4.run(HdfsApi.java:216)
	at org.apache.ambari.view.utils.hdfs.HdfsApi$4.run(HdfsApi.java:214)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
	at org.apache.ambari.view.utils.hdfs.HdfsApi.execute(HdfsApi.java:500)
	at org.apache.ambari.view.utils.hdfs.HdfsApi.getFileStatus(HdfsApi.java:214)
	at org.apache.ambari.view.commons.hdfs.FileOperationService.listdir(FileOperationService.java:100)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
	at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
	at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
	at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
	at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
	at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
	at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
	at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
	at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
	at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
	at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
	at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
	at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
	at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
	at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
	at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
	at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
	at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1507)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
	at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118)
	at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:84)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.apache.ambari.server.security.authorization.AmbariAuthorizationFilter.doFilter(AmbariAuthorizationFilter.java:287)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:103)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:113)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:54)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:45)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.apache.ambari.server.security.authentication.AmbariDelegatingAuthenticationFilter.doFilter(AmbariDelegatingAuthenticationFilter.java:132)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.apache.ambari.server.security.authorization.AmbariUserAuthorizationFilter.doFilter(AmbariUserAuthorizationFilter.java:91)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
	at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
	at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
	at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
	at org.apache.ambari.server.api.MethodOverrideFilter.doFilter(MethodOverrideFilter.java:72)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
	at org.apache.ambari.server.api.AmbariPersistFilter.doFilter(AmbariPersistFilter.java:47)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
	at org.apache.ambari.server.view.AmbariViewsMDCLoggingFilter.doFilter(AmbariViewsMDCLoggingFilter.java:54)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
	at org.apache.ambari.server.view.ViewThrottleFilter.doFilter(ViewThrottleFilter.java:161)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
	at org.apache.ambari.server.security.AbstractSecurityHeaderFilter.doFilter(AbstractSecurityHeaderFilter.java:125)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
	at org.apache.ambari.server.security.AbstractSecurityHeaderFilter.doFilter(AbstractSecurityHeaderFilter.java:125)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
	at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:82)
	at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:294)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:499)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:427)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
	at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:212)
	at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:201)
	at org.apache.ambari.server.controller.AmbariHandlerList.handle(AmbariHandlerList.java:150)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
	at org.eclipse.jetty.server.Server.handle(Server.java:370)
	at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
	at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:973)
	at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1035)
	at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:641)
	at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:231)
	at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
	at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:696)
	at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:53)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
	at java.lang.Thread.run(Thread.java:745)
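
Since the error points at master1.datacluster1:50070 being refused, these are the checks I intend to run on the master1 VM to see whether the NameNode web UI (50070) and RPC (8020) ports are actually listening; the log path at the end is assumed from the default HDP layout:

    # on master1.datacluster1 (vagrant ssh master1)
    ps aux | grep -i "[n]amenode"                 # is the NameNode process running at all?
    sudo ss -tlnp | grep -E ':50070|:8020'        # which addresses are the NameNode ports bound to?
    curl -v http://localhost:50070/               # does the web UI answer locally?
    curl -v http://master1.datacluster1:50070/    # ...and via the hostname that the views use?
    sudo tail -n 50 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log   # path assumed, default HDP location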


(2) Zeppelin will not start

I am running the Zeppelin service on my "ambari1.datacluster1" VM, and when I start the service I get the message below. At the very end of these errors, the connection failures to "master1.datacluster1:50070" and "...:8020" show up yet again:

stderr: /var/lib/ambari-agent/data/errors-1046.txt
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.7.0/package/scripts/master.py", line 619, in <module>
    Master().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 367, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.7.0/package/scripts/master.py", line 239, in start
    self.check_and_copy_notebook_in_hdfs(params)
  File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.7.0/package/scripts/master.py", line 202, in check_and_copy_notebook_in_hdfs
    recursive_chmod=True
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 604, in action_create_on_execute
    self.action_delayed("create")
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 601, in action_delayed
    self.get_hdfs_resource_executor().action_delayed(action_name, self)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 328, in action_delayed
    self._assert_valid()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 287, in _assert_valid
    self.target_status = self._get_file_status(target)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 430, in _get_file_status
    list_status = self.util.run_command(target, 'GETFILESTATUS', method='GET', ignore_status_codes=['404'], assertable_result=False)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 177, in run_command
    return self._run_command(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 235, in _run_command
    _, out, err = get_user_call_output(cmd, user=self.run_user, logoutput=self.logoutput, quiet=False)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py", line 61, in get_user_call_output
    raise ExecutionFailed(err_msg, code, files_output[0], files_output[1])
resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -sS -L -w '%{http_code}' -X GET 'http://master1.datacluster1:50070/webhdfs/v1/user/zeppelin/notebook?op=GETFILESTATUS&user.name=hdfs' 1>/tmp/tmpwkk1Y1 2>/tmp/tmpUvzqmU' returned 7. curl: (7) Failed connect to master1.datacluster1:50070; Connection refused
000

stdout: /var/lib/ambari-agent/data/output-1046.txt

2017-12-02 22:40:09,580 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.3.0-235 -> 2.6.3.0-235
2017-12-02 22:40:09,603 - Using hadoop conf dir: /usr/hdp/2.6.3.0-235/hadoop/conf
2017-12-02 22:40:09,834 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.3.0-235 -> 2.6.3.0-235
2017-12-02 22:40:09,841 - Using hadoop conf dir: /usr/hdp/2.6.3.0-235/hadoop/conf
2017-12-02 22:40:09,843 - Group['livy'] {}
2017-12-02 22:40:09,844 - Group['spark'] {}
2017-12-02 22:40:09,844 - Group['hdfs'] {}
2017-12-02 22:40:09,844 - Group['zeppelin'] {}
2017-12-02 22:40:09,845 - Group['hadoop'] {}
2017-12-02 22:40:09,845 - Group['users'] {}
2017-12-02 22:40:09,846 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-02 22:40:09,850 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-02 22:40:09,851 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-02 22:40:09,853 - User['superset'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-02 22:40:09,854 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-02 22:40:09,856 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2017-12-02 22:40:09,857 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-02 22:40:09,859 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2017-12-02 22:40:09,860 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2017-12-02 22:40:09,861 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop'], 'uid': None}
2017-12-02 22:40:09,863 - User['mahout'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-02 22:40:09,864 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-02 22:40:09,865 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-02 22:40:09,866 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2017-12-02 22:40:09,869 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-02 22:40:09,871 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2017-12-02 22:40:09,872 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-02 22:40:09,874 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-02 22:40:09,875 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-02 22:40:09,877 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-02 22:40:09,878 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2017-12-02 22:40:09,879 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-02 22:40:09,881 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-12-02 22:40:09,888 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2017-12-02 22:40:09,888 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-12-02 22:40:09,890 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-02 22:40:09,891 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-02 22:40:09,892 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2017-12-02 22:40:09,901 - call returned (0, '1021')
2017-12-02 22:40:09,902 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1021'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-12-02 22:40:09,909 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1021'] due to not_if
2017-12-02 22:40:09,910 - Group['hdfs'] {}
2017-12-02 22:40:09,910 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2017-12-02 22:40:09,911 - FS Type: 
2017-12-02 22:40:09,912 - Directory['/etc/hadoop'] {'mode': 0755}
2017-12-02 22:40:09,941 - File['/usr/hdp/2.6.3.0-235/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-12-02 22:40:09,942 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-12-02 22:40:09,965 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-12-02 22:40:09,978 - Skipping Execute[('setenforce', '0')] due to only_if
2017-12-02 22:40:09,979 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-12-02 22:40:09,982 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-12-02 22:40:09,983 - Changing owner for /var/run/hadoop from 1017 to root
2017-12-02 22:40:09,983 - Changing group for /var/run/hadoop from 1005 to root
2017-12-02 22:40:09,984 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-12-02 22:40:09,989 - File['/usr/hdp/2.6.3.0-235/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2017-12-02 22:40:09,992 - File['/usr/hdp/2.6.3.0-235/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2017-12-02 22:40:10,002 - File['/usr/hdp/2.6.3.0-235/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-12-02 22:40:10,018 - File['/usr/hdp/2.6.3.0-235/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-12-02 22:40:10,020 - File['/usr/hdp/2.6.3.0-235/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-12-02 22:40:10,021 - File['/usr/hdp/2.6.3.0-235/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-12-02 22:40:10,028 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2017-12-02 22:40:10,035 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-12-02 22:40:11,128 - call['ambari-python-wrap /usr/bin/hdp-select status spark-client'] {'timeout': 20}
2017-12-02 22:40:11,245 - call returned (0, 'spark-client - 2.6.3.0-235')
2017-12-02 22:40:11,287 - Using hadoop conf dir: /usr/hdp/2.6.3.0-235/hadoop/conf
2017-12-02 22:40:11,291 - Directory['/var/log/zeppelin'] {'owner': 'zeppelin', 'group': 'zeppelin', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-12-02 22:40:11,294 - Directory['/var/run/zeppelin'] {'owner': 'zeppelin', 'create_parents': True, 'group': 'zeppelin', 'mode': 0755, 'cd_access': 'a'}
2017-12-02 22:40:11,295 - Directory['/usr/hdp/current/zeppelin-server'] {'owner': 'zeppelin', 'group': 'zeppelin', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-12-02 22:40:11,297 - Execute[('chown', '-R', u'zeppelin:zeppelin', u'/var/run/zeppelin')] {'sudo': True}
2017-12-02 22:40:11,309 - XmlConfig['zeppelin-site.xml'] {'owner': 'zeppelin', 'group': 'zeppelin', 'conf_dir': '/etc/zeppelin/conf', 'configurations': ...}
2017-12-02 22:40:11,332 - Generating config: /etc/zeppelin/conf/zeppelin-site.xml
2017-12-02 22:40:11,333 - File['/etc/zeppelin/conf/zeppelin-site.xml'] {'owner': 'zeppelin', 'content': InlineTemplate(...), 'group': 'zeppelin', 'mode': None, 'encoding': 'UTF-8'}
2017-12-02 22:40:11,362 - File['/etc/zeppelin/conf/zeppelin-env.sh'] {'owner': 'zeppelin', 'content': InlineTemplate(...), 'group': 'zeppelin'}
2017-12-02 22:40:11,365 - File['/etc/zeppelin/conf/shiro.ini'] {'owner': 'zeppelin', 'content': InlineTemplate(...), 'group': 'zeppelin'}
2017-12-02 22:40:11,367 - File['/etc/zeppelin/conf/log4j.properties'] {'owner': 'zeppelin', 'content': ..., 'group': 'zeppelin'}
2017-12-02 22:40:11,369 - Directory['/etc/zeppelin/conf/external-dependency-conf'] {'owner': 'zeppelin', 'group': 'zeppelin', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-12-02 22:40:11,370 - XmlConfig['hbase-site.xml'] {'group': 'zeppelin', 'conf_dir': '/etc/zeppelin/conf/external-dependency-conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'zeppelin', 'configurations': ...}
2017-12-02 22:40:11,379 - Generating config: /etc/zeppelin/conf/external-dependency-conf/hbase-site.xml
2017-12-02 22:40:11,379 - File['/etc/zeppelin/conf/external-dependency-conf/hbase-site.xml'] {'owner': 'zeppelin', 'content': InlineTemplate(...), 'group': 'zeppelin', 'mode': 0644, 'encoding': 'UTF-8'}
2017-12-02 22:40:11,422 - XmlConfig['hdfs-site.xml'] {'group': 'zeppelin', 'conf_dir': '/etc/zeppelin/conf/external-dependency-conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'owner': 'zeppelin', 'configurations': ...}
2017-12-02 22:40:11,433 - Generating config: /etc/zeppelin/conf/external-dependency-conf/hdfs-site.xml
2017-12-02 22:40:11,433 - File['/etc/zeppelin/conf/external-dependency-conf/hdfs-site.xml'] {'owner': 'zeppelin', 'content': InlineTemplate(...), 'group': 'zeppelin', 'mode': 0644, 'encoding': 'UTF-8'}
2017-12-02 22:40:11,484 - Writing File['/etc/zeppelin/conf/external-dependency-conf/hdfs-site.xml'] because contents don't match
2017-12-02 22:40:11,485 - XmlConfig['core-site.xml'] {'group': 'zeppelin', 'conf_dir': '/etc/zeppelin/conf/external-dependency-conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'zeppelin', 'configurations': ...}
2017-12-02 22:40:11,494 - Generating config: /etc/zeppelin/conf/external-dependency-conf/core-site.xml
2017-12-02 22:40:11,494 - File['/etc/zeppelin/conf/external-dependency-conf/core-site.xml'] {'owner': 'zeppelin', 'content': InlineTemplate(...), 'group': 'zeppelin', 'mode': 0644, 'encoding': 'UTF-8'}
2017-12-02 22:40:11,526 - Execute[('chown', '-R', u'zeppelin:zeppelin', '/etc/zeppelin')] {'sudo': True}
2017-12-02 22:40:11,533 - Execute[('chown', '-R', u'zeppelin:zeppelin', u'/usr/hdp/current/zeppelin-server/notebook')] {'sudo': True}
2017-12-02 22:40:11,541 - call['kinit -kt  ; hdfs --config /usr/hdp/2.6.3.0-235/hadoop/conf dfs -test -d /user/zeppelin/notebook;echo $?'] {'user': 'zeppelin'}
2017-12-02 22:40:19,477 - call returned (0, '-bash: kinit: command not found\ntest: Call From ambari1.datacluster1/127.0.0.1 to master1.datacluster1:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused\n1')
2017-12-02 22:40:19,478 - HdfsResource['/user/zeppelin/notebook'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/2.6.3.0-235/hadoop/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://master1.datacluster1:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'zeppelin', 'recursive_chown': True, 'hadoop_conf_dir': '/usr/hdp/2.6.3.0-235/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'recursive_chmod': True}
2017-12-02 22:40:19,481 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://master1.datacluster1:50070/webhdfs/v1/user/zeppelin/notebook?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpwkk1Y1 2>/tmp/tmpUvzqmU''] {'logoutput': None, 'quiet': False}
2017-12-02 22:40:19,581 - call returned (7, '')

Command failed after 1 tries
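
To take Ambari out of the picture, I can reproduce the exact WebHDFS call that the Zeppelin start script runs, from the ambari1 VM (the same curl that returns exit code 7 above):

    # on ambari1.datacluster1 (vagrant ssh ambari1)
    curl -v -X GET "http://master1.datacluster1:50070/webhdfs/v1/user/zeppelin/notebook?op=GETFILESTATUS&user.name=hdfs"
    # and the NameNode RPC port that the 'hdfs dfs -test' call above complains about
    nc -zv master1.datacluster1 8020              # nc may need: sudo yum -y install nmap-ncat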

(3) Cannot access Hive View

Again, when I try to access this view, I notice the error:

Service 'userhome' check failed: master1.datacluster1:50070: Connection refused (Connection refused)

or

Cannot open a hive connection with connect string jdbc:hive2://master1.datacluster1:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;hive.server2.proxy.user=admin

Service 'userhome' check failed:
java.net.ConnectException: master1.datacluster1:50070: Connection refused (Connection refused)
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
	at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
	at sun.net.www.http.HttpClient.New(HttpClient.java:308)
	at sun.net.www.http.HttpClient.New(HttpClient.java:326)
	at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1202)
	at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1138)
	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1032)
	at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:966)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:722)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:674)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:747)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:592)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:622)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:618)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:1004)
	at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1020)
	at org.apache.ambari.view.utils.hdfs.HdfsApi$4.run(HdfsApi.java:216)
	at org.apache.ambari.view.utils.hdfs.HdfsApi$4.run(HdfsApi.java:214)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
	at org.apache.ambari.view.utils.hdfs.HdfsApi.execute(HdfsApi.java:500)
	at org.apache.ambari.view.utils.hdfs.HdfsApi.getFileStatus(HdfsApi.java:214)
	at org.apache.ambari.view.commons.hdfs.UserService.homeDir(UserService.java:67)
	at org.apache.ambari.view.hive2.resources.files.FileService.userhomeSmokeTest(FileService.java:256)
	at org.apache.ambari.view.hive2.HelpService.userhomeStatus(HelpService.java:92)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
	at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
	at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
	at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
	at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
	at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
	at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
	at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
	at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
	at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
	at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
	at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
	at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
	at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
	at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
	at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
	at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
	at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1507)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
	at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118)
	at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:84)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.apache.ambari.server.security.authorization.AmbariAuthorizationFilter.doFilter(AmbariAuthorizationFilter.java:287)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:103)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:113)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:54)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:45)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.apache.ambari.server.security.authentication.AmbariDelegatingAuthenticationFilter.doFilter(AmbariDelegatingAuthenticationFilter.java:132)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.apache.ambari.server.security.authorization.AmbariUserAuthorizationFilter.doFilter(AmbariUserAuthorizationFilter.java:91)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
	at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
	at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
	at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
	at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
	at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
	at org.apache.ambari.server.api.MethodOverrideFilter.doFilter(MethodOverrideFilter.java:72)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
	at org.apache.ambari.server.api.AmbariPersistFilter.doFilter(AmbariPersistFilter.java:47)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
	at org.apache.ambari.server.view.AmbariViewsMDCLoggingFilter.doFilter(AmbariViewsMDCLoggingFilter.java:54)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
	at org.apache.ambari.server.view.ViewThrottleFilter.doFilter(ViewThrottleFilter.java:161)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
	at org.apache.ambari.server.security.AbstractSecurityHeaderFilter.doFilter(AbstractSecurityHeaderFilter.java:125)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
	at org.apache.ambari.server.security.AbstractSecurityHeaderFilter.doFilter(AbstractSecurityHeaderFilter.java:125)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
	at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:82)
	at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:294)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:499)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:427)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
	at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:212)
	at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:201)
	at org.apache.ambari.server.controller.AmbariHandlerList.handle(AmbariHandlerList.java:150)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
	at org.eclipse.jetty.server.Server.handle(Server.java:370)
	at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
	at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:973)
	at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1035)
	at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:641)
	at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:231)
	at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
	at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:696)
	at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:53)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
	at java.lang.Thread.run(Thread.java:745)
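
For the Hive side, the check I can run from the master1 VM (assuming the HDP Hive client is present there, since HiveServer2 runs on that node) is a direct beeline connection using the same ZooKeeper-discovery JDBC string the view reports; the 'userhome' check itself is just WebHDFS again, so the same port 50070 test applies:

    # on master1.datacluster1 (vagrant ssh master1)
    beeline -u "jdbc:hive2://master1.datacluster1:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2" -n admin
    curl -v "http://master1.datacluster1:50070/webhdfs/v1/user/admin?op=GETFILESTATUS"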

Cluster Configuration

To give a better picture of how my cluster is arranged, I have provided the following information:

(1) Distribution of services to nodes

This is the information I have regarding my nodes:

Admin Name : admin

Cluster Name : hdp_vmcluster

Total Hosts : 4 (4 new)

Repositories:

Services:

  • HDFS
    • DataNode : 2 hosts
    • NameNode : master1.datacluster1
    • NFSGateway : 0 host
    • SNameNode : ambari1.datacluster1
  • YARN + MapReduce2
    • App Timeline Server : master1.datacluster1
    • NodeManager : 2 hosts
    • ResourceManager : master1.datacluster1
  • Tez
    • Clients : 2 hosts
  • Hive
    • Metastore : master1.datacluster1
    • HiveServer2 : master1.datacluster1
    • WebHCat Server : master1.datacluster1
    • Database : New MySQL Database
  • HBase
    • Master : master1.datacluster1
    • RegionServer : 2 hosts
    • Phoenix Query Server : 0 host
  • Pig
    • Clients : 2 hosts
  • Sqoop
    • Clients : 2 hosts
  • Oozie
    • Server : master1.datacluster1
    • Database : New Derby Database
  • ZooKeeper
    • Server : master1.datacluster1
  • Falcon
    • Server : master1.datacluster1
  • Storm
    • DRPC Server : master1.datacluster1
    • Nimbus : master1.datacluster1
    • UI Server : master1.datacluster1
    • Supervisor : 2 hosts
  • Flume
    • Flume : 2 hosts
  • Ambari Infra
    • Infra Solr Instance : ambari1.datacluster1
  • Ambari Metrics
    • Metrics Collector : master1.datacluster1
    • Grafana : ambari1.datacluster1
  • SmartSense
    • Activity Analyzer : ambari1.datacluster1
    • Activity Explorer : ambari1.datacluster1
    • HST Server : master1.datacluster1
  • Spark
    • Livy Server : 0 host
    • History Server : ambari1.datacluster1
    • Thrift Server : 0 host
  • Spark2
    • Livy for Spark2 Server : 0 host
    • History Server : ambari1.datacluster1
    • Thrift Server : 0 host
  • Zeppelin Notebook
    • Notebook : ambari1.datacluster1
  • Druid
    • Broker : ambari1.datacluster1
    • Coordinator : ambari1.datacluster1
    • Historical : 2 hosts
    • MiddleManager : 2 hosts
    • Overlord : ambari1.datacluster1
    • Router : ambari1.datacluster1
  • Mahout
    • Clients : 2 hosts
  • Slider
    • Clients : 2 hosts
  • Superset
    • Superset : ambari1.datacluster1

(2) Vagrantfile

Initially I followed the exact file layout from the aforementioned guide. But seeing that many of my services were reporting "errors connecting to services", I decided to try opening some more ports, even though very few were initially mentioned in the guide, which I found odd; I am not sure what is normal, though I know the sandbox has many ports open. I based which ports to open on each "node" on the Hortonworks port reference: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_reference/content/hdfs-ports.html

Before adding these ports I had many more connection errors, and services like HBase and Oozie were trying to reach ports that were not forwarded, so this seemed to solve some, but not all, of my connection troubles.

I am using a private network, and since a few of the same ports need to be open on multiple machines, I added each machine's IP address to its forwarded-port entries as well. Initially I was using a subnet that another router in my house uses; Vagrant didn't like this, so I made one that is unique to the virtual environment (192.168.50.###). The firewall should be off and NTP should be running.
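
One thing I want to rule out is hostname resolution inside the guests, since the Zeppelin log above shows the call coming from "ambari1.datacluster1/127.0.0.1". These are the basic checks I plan to run on each VM (the Vagrantfile itself follows below):

    # on each VM, e.g. vagrant ssh master1
    hostname -f                                   # should print the FQDN, e.g. master1.datacluster1
    cat /etc/hosts                                # does the FQDN map to its 192.168.50.x address or to 127.0.0.1?
    getent hosts master1.datacluster1 ambari1.datacluster1 slave1.datacluster1 slave2.datacluster1
    sudo systemctl status firewalld               # confirm the firewall really is off
    ip addr show                                  # confirm the private-network interface has the expected 192.168.50.x address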

# -*- mode: ruby -*-
# vi: set ft=ruby :

$script = <<SCRIPT

    sudo yum -y install ntp
    sudo systemctl enable ntpd
    sudo systemctl start ntpd
    sudo systemctl disable firewalld
    sudo service firewalld stop
    sudo setenforce 0
    sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
    sudo sh -c 'echo "* soft nofile 10000" >> /etc/security/limits.conf'
    sudo sh -c 'echo "* hard nofile 10000" >> /etc/security/limits.conf'
    sudo sh -c 'echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag'
    sudo sh -c 'echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled'
    
SCRIPT


# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.

Vagrant.configure("2") do |config|

  config.vm.provision "shell", inline: $script
    
  #Ambari1 Server
  config.vm.define :ambari1 do |a1|
   a1.vm.hostname = "ambari1.datacluster1"
   a1.vm.network :private_network, ip: "192.168.50.220"
   a1.vm.provider :virtualbox do |vb|
    vb.memory = "2048"
   end

	#Ambari Based Ports
   a1.vm.network "forwarded_port", guest: 8080, host: 8080
   a1.vm.network "forwarded_port", guest: 80, host: 80
	
	#HDFS Ports
   a1.vm.network "forwarded_port", guest: 50090, host: 50090, host_ip: "192.168.50.220",  id: "Secondary NN"

	#MR Ports
   a1.vm.network "forwarded_port", guest: 10020, host: 10020, host_ip: "192.168.50.220", id: "MapReduceJobHist"
   a1.vm.network "forwarded_port", guest: 19888, host: 19888, host_ip: "192.168.50.220", id: "MRJobHistweb"
   a1.vm.network "forwarded_port", guest: 13562, host: 13562, host_ip: "192.168.50.220", id: "MR Shuffle Port"

	#Zeppelin Ports

   a1.vm.network "forwarded_port", guest: 9995, host: 9995, host_ip: "192.168.50.220", id: "Zeppelin UI"
 
	#Spark Ports
   a1.vm.network "forwarded_port", guest: 10015, host: 10015, host_ip: "192.168.50.220", id: "Spark1"
   a1.vm.network "forwarded_port", guest: 10016, host: 10016, host_ip: "192.168.50.220", id: "Spark2"
   a1.vm.network "forwarded_port", guest: 4040, host: 4040, host_ip: "192.168.50.220", id: "Spark3"
   a1.vm.network "forwarded_port", guest: 18080, host: 18080, host_ip: "192.168.50.220", id: "SparkHisServ"

	#Zoo Ports
   a1.vm.network "forwarded_port", guest: 2888, host: 2888, host_ip: "192.168.50.220", id: "Zoo1"
   a1.vm.network "forwarded_port", guest: 3888, host: 3888, host_ip: "192.168.50.220", id: "Zoo2"
   a1.vm.network "forwarded_port", guest: 2181, host: 2181, host_ip: "192.168.50.220", id: "Zoo3"

   end

  #Master1
  config.vm.define :master1 do |m1|
   m1.vm.hostname = "master1.datacluster1"
   m1.vm.network :private_network, ip: "192.168.50.221"
   m1.vm.provider :virtualbox do |vb|
    vb.memory = "4096"
   end
	#HDFS Ports
   m1.vm.network "forwarded_port", guest: 50070, host: 50070, host_ip: "192.168.50.221",  id: "NameNode WebUI"
   m1.vm.network "forwarded_port", guest: 50470, host: 50470, host_ip: "192.168.50.221", id: "NameNode Secure WebUI"
   m1.vm.network "forwarded_port", guest: 9000, host: 9000, host_ip: "192.168.50.221",  id: "NN metadata1"
   m1.vm.network "forwarded_port", guest: 8020, host: 8020, host_ip: "192.168.50.221", id: "NN metadata2"

	#Hive Ports
   m1.vm.network "forwarded_port", guest: 10000, host: 10000, host_ip: "192.168.50.221", id: "HiveServer"
   m1.vm.network "forwarded_port", guest: 9083, host: 9083, host_ip: "192.168.50.221", id: "Hive Metastore"
   m1.vm.network "forwarded_port", guest: 9999, host: 9999, host_ip: "192.168.50.221", id: "Hive Metastore"
   m1.vm.network "forwarded_port", guest: 3306, host: 3306, host_ip: "192.168.50.221", id: "MySQL"
   m1.vm.network "forwarded_port", guest: 50111, host: 50111, host_ip: "192.168.50.221", id: "WebHCat"


	#HBase
   m1.vm.network "forwarded_port", guest: 16000, host: 16000, host_ip: "192.168.50.221", id: "HMaster"
   m1.vm.network "forwarded_port", guest: 16010, host: 16010, host_ip: "192.168.50.221", id: "HMaster Web UI"
   m1.vm.network "forwarded_port", guest: 8085, host: 8085, host_ip: "192.168.50.221", id: "HBase Rest"
   m1.vm.network "forwarded_port", guest: 9090, host: 9090, host_ip: "192.168.50.221", id: "Thrift Server"
   m1.vm.network "forwarded_port", guest: 9095, host: 9095, host_ip: "192.168.50.221", id: "Thrift Server2"

	#MR Ports
   m1.vm.network "forwarded_port", guest: 10020, host: 10020, host_ip: "192.168.50.221", id: "MapReduceJobHist"
   m1.vm.network "forwarded_port", guest: 19888, host: 19888, host_ip: "192.168.50.221", id: "MRJobHistweb"
   m1.vm.network "forwarded_port", guest: 13562, host: 13562, host_ip: "192.168.50.221", id: "MR Shuffle Port"

	#Oozie Ports
   m1.vm.network "forwarded_port", guest: 11000, host: 11000, host_ip: "192.168.50.221", id: "Oozie"
   m1.vm.network "forwarded_port", guest: 11001, host: 11001, host_ip: "192.168.50.221", id: "OozieAdmin"
   m1.vm.network "forwarded_port", guest: 11443, host: 11443, host_ip: "192.168.50.221", id: "Oozie secure"
  
	#Sqoop Ports
#Repeated port...   m1.vm.network "forwarded_port", guest: 16000, host: 16000, host_ip: "192.168.50.221", id: "Sqoop"

	#Storm Ports
   m1.vm.network "forwarded_port", guest: 8000, host: 8000, host_ip: "192.168.50.221", id: "LogViewer"
   m1.vm.network "forwarded_port", guest: 8744, host: 8744, host_ip: "192.168.50.221", id: "StormUI"
   m1.vm.network "forwarded_port", guest: 3772, host: 3772, host_ip: "192.168.50.221", id: "DRPC1"
   m1.vm.network "forwarded_port", guest: 3773, host: 3773, host_ip: "192.168.50.221", id: "DRPC2"
   m1.vm.network "forwarded_port", guest: 6627, host: 6627, host_ip: "192.168.50.221", id: "Nimbus"
   m1.vm.network "forwarded_port", guest: 6700, host: 6700, host_ip: "192.168.50.221", id: "Superv 1"
   m1.vm.network "forwarded_port", guest: 6701, host: 6701, host_ip: "192.168.50.221", id: "Superv 2"
   m1.vm.network "forwarded_port", guest: 6702, host: 6702, host_ip: "192.168.50.221", id: "Supver 3"
   m1.vm.network "forwarded_port", guest: 6703, host: 6703, host_ip: "192.168.50.221", id: "Supver 4"

	#Tez
   m1.vm.network "forwarded_port", guest: 10030, host: 10030, host_ip: "192.168.50.221", id: "Tez1"
   m1.vm.network "forwarded_port", guest: 12999, host: 12999, host_ip: "192.168.50.221", id: "Tez2"

	#Yarn Ports
   m1.vm.network "forwarded_port", guest: 8050, host: 8050, host_ip: "192.168.50.221", id: "YARN RM"
   m1.vm.network "forwarded_port", guest: 8188, host: 8188, host_ip: "192.168.50.221", id: "YARN Timeline"
   m1.vm.network "forwarded_port", guest: 8032, host: 8032, host_ip: "192.168.50.221", id: "Yarn RM2"
   m1.vm.network "forwarded_port", guest: 8088, host: 8088, host_ip: "192.168.50.221", id: "Yarn RM3"
   m1.vm.network "forwarded_port", guest: 8025, host: 8024, host_ip: "192.168.50.221", id: "YARN RM4"
   m1.vm.network "forwarded_port", guest: 8030, host: 8030, host_ip: "192.168.50.221", id: "Scheduler"
   m1.vm.network "forwarded_port", guest: 8141, host: 8141, host_ip: "192.168.50.221", id: "Yarn RM5"
   m1.vm.network "forwarded_port", guest: 45454, host: 45454, host_ip: "192.168.50.221", id: "Yarn NM1"
   m1.vm.network "forwarded_port", guest: 8042, host: 8042, host_ip: "192.168.50.221", id: "Yarn NM2"
   m1.vm.network "forwarded_port", guest: 8190, host: 8190, host_ip: "192.168.50.221", id: "TL Server1"
   m1.vm.network "forwarded_port", guest: 10200, host: 10200, host_ip: "192.168.50.221", id: "TL Server2"

	#Zookeeper Ports
   m1.vm.network "forwarded_port", guest: 2888, host: 2888, host_ip: "192.168.50.221", id: "Zoo1"
   m1.vm.network "forwarded_port", guest: 3888, host: 3888, host_ip: "192.168.50.221", id: "Zoo2"
   m1.vm.network "forwarded_port", guest: 2181, host: 2181, host_ip: "192.168.50.221", id: "Zoo3"
  end

  #Slave1
  config.vm.define :slave1 do |s1|
   s1.vm.hostname = "slave1.datacluster1"
   s1.vm.network :private_network, ip: "192.168.50.230"
   s1.vm.provider :virtualbox do |vb|
    vb.memory = "2048"
   end
	#HDFS Ports
   s1.vm.network "forwarded_port", guest: 50075, host: 50075, host_ip: "192.168.50.230", id: "DN WebUI"
   s1.vm.network "forwarded_port", guest: 50475, host: 50475, host_ip: "192.168.50.230", id: "DN Secure"
   s1.vm.network "forwarded_port", guest: 50010, host: 50010, host_ip: "192.168.50.230", id: "DN Transfer"
   s1.vm.network "forwarded_port", guest: 1019, host: 1019, host_ip: "192.168.50.230", id: "DN Trans Sec"
   s1.vm.network "forwarded_port", guest: 50020, host: 50020, host_ip: "192.168.50.230", id: "Met Ops"
	
	#HBase Ports
   s1.vm.network "forwarded_port", guest: 16020, host: 16020, host_ip: "192.168.50.230", id: "Region Server HBase"
   s1.vm.network "forwarded_port", guest: 16030, host: 16030, host_ip: "192.168.50.230", id: "Region Server HBase2"

	#MR Ports
   s1.vm.network "forwarded_port", guest: 10020, host: 10020, host_ip: "192.168.50.230", id: "MapReduceJobHist"
  s1.vm.network "forwarded_port", guest: 19888, host: 19888, host_ip: "192.168.50.230", id: "MRJobHistweb"
   s1.vm.network "forwarded_port", guest: 13562, host: 13562, host_ip: "192.168.50.230", id: "MR Shuffle Port"

	#Zookeeper Ports
   s1.vm.network "forwarded_port", guest: 2888, host: 2888, host_ip: "192.168.50.230", id: "Zoo1"
   s1.vm.network "forwarded_port", guest: 3888, host: 3888, host_ip: "192.168.50.230", id: "Zoo2"
   s1.vm.network "forwarded_port", guest: 2181, host: 2181, host_ip: "192.168.50.230", id: "Zoo3"

  end
 
  #Slave2
  config.vm.define :slave2 do |s2|
   s2.vm.hostname = "slave2.datacluster1"
   s2.vm.network :private_network, ip: "192.168.50.231"
   s2.vm.provider :virtualbox do |vb|
    vb.memory = "2048"
   end

	#HDFS Ports
   s2.vm.network "forwarded_port", guest: 50075, host: 50075, host_ip: "192.168.50.231", id: "DN WebUI"
   s2.vm.network "forwarded_port", guest: 50475, host: 50475, host_ip: "192.168.50.231",  id: "DN Secure"
   s2.vm.network "forwarded_port", guest: 50010, host: 50010, host_ip: "192.168.50.231", id: "DN Transfer"
   s2.vm.network "forwarded_port", guest: 1019, host: 1019, host_ip: "192.168.50.231", id: "DN Trans Sec"
   s2.vm.network "forwarded_port", guest: 50020, host: 50020, host_ip: "192.168.50.231", id: "Met Ops"

	#HBase Ports
   s2.vm.network "forwarded_port", guest: 16020, host: 16020, host_ip: "192.168.50.231", id: "Region Server HBase"
   s2.vm.network "forwarded_port", guest: 16030, host: 16030, host_ip: "192.168.50.231", id: "Region Server HBase2"

	#MR Ports
   s2.vm.network "forwarded_port", guest: 10020, host: 10020, host_ip: "192.168.50.231", id: "MapReduceJobHist"
   s2.vm.network "forwarded_port", guest: 19888, host: 19888, host_ip: "192.168.50.231", id: "MRJobHistweb"
   s2.vm.network "forwarded_port", guest: 13562, host: 13562, host_ip: "192.168.50.231", id: "MR Shuffle Port"

	#Zookeeper
   s2.vm.network "forwarded_port", guest: 2888, host: 2888, host_ip: "192.168.50.231", id: "Zoo1"
   s2.vm.network "forwarded_port", guest: 3888, host: 3888, host_ip: "192.168.50.231", id: "Zoo2"
   s2.vm.network "forwarded_port", guest: 2181, host: 2181, host_ip: "192.168.50.231", id: "Zoo3"
   

  end


  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://vagrantcloud.com/search.
  config.vm.box = "bento/centos-7.2"

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false
  #
  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # NOTE: This will enable public access to the opened port
  # config.vm.network "forwarded_port", guest: 80, host: 8080

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine and only allow access
  # via 127.0.0.1 to disable public access
  # config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network "private_network", ip: "192.168.33.10"

  # Create a public network, which generally matched to bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network "public_network"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # config.vm.synced_folder "../data", "/vagrant_data"

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  # config.vm.provider "virtualbox" do |vb|
  #   # Display the VirtualBox GUI when booting the machine
  #   vb.gui = true
  #
  #   # Customize the amount of memory on the VM:
  #   vb.memory = "1024"
  # end
  #
  # View the documentation for the provider you are using for more
  # information on available options.

  # Enable provisioning with a shell script. Additional provisioners such as
  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
  # documentation for more information about their specific syntax and use.
  # config.vm.provision "shell", inline: <<-SHELL
  #   apt-get update
  #   apt-get install -y apache2
  # SHELL
end
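
(A quick sanity check after vagrant up is to list what Vagrant actually registered for each machine and then probe the NameNode UI on master1's address. vagrant port needs a reasonably recent Vagrant; the IP below is the host_ip used for master1 in the file above, so treat this as a sketch rather than a guaranteed result:)

vagrant port master1
vagrant port slave1
vagrant port slave2

# If the NameNode were up and listening on that address, its web UI should answer:
curl -sI http://192.168.50.221:50070/ | head -n 1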

(3) HDFS Config files

I have also looked at changing different config files through Ambari; here is a summary:

Advanced core-site

fs.defaultFS: hdfs://master1.datacluster1:8020

Advanced hdfs-site

Most of the node addresses look like 0.0.0.0 with a respective port, for example:

dfs.datanode.http.address: 0.0.0.0:50075

Custom core-site

hadoop.proxyuser.vagrant.hosts, hadoop.proxyuser.vagrant.groups, hadoop.proxyuser.root.hosts, and hadoop.proxyuser.root.groups are all set to *
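
(To see what HDFS actually resolves these to at runtime, rather than what the Ambari UI shows, the standard hdfs getconf tool can be run on master1; dfs.namenode.http-address is the property behind the 50070 web UI:)

hdfs getconf -confKey fs.defaultFS
hdfs getconf -confKey dfs.namenode.http-address
hdfs getconf -confKey dfs.datanode.http.address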

Current Items Tried for Troubleshooting

I have tested several different things in trying to find what is causing the communication issues. I have looked at suggestions from the forum and other help documents, but no one seemed to have the same setup and issues as mine. This is what I have done:

(1) Check SSH: I can SSH successfully from the "ambari1" node, both as root and as a regular user, to any other node in the VM cluster (a quick re-check is sketched below)
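
(The re-check loop, using the FQDNs defined in the Vagrantfile above, run from ambari1:)

for h in master1.datacluster1 slave1.datacluster1 slave2.datacluster1; do
  ssh -o ConnectTimeout=5 root@$h hostname
done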

(2) Ensure the firewall is off: I have run this command on each node and it reports that firewalld is inactive (dead), as intended:

[vagrant@slave1 ~]$ systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
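
(If it were still active on any node, the standard way to stop it and keep it off across reboots would be:)

sudo systemctl stop firewalld
sudo systemctl disable firewalld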

(3) Ping other nodes: all of my nodes can be pinged from each other, even when I am not connected to the internet (purely local); for example:
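
(A loop that checks both name resolution and reachability; getent hosts shows which IP each FQDN actually resolves to, which would also catch a stray 127.0.0.1 entry in /etc/hosts:)

for h in master1.datacluster1 slave1.datacluster1 slave2.datacluster1; do
  getent hosts $h
  ping -c 1 -W 2 $h
done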

(4) NTP is on: the ntpd service is enabled and running on each node (checked as sketched below).

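(The commands to confirm this on each node, assuming ntpd rather than chronyd is the time service in use, as per the guide:)

systemctl status ntpd
ntpq -p    # lists the peers ntpd is actually synchronizing against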

(5) Check if I can connect to forwarded ports: since errors show up for master1.datacluster1:50070, I tried the following commands and they worked where I would expect them to, which confused me:

[vagrant@master1 ~]$ telnet master1 50070
Trying 127.0.0.1...
Connected to master1.

[vagrant@master1 ~]$ telnet master1 8020
Trying 127.0.0.1...
Connected to master1.

[vagrant@slave1 ~]$ telnet slave1 50075
Trying 127.0.0.1...
Connected to slave1.

[vagrant@slave1 ~]$ telnet slave1 50070
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
Trying 192.168.50.230...
telnet: connect to address 192.168.50.230: Connection refused
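
(One more check that seems useful here: "Trying 127.0.0.1..." in the output above shows that master1 resolves to the loopback address rather than the private-network IP, so it is worth confirming which address the NameNode web port is actually bound to and which IP the FQDN resolves to. Run on master1; ss is part of iproute2 on CentOS 7:)

getent hosts master1.datacluster1     # which IP does the FQDN resolve to?
sudo ss -tlnp | grep 50070            # which address is the NameNode UI listening on?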

If anyone knows how to resolve this issue so that I can access the filesystem and have my system fully operational, I would greatly appreciate it. I am currently at a loss as to what to try next.

Thank you in advance.

1 ACCEPTED SOLUTION

avatar
Explorer

After a bit more tinkering around, the cluster appears to have no more communication issues. I checked the /etc/hosts file and, although FQDNs were present for all nodes, I noticed that a 127.0.0.1 entry carried the same hostnames as one of the node entries. For example:

127.0.0.1 slavenode1 slave1
192.168.##.### slavenode1 slave1

After removing the node aliases from the 127.0.0.1 line, communication appears to work fine: the hostnames now resolve to the correct IP addresses, which have all the proper ports forwarded.
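
(For anyone hitting the same thing, a cleaned-up /etc/hosts looks roughly like this. The IPs are taken from the Vagrantfile posted above, assuming master1's private-network IP matches the 192.168.50.221 host_ip used in its forwarded_port entries; the ambari node's entry is omitted because its IP isn't shown there. The key point is that the node names appear only on their private-network lines, never on 127.0.0.1:)

127.0.0.1       localhost localhost.localdomain
192.168.50.221  master1.datacluster1 master1
192.168.50.230  slave1.datacluster1 slave1
192.168.50.231  slave2.datacluster1 slave2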

