
History Server fails to start

Contributor

I deployed a 3-node cluster with Ambari 2.4.0.1 and am unable to get the History Server to start.

Error:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 190, in <module>
    HistoryServer().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/historyserver.py", line 101, in start
    host_sys_prepped=params.host_sys_prepped)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/copy_tarball.py", line 257, in copy_to_hdfs
    replace_existing_files=replace_existing_files,
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 459, in action_create_on_execute
    self.action_delayed("create")
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 456, in action_delayed
    self.get_hdfs_resource_executor().action_delayed(action_name, self)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 255, in action_delayed
    self._create_resource()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 269, in _create_resource
    self._create_file(self.main_resource.resource.target, source=self.main_resource.resource.source, mode=self.mode)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 322, in _create_file
    self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, assertable_result=False, file_to_put=source, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 192, in run_command
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/2.5.3.0-37/hadoop/mapreduce.tar.gz 'http://test-1.c.i.internal:50070/webhdfs/v1/hdp/apps/2.5.3.0-37/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444'' returned status_code=500.

4 REPLIES


@vsubramanian - Can you verify whether the HDFS service was up when you tried to start the History Server, and if so, whether the NameNode was out of safe mode? Another thing to check is that /usr/hdp/2.5.3.0-37/hadoop/mapreduce.tar.gz exists and that you are able to copy the file to HDFS manually.
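
For example, something along these lines (the tarball path is taken from the error above; the /tmp/mapreduce-test destination is just an arbitrary test location):

# Check that the NameNode is up and not in safe mode
sudo -u hdfs hdfs dfsadmin -safemode get

# Confirm the tarball exists on the local filesystem
ls -lh /usr/hdp/2.5.3.0-37/hadoop/mapreduce.tar.gz

# Try copying it to HDFS manually as the hdfs user
sudo -u hdfs hdfs dfs -mkdir -p /tmp/mapreduce-test
sudo -u hdfs hdfs dfs -put /usr/hdp/2.5.3.0-37/hadoop/mapreduce.tar.gz /tmp/mapreduce-test/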

There are similar questions posted with this issue:

https://community.hortonworks.com/questions/41697/history-server-fails-to-start-on-a-new-ha-hdp-2347...

https://community.hortonworks.com/questions/30840/history-server-not-able-to-start-after-a-fresh-ins...

https://community.hortonworks.com/questions/9341/history-server-and-hive-server2-is-not-coming-up-i....

Master Mentor

@vsubramanian

As the request to "http://test-1.c.i.internal:50070/webhdfs......." is failing with a 500 error, which indicates an Internal Server Error, you will definitely see a detailed stack trace in your NameNode log.

Please check and share the NameNode log to see what kind of error it is (for example, an OutOfMemoryError or some other error); based on that error we can suggest what can be done. Looking at the NameNode log will definitely give us a fair idea.

Contributor

Sounds like I am hitting OutOfMemory errors:

2017-03-27 11:32:45,199 ERROR Error for /webhdfs/v1/hdp/apps/2.5.3.0-37/mapreduce/mapreduce.tar.gz java.lang.OutOfMemoryError: Java heap space at java.lang.String.toLowerCase(String.java:2590) at org.apache.hadoop.util.StringUtils.toLowerCase(StringUtils.java:1040) at org.apache.hadoop.hdfs.web.AuthFilter.toLowerCase(AuthFilter.java:102) at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:82) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1294) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
2017-03-27 11:36:15,852 ERROR Error for /webhdfs/v1/hdp/apps/2.5.3.0-37/mapreduce/mapreduce.tar.gz java.lang.OutOfMemoryError: Java heap space at java.util.Arrays.copyOfRange(Arrays.java:3664) at java.lang.String.<init>(String.java:207) at java.lang.String.toLowerCase(String.java:2647) at org.apache.hadoop.util.StringUtils.toLowerCase(StringUtils.java:1040) at org.apache.hadoop.hdfs.web.AuthFilter.toLowerCase(AuthFilter.java:102) at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:82) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1294) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

Master Mentor

@vsubramanian

The root cause of the failure is an OutOfMemoryError on the NameNode:

java.lang.OutOfMemoryError: Java heap space at java.lang.String.toLowerCase(String.java:2590) ...

So I guess you will need to increase the -Xmx (Java heap size) of your NameNode.
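
On an Ambari-managed cluster this is normally changed from the Ambari UI (HDFS > Configs > NameNode Java heap size) rather than by editing files by hand; under the hood it ends up in hadoop-env.sh roughly like this (the 4 GB value below is only an illustration, size it per the recommendations in the link that follows):

# hadoop-env.sh (managed by Ambari; shown only to illustrate the setting)
export HADOOP_NAMENODE_OPTS="-Xms4096m -Xmx4096m ${HADOOP_NAMENODE_OPTS}"

Restart the NameNode after changing the heap size.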

NameNode heap size recommendations are mentioned in the following link: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_command-line-installation/content/ref-80...