Support Questions


Got 500 Error when trying to view status of a running mapreduce job

Rising Star

We just launched our cluster with CDH 5.3.3 on AWS, and we are currently testing our processes on the new cluster.

Our job ran fine, but we ran into issues when we tried to view the status detail page of a running job.

We get an HTTP 500 error when we click either the 'Application Master' link on the Resource Manager UI page or the link under the 'Child Job Urls' tab while the job is running. We were able to get the job status page after the job completed, but not while it was still running.

Does anyone know how to fix this issue? This is a real problem for us.

The detailed error is listed below.

Thanks

 

HTTP ERROR 500

Problem accessing /. Reason:

    Guice configuration errors:

1) Could not find a suitable constructor in com.sun.jersey.guice.spi.container.servlet.GuiceContainer. Classes must have either one (and only one) constructor annotated with @Inject or a zero-argument constructor that is not private.
  at com.sun.jersey.guice.spi.container.servlet.GuiceContainer.class(GuiceContainer.java:108)
  while locating com.sun.jersey.guice.spi.container.servlet.GuiceContainer

1 error

 

Caused by:

com.google.inject.ConfigurationException: Guice configuration errors:

1) Could not find a suitable constructor in com.sun.jersey.guice.spi.container.servlet.GuiceContainer. Classes must have either one (and only one) constructor annotated with @Inject or a zero-argument constructor that is not private.
  at com.sun.jersey.guice.spi.container.servlet.GuiceContainer.class(GuiceContainer.java:108)
  while locating com.sun.jersey.guice.spi.container.servlet.GuiceContainer

1 error
	at com.google.inject.InjectorImpl.getBinding(InjectorImpl.java:113)
	at com.google.inject.InjectorImpl.getBinding(InjectorImpl.java:63)
	at com.google.inject.servlet.FilterDefinition.init(FilterDefinition.java:99)
	at com.google.inject.servlet.ManagedFilterPipeline.initPipeline(ManagedFilterPipeline.java:98)
	at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:114)
	at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
	at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1224)
	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
	at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
	at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
	at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
	at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
	at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:767)
	at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
	at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
	at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
	at org.mortbay.jetty.Server.handle(Server.java:326)
	at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
	at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
	at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
	at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
	at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
	at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
	at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

 

23 Replies

Super Collaborator

There is a known issue in releases before CDH 5.3.3 that can cause this error to appear. That issue was introduced by a fix for a similar problem in an earlier release. Both issues were intermittent and related to HA.

Unless you are on CDH 5.3.3 or later, you could be hitting one of those.

 

Wilfred

Rising Star

Hi Linou,

Were you able to resolve your issue? I still have not been able to solve it.

Thanks

Contributor

Hi ttruong,

 

I just resolved this problem.

When I run the MR job on my datanode (Hadoop server), I can see the page, but when I run it from the client side, I cannot.

So I figured there must be some difference between the server-side and client-side configuration.

I found that my client config file was missing the "yarn.resourcemanager.webapp.address" property.

I added it to yarn-site.xml on the client side:

<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>$HOSTNAME:8088</value>
</property>

 

It works fine!!
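As a quick sanity check that the client can actually reach the ResourceManager web UI, something like this should return cluster info (a sketch, assuming the default web UI port 8088; replace <resourcemanager-host> with the host you put in yarn.resourcemanager.webapp.address):

# should return JSON with the cluster state if the RM web UI is reachable from this node
curl http://<resourcemanager-host>:8088/ws/v1/cluster/info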

Hope this helps you.

Good luck!

 

Linou

 

Super Collaborator

Good to hear that this has been fixed!

 

We have seen this issue in early CDH 5 releases, but it was fixed in CM/CDH 5.2 and later. Cloudera Manager should have deployed that configuration setting for you in the client config on all nodes. If you did not use CM, that could explain it; otherwise I would not know how it could have happened.

 

 

Wilfred

Rising Star

Thank you very much for your response.

I checked both resource manager nodes, and the value for the yarn.resourcemanager.webapp.address property was set.

I still could not get it working.

 

Super Collaborator

Please make sure that you have also added the setting to the configuration on the client node. The setting should be applied to all nodes in the cluster, not just the nodes that run the service.
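For example, you can quickly check whether the property made it into the client configuration with something like the following (a sketch, assuming the client config lives in the usual /etc/hadoop/conf location):

# show the property and the line after it (its value) if it is present
grep -A 1 "yarn.resourcemanager.webapp.address" /etc/hadoop/conf/yarn-site.xml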

 

Wilfred

Super Collaborator

If you are not running the yarn command as the owner of the application, you might need to add:

-appOwner <username>

to the yarn logs command line. If you do not have access, the error you showed could be thrown.

We do not distinguish between not having access and the aggregation not having finished.
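For reference, the full command would look something like this (a sketch using the application ID from this thread; replace <username> with the actual application owner):

# fetch the aggregated logs for the application as its owner
yarn logs -applicationId application_1436912235624_4832 -appOwner <username>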

 

Wilfred

Rising Star

Hi Wilfred,

We are currently on CDH 5.3.3, and we are still having this issue.

Also, I did try the yarn command with the -appOwner option, but it still returned the same message. I ran the command as the yarn user.

Thanks 

Explorer

Hello,

I am on ttruong's team and I have been taking a look at this issue. I tracked the error to the container logs on the server executing the mapreduce action. It appears that this is an error with the following page: /ws/v1/mapreduce/jobs/job_1436912235624_4832. We have updated the permissions on all of our log directories, which resolved all of the log issues except this one.
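For what it's worth, one way to hit that page directly and see the raw response is to go through the ResourceManager's web proxy with curl (a sketch, assuming the default web UI port 8088; replace <resourcemanager-host> with your active resource manager):

# request the AM's mapreduce REST page via the RM web proxy; -v shows the HTTP status and headers
curl -v http://<resourcemanager-host>:8088/proxy/application_1436912235624_4832/ws/v1/mapreduce/jobs/job_1436912235624_4832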

 

Here is an excerpt from the log file at: /var/log/hadoop-yarn/container/container_e90_1436912235624_4832_01_000001/stderr

 

[IPC Server handler 26 on 47733] INFO org.apache.hadoop.mapred.TaskAttemptListenerImpl - Progress of TaskAttempt attempt_1436912235624_4832_m_000000_0 is : 0.4483067
[IPC Server handler 29 on 47733] INFO org.apache.hadoop.mapred.TaskAttemptListenerImpl - Progress of TaskAttempt attempt_1436912235624_4832_m_000000_0 is : 0.44861746
[IPC Server handler 27 on 47733] INFO org.apache.hadoop.mapred.TaskAttemptListenerImpl - Progress of TaskAttempt attempt_1436912235624_4832_m_000000_0 is : 0.44898608
[IPC Server handler 0 on 47733] INFO org.apache.hadoop.mapred.TaskAttemptListenerImpl - Progress of TaskAttempt attempt_1436912235624_4832_m_000000_0 is : 0.44930127
[IPC Server handler 1 on 47733] INFO org.apache.hadoop.mapred.TaskAttemptListenerImpl - Progress of TaskAttempt attempt_1436912235624_4832_m_000000_0 is : 0.4495801
Jul 15, 2015 2:19:51 PM com.google.inject.servlet.InternalServletModule$BackwardsCompatibleServletContextProvider get
WARNING: You are attempting to use a deprecated API (specifically, attempting to @Inject ServletContext inside an eagerly created singleton. While we allow this for backwards compatibility, be warned that this MAY have unexpected behavior if you have more than one injector (with ServletModule) running in the same JVM. Please consult the Guice documentation at http://code.google.com/p/google-guice/wiki/Servlets for more information.
[1347697750@qtp-453785195-2] ERROR org.mortbay.log - /ws/v1/mapreduce/jobs/job_1436912235624_4832
com.google.inject.ConfigurationException: Guice configuration errors:

1) Could not find a suitable constructor in com.sun.jersey.guice.spi.container.servlet.GuiceContainer. Classes must have either one (and only one) constructor annotated with @Inject or a zero-argument constructor that is not private.
  at com.sun.jersey.guice.spi.container.servlet.GuiceContainer.class(GuiceContainer.java:108)
  while locating com.sun.jersey.guice.spi.container.servlet.GuiceContainer

1 error
        at com.google.inject.InjectorImpl.getBinding(InjectorImpl.java:113)
        at com.google.inject.InjectorImpl.getBinding(InjectorImpl.java:63)
        at com.google.inject.servlet.FilterDefinition.init(FilterDefinition.java:99)
        at com.google.inject.servlet.ManagedFilterPipeline.initPipeline(ManagedFilterPipeline.java:98)
        at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:114)
        at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
        at org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:164)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
        at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1224)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
        at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:767)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:326)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
        at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

 

Any help debugging this would be appreciated!

Thanks.

Explorer

I also checked the yarn-site.xml in /etc/hadoop/conf on both the active resource manager node and the node manager running the container, and both files had the following configuration properties set. The server names are correct for both resource manager nodes running in HA.

 

  <property>
    <name>yarn.resourcemanager.webapp.address.rm21</name>
    <value>i-802bd856.prod-dis11.aws1:8088</value>
  </property>

  <property>
    <name>yarn.resourcemanager.webapp.address.rm54</name>
    <value>i-942ad942.prod-dis11.aws1:8088</value>
  </property>
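In case it helps with comparing configurations, these per-RM webapp addresses normally sit alongside the ResourceManager HA identifiers in yarn-site.xml. A sketch of what the companion properties would typically look like (the rm21/rm54 IDs are taken from the snippet above; the values shown are assumptions, so verify against your actual config):

  <!-- assumed companion HA settings; check them against your own yarn-site.xml -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm21,rm54</value>
  </property>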