Created 11-25-2018 06:10 PM
Hello,
When I start the DataNode on the master node, it goes into a critical state after a few seconds.
Here is the log of that DataNode:
2018-11-25 16:46:51,268 INFO datanode.DataNode (LogAdapter.java:info(51)) - registered UNIX signal handlers for [TERM, HUP, INT]
2018-11-25 16:46:53,961 INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(137)) - Scheduling a check for [DISK]file:/hadoop/hdfs/data
2018-11-25 16:46:54,602 INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(121)) - loaded properties from hadoop-metrics2.properties
2018-11-25 16:46:55,362 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(85)) - Initializing Timeline metrics sink.
2018-11-25 16:46:55,364 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(105)) - Identified hostname = hadoop01.test_nf.simscom.com, serviceName = datanode
2018-11-25 16:46:55,462 INFO availability.MetricSinkWriteShardHostnameHashingStrategy (MetricSinkWriteShardHostnameHashingStrategy.java:findCollectorShard(42)) - Calculated collector shard hadoop03.test_nf.simscom.com based on hostname: hadoop01.test_nf.simscom.com
2018-11-25 16:46:55,463 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(130)) - Collector Uri: http://hadoop03.test_nf.simscom.com:6188/ws/v1/timeline/metrics
2018-11-25 16:46:55,464 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(131)) - Container Metrics Uri: http://hadoop03.test_nf.simscom.com:6188/ws/v1/timeline/containermetrics
2018-11-25 16:46:55,473 INFO impl.MetricsSinkAdapter (MetricsSinkAdapter.java:start(204)) - Sink timeline started
2018-11-25 16:46:55,572 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(374)) - Scheduled Metric snapshot period at 10 second(s).
2018-11-25 16:46:55,573 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) - DataNode metrics system started
2018-11-25 16:46:56,000 INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2018-11-25 16:46:56,003 INFO datanode.BlockScanner (BlockScanner.java:<init>(184)) - Initialized block scanner with targetBytesPerSec 1048576
2018-11-25 16:46:56,009 INFO datanode.DataNode (DataNode.java:<init>(483)) - File descriptor passing is enabled.
2018-11-25 16:46:56,010 INFO datanode.DataNode (DataNode.java:<init>(496)) - Configured hostname is hadoop01.test_nf.simscom.com
2018-11-25 16:46:56,012 INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2018-11-25 16:46:56,029 INFO datanode.DataNode (DataNode.java:startDataNode(1387)) - Starting DataNode with maxLockedMemory = 0
2018-11-25 16:46:56,116 INFO datanode.DataNode (DataNode.java:initDataXceiver(1144)) - Opened streaming server at /0.0.0.0:50010
2018-11-25 16:46:56,119 INFO datanode.DataNode (DataXceiverServer.java:<init>(78)) - Balancing bandwidth is 6250000 bytes/s
2018-11-25 16:46:56,119 INFO datanode.DataNode (DataXceiverServer.java:<init>(79)) - Number threads for balancing is 50
2018-11-25 16:46:56,126 INFO datanode.DataNode (DataXceiverServer.java:<init>(78)) - Balancing bandwidth is 6250000 bytes/s
2018-11-25 16:46:56,127 INFO datanode.DataNode (DataXceiverServer.java:<init>(79)) - Number threads for balancing is 50
2018-11-25 16:46:56,127 INFO datanode.DataNode (DataNode.java:initDataXceiver(1162)) - Listening on UNIX domain socket: /var/lib/hadoop-hdfs/dn_socket
2018-11-25 16:46:56,385 INFO util.log (Log.java:initialized(192)) - Logging initialized @6955ms
2018-11-25 16:46:56,602 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2018-11-25 16:46:56,606 INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(81)) - Http request log for http.requests.datanode is not defined
2018-11-25 16:46:56,612 INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(968)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2018-11-25 16:46:56,615 INFO http.HttpServer2 (HttpServer2.java:addFilter(941)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context datanode
2018-11-25 16:46:56,615 INFO http.HttpServer2 (HttpServer2.java:addFilter(951)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context logs
2018-11-25 16:46:56,615 INFO http.HttpServer2 (HttpServer2.java:addFilter(951)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context static
2018-11-25 16:46:56,616 INFO security.HttpCrossOriginFilterInitializer (HttpCrossOriginFilterInitializer.java:initFilter(49)) - CORS filter not enabled. Please set hadoop.http.cross-origin.enabled to 'true' to enable it
2018-11-25 16:46:56,725 INFO http.HttpServer2 (HttpServer2.java:bindListener(1185)) - Jetty bound to port 39279
2018-11-25 16:46:56,727 INFO server.Server (Server.java:doStart(346)) - jetty-9.3.19.v20170502
2018-11-25 16:46:56,774 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2018-11-25 16:46:56,777 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@549621f3{/logs,file:///var/log/hadoop/hdfs/,AVAILABLE}
2018-11-25 16:46:56,778 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@32232e55{/static,file:///usr/hdp/3.0.0.0-1634/hadoop-hdfs/webapps/static/,AVAILABLE}
2018-11-25 16:46:57,071 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.w.WebAppContext@55e7a35c{/,file:///usr/hdp/3.0.0.0-1634/hadoop-hdfs/webapps/datanode/,AVAILABLE}{/datanode}
2018-11-25 16:46:57,079 INFO server.AbstractConnector (AbstractConnector.java:doStart(278)) - Started ServerConnector@1f130eaf{HTTP/1.1,[http/1.1]}{localhost:39279}
2018-11-25 16:46:57,079 INFO server.Server (Server.java:doStart(414)) - Started @7649ms
2018-11-25 16:46:57,729 INFO web.DatanodeHttpServer (DatanodeHttpServer.java:start(255)) - Listening HTTP traffic on /0.0.0.0:50075
2018-11-25 16:46:57,762 INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause monitor
2018-11-25 16:46:57,818 INFO datanode.DataNode (DataNode.java:startDataNode(1414)) - dnUserName = hdfs
2018-11-25 16:46:57,818 INFO datanode.DataNode (DataNode.java:startDataNode(1415)) - supergroup = hdfs
2018-11-25 16:46:58,149 INFO ipc.CallQueueManager (CallQueueManager.java:<init>(84)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2018-11-25 16:46:58,240 INFO ipc.Server (Server.java:run(1070)) - Starting Socket Reader #1 for port 8010
2018-11-25 16:46:58,410 INFO datanode.DataNode (DataNode.java:initIpcServer(1030)) - Opened IPC server at /0.0.0.0:8010
2018-11-25 16:46:58,488 INFO datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
2018-11-25 16:46:58,556 INFO datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(210)) - Starting BPOfferServices for nameservices: <default>
2018-11-25 16:46:58,629 INFO datanode.DataNode (BPServiceActor.java:run(809)) - Block pool <registering> (Datanode Uuid unassigned) service to hadoop01.test_nf.simscom.com/182.65.21.112:8020 starting to offer service
2018-11-25 16:46:58,647 INFO ipc.Server (Server.java:run(1310)) - IPC Server Responder: starting
2018-11-25 16:46:58,651 INFO ipc.Server (Server.java:run(1149)) - IPC Server listener on 8010: starting
2018-11-25 16:46:58,742 WARN datanode.DataNode (BPOfferService.java:getBlockPoolId(213)) - Block pool ID needed, but service not yet registered with NN, trace: java.lang.Exception
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:213)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:224)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getNamenodeAddresses(DataNode.java:3082)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
    at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
    at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:193)
    at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:175)
    at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:117)
    at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:54)
    at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
    at com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
    at com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
    at org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:338)
    at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:316)
    at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:210)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1604)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:534)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.lang.Thread.run(Thread.java:745)
2018-11-25 16:46:58,771 WARN datanode.DataNode (BPOfferService.java:getBlockPoolId(213)) - Block pool ID needed, but service not yet registered with NN, trace: java.lang.Exception
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:213)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:224)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.getActorInfoMap(BPServiceActor.java:175)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getBPServiceActorInfo(DataNode.java:3109)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
    at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
    at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:193)
    at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:175)
    at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:117)
    at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:54)
    at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
    at com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
    at com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
    at org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:338)
    at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:316)
    at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:210)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1604)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:534)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.lang.Thread.run(Thread.java:745)
2018-11-25 16:46:58,776 ERROR jmx.JMXJsonServlet (JMXJsonServlet.java:writeAttribute(345)) - getting attribute VolumeInfo of Hadoop:service=DataNode,name=DataNodeInfo threw an exception
javax.management.RuntimeMBeanException: java.lang.NullPointerException: Storage not yet initialized
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
    at org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:338)
    at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:316)
    at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:210)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1604)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:534)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException: Storage not yet initialized
    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getVolumeInfo(DataNode.java:3123)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
    at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
    at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:193)
    at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:175)
    at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:117)
    at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:54)
    at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
    at com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
    at com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
    ... 35 more
2018-11-25 16:46:58,783 ERROR jmx.JMXJsonServlet (JMXJsonServlet.java:writeAttribute(345)) - getting attribute DiskBalancerStatus of Hadoop:service=DataNode,name=DataNodeInfo threw an exception
javax.management.RuntimeMBeanException: java.lang.NullPointerException
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
    at org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:338)
    at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:316)
    at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:210)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1604)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:534)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getDiskBalancerStatus(DataNode.java:3135)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
    at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
    at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:193)
    at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:175)
    at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:117)
    at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:54)
    at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
    at com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
    at com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
    ... 35 more
2018-11-25 16:46:59,071 INFO datanode.DataNode (BPOfferService.java:verifyAndSetNamespaceInfo(378)) - Acknowledging ACTIVE Namenode during handshakeBlock pool <registering> (Datanode Uuid unassigned) service to hadoop01.test_nf.simscom.com/182.65.21.112:8020
2018-11-25 16:46:59,076 INFO common.Storage (DataStorage.java:getParallelVolumeLoadThreadsNum(354)) - Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2018-11-25 16:46:59,106 INFO common.Storage (Storage.java:tryLock(905)) - Lock on /hadoop/hdfs/data/in_use.lock acquired by nodename 24970@hadoop01.test_nf.simscom.com
2018-11-25 16:46:59,118 WARN common.Storage (DataStorage.java:loadDataStorage(418)) - Failed to add storage directory [DISK]file:/hadoop/hdfs/data
java.io.IOException: Incompatible clusterIDs in /hadoop/hdfs/data: namenode clusterID = CID-b3c5c280-4b44-4348-9859-1a088217019f; datanode clusterID = CID-2d76379c-acf9-479a-9374-ddbe0704dbbe
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:736)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:294)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:407)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:387)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:551)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1705)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1665)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
    at java.lang.Thread.run(Thread.java:745)
2018-11-25 16:46:59,124 ERROR datanode.DataNode (BPServiceActor.java:run(828)) - Initialization failed for Block pool <registering> (Datanode Uuid 18320c3e-647e-40e5-9f46-c559fd6d386f) service to hadoop01.test_nf.simscom.com/182.65.21.112:8020. Exiting.
java.io.IOException: All specified directories have failed to load.
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:552)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1705)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1665)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
    at java.lang.Thread.run(Thread.java:745)
2018-11-25 16:46:59,125 WARN datanode.DataNode (BPServiceActor.java:run(852)) - Ending block pool service for: Block pool <registering> (Datanode Uuid 18320c3e-647e-40e5-9f46-c559fd6d386f) service to hadoop01.test_nf.simscom.com/182.65.21.112:8020
2018-11-25 16:46:59,227 INFO datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool <registering> (Datanode Uuid 18320c3e-647e-40e5-9f46-c559fd6d386f)
2018-11-25 16:47:01,227 WARN datanode.DataNode (DataNode.java:secureMain(2877)) - Exiting Datanode
2018-11-25 16:47:01,233 INFO datanode.DataNode (LogAdapter.java:info(51)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at hadoop01.test_nf.simscom.com/182.65.21.112
************************************************************/
The other data nodes are running without error.
Ambari reports the following alert:
Connection failed: [Errno 111] Connection refused to hadoop01.test_nf.simscom.com:50010
What is the problem here?
Created 11-26-2018 07:26 AM
Hi @Andreas Kühnert
Error:
ERROR datanode.DataNode (BPServiceActor.java:run(828)) - Initialization failed for Block pool <registering> (Datanode Uuid 18320c3e-647e-40e5-9f46-c559fd6d386f) service to hadoop01.test_nf.simscom.com/182.65.21.112:8020. Exiting.
If you add a DataNode that already has data in its local data directory, that data probably carries a conflicting cluster ID.
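Before changing anything, you can confirm the mismatch yourself by comparing the two VERSION files. A quick check; the DataNode path below is taken from your log, while the NameNode path is only an assumption, so look up dfs.namenode.name.dir in hdfs-site.xml first:

# On the failing DataNode (data dir taken from the log above)
grep clusterID /hadoop/hdfs/data/current/VERSION
# On the NameNode (assumed path -- check dfs.namenode.name.dir in hdfs-site.xml)
grep clusterID /hadoop/hdfs/namenode/current/VERSION

The two outputs should show the same mismatch as the "Incompatible clusterIDs" line in your log.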
Solution 1: Delete the local data dir (or move it out of the way), then format the NameNode and start the services again.
Note: This way you will lose all your data. The data directory is the local file system storage location of each DataNode in your cluster; you can find it by searching for "dfs.datanode.data.dir" in hdfs-site.xml.
Solution 2:
1. $ cd <dfs.datanode.data.dir>/current
2. $ vim VERSION
3. Now change the cluster ID of the DataNode to the same one the NameNode uses. You can find the NameNode's CID in the VERSION file under its metadata directory (<dfs.namenode.name.dir>/current/VERSION).
4. Replace the old clusterID value with the NameNode's value (in your log above it is clusterID=CID-b3c5c280-4b44-4348-9859-1a088217019f).
5. Save and exit.
6. Restart the services.
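For reference, here is a minimal shell sketch of Solution 2, assuming the data dir /hadoop/hdfs/data from your log and substituting the clusterID non-interactively with sed instead of editing in vim:

# Run on the failing DataNode host, as the hdfs user (the VERSION file is owned by hdfs)
DN_DIR=/hadoop/hdfs/data                        # dfs.datanode.data.dir from the log above
cp "$DN_DIR/current/VERSION" /tmp/VERSION.bak   # keep a backup before editing
# Write the NameNode's clusterID (taken from the "Incompatible clusterIDs" line) into the DataNode's VERSION file
sed -i 's/^clusterID=.*/clusterID=CID-b3c5c280-4b44-4348-9859-1a088217019f/' "$DN_DIR/current/VERSION"
grep clusterID "$DN_DIR/current/VERSION"        # verify it now matches the NameNode
# Then restart the DataNode from Ambari

Adjust the paths to whatever hdfs-site.xml says on your cluster before running this.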
Hope it helps. Thanks.
Created 11-28-2018 01:24 AM
Solution 1 worked. Thanks.