Member since: 10-15-2019
Posts: 9
Kudos Received: 0
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 4511 | 11-20-2019 02:08 AM |
11-20-2019 02:08 AM
I moved the content to another directory, restarted the NameNode, and the error message was gone, so I could then delete the old directory.
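Roughly, the steps were (a sketch; the path comes from the warning in my original post, adjust to your own layout):

# stop the affected roles, then move the stale contents of the
# rejected volume out of the way:
mv /data/2/dfs/dn/current /data/2/dfs/dn/current.bak

# restart the roles so HDFS can format the now-empty directory;
# once the DataNode reports healthy, the backup can be removed:
rm -rf /data/2/dfs/dn/current.bak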
11-19-2019 10:17 AM
It's not empty. Is it safe to delete the content?
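One check I plan to run before deleting (a sketch, using the healthy volume from the INFO logs below as a reference): compare the VERSION files of the two volumes; if the clusterID on the old volume differs, its contents belong to a previous deployment.

ls -l /data/2/dfs/dn/current
# the VERSION file records the clusterID the volume belonged to
cat /data/2/dfs/dn/current/VERSION
cat /data/1/dfs/dn/current/VERSION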
11-19-2019 08:58 AM
I added an old server, previously used as a DataNode, back into my cluster.
The process completed, but as soon as the DataNode was added I got the following health alert in Cloudera Manager:
"The DataNode has 1 volume failure but disks"
There is no error message in the logs, just one warning from the moment I added the host:
5:27:37.578 PM WARN Storage
Failed to add storage directory [DISK]file:/data/2/dfs/dn/
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /data/2/dfs/dn is in an inconsistent state: Can't format the storage directory because the current/ directory is not empty.
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.checkEmptyCurrent(Storage.java:495)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:600)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:279)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:418)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:397)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:575)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1560)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1520)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:354)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:219)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:673)
at java.lang.Thread.run(Thread.java:748)
and these are the typical INFO log entries from that host:
5:40:11.726 PM INFO FsDatasetAsyncDiskService
Scheduling blk_1078638685_4897872 file /data/1/dfs/dn/current/BP-22824834-10.179.104.198-1543004359329/current/finalized/subdir74/subdir184/blk_1078638685 for deletion
5:40:11.727 PM INFO FsDatasetAsyncDiskService
Deleted BP-22824834-10.179.104.198-1543004359329 blk_1078638685_4897872 file /data/1/dfs/dn/current/BP-22824834-10.179.104.198-1543004359329/current/finalized/subdir74/subdir184/blk_1078638685
5:42:14.918 PM INFO DataNode
Receiving BP-22824834-10.179.104.198-1543004359329:blk_1078638687_4897874 src: /10.179.104.167:55132 dest: /10.179.104.168:50010
5:42:14.937 PM INFO clienttrace
src: /10.179.104.167:55132, dest: /10.179.104.168:50010, bytes: 56, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1219924065_103, offset: 0, srvID: 236da03c-1676-4097-a3a7-17a3f308008c, blockid: BP-22824834-10.179.104.198-1543004359329:blk_1078638687_4897874, duration: 12451111
5:42:14.937 PM INFO DataNode
PacketResponder: BP-22824834-10.179.104.198-1543004359329:blk_1078638687_4897874, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
5:42:17.729 PM INFO FsDatasetAsyncDiskService
Scheduling blk_1078638687_4897874 file /data/1/dfs/dn/current/BP-22824834-10.179.104.198-1543004359329/current/finalized/subdir74/subdir184/blk_1078638687 for deletion
5:42:17.731 PM INFO FsDatasetAsyncDiskService
Deleted BP-22824834-10.179.104.198-1543004359329 blk_1078638687_4897874 file /data/1/dfs/dn/current/BP-22824834-10.179.104.198-1543004359329/current/finalized/subdir74/subdir184/blk_1078638687
5:43:14.919 PM INFO DataNode
Receiving BP-22824834-10.179.104.198-1543004359329:blk_1078638688_4897875 src: /10.179.104.167:55140 dest: /10.179.104.168:50010
5:43:14.938 PM INFO clienttrace
src: /10.179.104.167:55140, dest: /10.179.104.168:50010, bytes: 56, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-879119697_103, offset: 0, srvID: 236da03c-1676-4097-a3a7-17a3f308008c, blockid: BP-22824834-10.179.104.198-1543004359329:blk_1078638688_4897875, duration: 12054137
5:43:14.938 PM INFO DataNode
PacketResponder: BP-22824834-10.179.104.198-1543004359329:blk_1078638688_4897875, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
5:43:17.732 PM INFO FsDatasetAsyncDiskService
Scheduling blk_1078638688_4897875 file /data/1/dfs/dn/current/BP-22824834-10.179.104.198-1543004359329/current/finalized/subdir74/subdir184/blk_1078638688 for deletion
5:43:17.733 PM INFO FsDatasetAsyncDiskService
Deleted BP-22824834-10.179.104.198-1543004359329 blk_1078638688_4897875 file /data/1/dfs/dn/current/BP-22824834-10.179.104.198-1543004359329/current/finalized/subdir74/subdir184/blk_1078638688
5:44:14.936 PM INFO DataNode
Receiving BP-22824834-10.179.104.198-1543004359329:blk_1078638689_4897876 src: /10.179.104.165:56144 dest: /10.179.104.168:50010
5:44:14.950 PM INFO clienttrace
src: /10.179.104.165:56144, dest: /10.179.104.168:50010, bytes: 56, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_573364305_103, offset: 0, srvID: 236da03c-1676-4097-a3a7-17a3f308008c, blockid: BP-22824834-10.179.104.198-1543004359329:blk_1078638689_4897876, duration: 11591201
5:44:14.951 PM INFO DataNode
PacketResponder: BP-22824834-10.179.104.198-1543004359329:blk_1078638689_4897876, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
5:44:17.736 PM INFO FsDatasetAsyncDiskService
Scheduling blk_1078638689_4897876 file /data/1/dfs/dn/current/BP-22824834-10.179.104.198-1543004359329/current/finalized/subdir74/subdir184/blk_1078638689 for deletion
5:44:17.738 PM INFO FsDatasetAsyncDiskService
Deleted BP-22824834-10.179.104.198-1543004359329 blk_1078638689_4897876 file /data/1/dfs/dn/current/BP-22824834-10.179.104.198-1543004359329/current/finalized/subdir74/subdir184/blk_1078638689
5:46:14.937 PM INFO DataNode
Receiving BP-22824834-10.179.104.198-1543004359329:blk_1078638691_4897878 src: /10.179.104.165:56158 dest: /10.179.104.168:50010
5:46:14.951 PM INFO clienttrace
src: /10.179.104.165:56158, dest: /10.179.104.168:50010, bytes: 56, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_562798393_103, offset: 0, srvID: 236da03c-1676-4097-a3a7-17a3f308008c, blockid: BP-22824834-10.179.104.198-1543004359329:blk_1078638691_4897878, duration: 11582892
5:46:14.951 PM INFO DataNode
PacketResponder: BP-22824834-10.179.104.198-1543004359329:blk_1078638691_4897878, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
5:46:17.744 PM INFO FsDatasetAsyncDiskService
Scheduling blk_1078638691_4897878 file /data/1/dfs/dn/current/BP-22824834-10.179.104.198-1543004359329/current/finalized/subdir74/subdir184/blk_1078638691 for deletion
5:46:17.746 PM INFO FsDatasetAsyncDiskService
Deleted BP-22824834-10.179.104.198-1543004359329 blk_1078638691_4897878 file /data/1/dfs/dn/current/BP-22824834-10.179.104.198-1543004359329/current/finalized/subdir74/subdir184/blk_1078638691
5:48:14.941 PM INFO DataNode
Receiving BP-22824834-10.179.104.198-1543004359329:blk_1078638693_4897880 src: /10.179.104.167:55180 dest: /10.179.104.168:50010
5:48:14.957 PM INFO clienttrace
src: /10.179.104.167:55180, dest: /10.179.104.168:50010, bytes: 56, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1107939178_103, offset: 0, srvID: 236da03c-1676-4097-a3a7-17a3f308008c, blockid: BP-22824834-10.179.104.198-1543004359329:blk_1078638693_4897880, duration: 13081545
5:48:14.958 PM INFO DataNode
PacketResponder: BP-22824834-10.179.104.198-1543004359329:blk_1078638693_4897880, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
5:48:20.748 PM INFO FsDatasetAsyncDiskService
Scheduling blk_1078638693_4897880 file /data/1/dfs/dn/current/BP-22824834-10.179.104.198-1543004359329/current/finalized/subdir74/subdir184/blk_1078638693 for deletion
5:48:20.750 PM INFO FsDatasetAsyncDiskService
Deleted BP-22824834-10.179.104.198-1543004359329 blk_1078638693_4897880 file /data/1/dfs/dn/current/BP-22824834-10.179.104.198-1543004359329/current/finalized/subdir74/subdir184/blk_1078638693
5:49:19.944 PM INFO DataNode
Receiving BP-22824834-10.179.104.198-1543004359329:blk_1078638694_4897881 src: /10.179.104.165:56180 dest: /10.179.104.168:50010
5:49:19.959 PM INFO clienttrace
src: /10.179.104.165:56180, dest: /10.179.104.168:50010, bytes: 56, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1732671963_103, offset: 0, srvID: 236da03c-1676-4097-a3a7-17a3f308008c, blockid: BP-22824834-10.179.104.198-1543004359329:blk_1078638694_4897881, duration: 12530220
5:49:19.960 PM INFO DataNode
PacketResponder: BP-22824834-10.179.104.198-1543004359329:blk_1078638694_4897881, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
5:49:26.752 PM INFO FsDatasetAsyncDiskService
Scheduling blk_1078638694_4897881 file /data/1/dfs/dn/current/BP-22824834-10.179.104.198-1543004359329/current/finalized/subdir74/subdir184/blk_1078638694 for deletion
5:49:26.754 PM INFO FsDatasetAsyncDiskService
Deleted BP-22824834-10.179.104.198-1543004359329 blk_1078638694_4897881 file /data/1/dfs/dn/current/BP-22824834-10.179.104.198-1543004359329/current/finalized/subdir74/subdir184/blk_1078638694
Any help digging deeper into this error?
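In case it matters, the INFO lines above come from the healthy volume, which holds the active block pool, so I compared it with the rejected one (a quick sketch of the checks I ran):

# the healthy volume carries the cluster's active block pool:
ls /data/1/dfs/dn/current
# -> BP-22824834-10.179.104.198-1543004359329, VERSION, ...

# the rejected volume still has a non-empty current/ directory,
# which is why formatting is refused:
ls /data/2/dfs/dn/current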
Labels:
- Apache Hadoop
11-06-2019 09:44 AM
Suddenly I'm receiving this error. I know it's a permissions problem, but I can't figure out how to solve it.
Error while trying to scan the directory hdfs://nameservice1:8020/user/history/done_intermediate/cdmus
org.apache.hadoop.security.AccessControlException: Permission denied by sticky bit: user=mapred, path="/user/history/done_intermediate/cdmus/job_1571301865100_0001-1571302150810-cdmus-oozie%3Alauncher%3AT%3Dshell%3AW%3DImport+TIME%3AA%3Dshell%2D6231%3A-1571302203760-1-0-SUCCEEDED-root.users.cdmus-1571302157844.jhist":cdmus:-rwxrwxrwt, parent="/user/history/done_intermediate/cdmus":cdmus
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkStickyBit(DefaultAuthorizationProvider.java:387)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:159)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3885)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6855)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:4290)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:4245)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:4229)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:856)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.delete(AuthorizationProviderProxyClientProtocol.java:313)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:626)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2281)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2277)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2275)
at sun.reflect.GeneratedConstructorAccessor38.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:2106)
at org.apache.hadoop.fs.Hdfs.delete(Hdfs.java:115)
at org.apache.hadoop.fs.FileContext$5.next(FileContext.java:783)
at org.apache.hadoop.fs.FileContext$5.next(FileContext.java:779)
at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
at org.apache.hadoop.fs.FileContext.delete(FileContext.java:779)
at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.delete(HistoryFileManager.java:495)
at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.scanIntermediateDirectory(HistoryFileManager.java:979)
at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.access$200(HistoryFileManager.java:86)
at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$UserLogDir.scanIfNeeded(HistoryFileManager.java:332)
at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.scanIntermediateDirectory(HistoryFileManager.java:915)
at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.getAllFileInfo(HistoryFileManager.java:1043)
at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getAllPartialJobs(CachedHistoryStorage.java:210)
at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getPartialJobs(CachedHistoryStorage.java:227)
at org.apache.hadoop.mapreduce.v2.hs.JobHistory.getPartialJobs(JobHistory.java:285)
at org.apache.hadoop.mapreduce.v2.hs.webapp.HsWebServices.getJobs(HsWebServices.java:212)
at sun.reflect.GeneratedMethodAccessor54.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:886)
at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1301)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:767)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied by sticky bit: user=mapred, path="/user/history/done_intermediate/cdmus/job_1571301865100_0001-1571302150810-cdmus-oozie%3Alauncher%3AT%3Dshell%3AW%3DImport+TIME%3AA%3Dshell%2D6231%3A-1571302203760-1-0-SUCCEEDED-root.users.cdmus-1571302157844.jhist":cdmus:hadoop:-rwxrwxrwt, parent="/user/history/done_intermediate/cdmus":cdmus:hadoop:drwxrwxrwt
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkStickyBit(DefaultAuthorizationProvider.java:387)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:159)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3885)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6855)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:4290)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:4245)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:4229)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:856)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.delete(AuthorizationProviderProxyClientProtocol.java:313)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:626)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2281)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2277)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2275)
at org.apache.hadoop.ipc.Client.call(Client.java:1504)
at org.apache.hadoop.ipc.Client.call(Client.java:1441)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
at com.sun.proxy.$Proxy16.delete(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:552)
at sun.reflect.GeneratedMethodAccessor48.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
at com.sun.proxy.$Proxy17.delete(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:2104)
... 63 more
I'm running CDH 5.15 on RHEL.
Any help is really appreciated.
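What I understand so far: the JobHistory Server runs as mapred, but the files under done_intermediate/cdmus belong to cdmus, and the sticky bit (the trailing t in drwxrwxrwt) forbids deletion by anyone but the owner. Two possible fixes I'm considering (a sketch, run as the HDFS superuser; not yet verified on my cluster):

# hand the intermediate history files to mapred so the
# JobHistory Server can clean them up itself:
sudo -u hdfs hdfs dfs -chown -R mapred:hadoop /user/history/done_intermediate/cdmus

# or clear the sticky bit on the per-user directory
# (0777 instead of 1777):
sudo -u hdfs hdfs dfs -chmod 0777 /user/history/done_intermediate/cdmus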
Labels:
- Apache Hadoop
10-17-2019 01:27 AM
There is no Jetty_0_0_0_0_8042_node file in /tmp; the partition has been wiped. It's not a permissions problem but a missing file. How can I get YARN to recreate those files?
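From what I've read, those files don't need to be restored: Jetty re-extracts the NodeManager webapp into java.io.tmpdir every time the NodeManager starts. So my plan (a sketch, assuming the default java.io.tmpdir of /tmp):

# make sure /tmp survived the wipe with the right mode, otherwise
# Jetty cannot extract the webapp into it:
ls -ld /tmp        # expect drwxrwxrwt
chmod 1777 /tmp

# then restart the NodeManager role from Cloudera Manager so that
# a fresh Jetty_0_0_0_0_8042_node____* directory is created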
10-16-2019 06:52 AM
Searching on the internet, I found that the /tmp directory can be cleaned without problems on a Cloudera cluster, so I did it.
Now, when I try to start the cluster, all services come up fine, but YARN fails to start with this error:
19/10/16 15:50:17 FATAL nodemanager.NodeManager: Error starting NodeManager
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: NMWebapps failed to start.
at org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:79)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:329)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:563)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:609)
Caused by: org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
at org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:65)
... 6 more
Caused by: java.io.IOException: Unable to initialize WebAppContext
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:922)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
... 7 more
Caused by: java.io.FileNotFoundException: /tmp/Jetty_0_0_0_0_8042_node____19tj0x/webapp/webapps/node/.keep (No such file or directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
at org.mortbay.resource.JarResource.extract(JarResource.java:215)
at org.mortbay.jetty.webapp.WebAppContext.resolveWebApp(WebAppContext.java:974)
at org.mortbay.jetty.webapp.WebAppContext.getWebInf(WebAppContext.java:832)
at org.mortbay.jetty.webapp.WebInfConfiguration.configureClassLoader(WebInfConfiguration.java:62)
at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:489)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
at org.mortbay.jetty.Server.doStart(Server.java:224)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:900)
What can I do?
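The missing file sits under a Jetty scratch directory in /tmp, so my first guess is that the wipe left a half-extracted webapp behind. What I intend to try (a sketch):

# remove any leftover, partially extracted Jetty scratch dirs for
# the NodeManager web UI (port 8042):
rm -rf /tmp/Jetty_0_0_0_0_8042_node____*

# then restart the NodeManager so Jetty extracts the webapp again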
Labels:
- Apache YARN
10-16-2019 03:22 AM
None of the NodeManagers in my Hadoop cluster are connected to the ResourceManager.
These are the errors I can see from YARN:
Thread Thread[Timer-2,5,main] threw an Exception.
java.lang.IllegalArgumentException: Wrong FS: hdfs://nameservice1:8020/user/history/done_intermediate/hive/job_1557996286771_33621_conf.xml, expected: hdfs://nameservice1
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:662)
at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:222)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:114)
at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1266)
at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1262)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1262)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1418)
at org.apache.hadoop.fs.FileUtil.checkDest(FileUtil.java:499)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:351)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:341)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:292)
at org.apache.hadoop.mapreduce.v2.hs.KilledHistoryService$FlagFileHandler.copy(KilledHistoryService.java:210)
at org.apache.hadoop.mapreduce.v2.hs.KilledHistoryService$FlagFileHandler.access$300(KilledHistoryService.java:85)
at org.apache.hadoop.mapreduce.v2.hs.KilledHistoryService$FlagFileHandler$1.run(KilledHistoryService.java:138)
at org.apache.hadoop.mapreduce.v2.hs.KilledHistoryService$FlagFileHandler$1.run(KilledHistoryService.java:125)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.mapreduce.v2.hs.KilledHistoryService$FlagFileHandler.run(KilledHistoryService.java:125)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
host4 ERROR October 15, 2019 11:40 PM NodeManager
RECEIVED SIGNAL 15: SIGTERM
master3 ERROR October 15, 2019 11:40 PM JobHistoryServer
RECEIVED SIGNAL 15: SIGTERM
master1 ERROR October 15, 2019 11:40 PM ResourceManager
RECEIVED SIGNAL 15: SIGTERM
host3 ERROR October 15, 2019 11:40 PM NodeManager
RECEIVED SIGNAL 15: SIGTERM
master1 ERROR October 15, 2019 11:40 PM AbstractDelegationTokenSecretManager
ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
master1 ERROR October 15, 2019 11:40 PM AbstractDelegationTokenSecretManager
ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
master1 ERROR October 15, 2019 11:40 PM AbstractDelegationTokenSecretManager
ExpiredTokenRemover received java.lang.InterruptedException: sleep interrupted
host2 ERROR October 15, 2019 11:40 PM NodeManager
RECEIVED SIGNAL 15: SIGTERM
Any help, please?
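One detail that stands out in the first trace: the path is fully qualified as hdfs://nameservice1:8020/... while the expected filesystem is hdfs://nameservice1 without a port, which is how an HA nameservice is addressed. So I suspect a job-history path was configured with the port baked in. What I'm checking (a sketch; the property names are the standard MR2 ones):

# the default FS for an HA setup should be the bare nameservice
hdfs getconf -confKey fs.defaultFS     # expect hdfs://nameservice1

# then look for history paths qualified with a port, e.g. in
# mapred-site.xml:
#   mapreduce.jobhistory.intermediate-done-dir
#   mapreduce.jobhistory.done-dir
# they should be plain paths, or use hdfs://nameservice1/...
# without the :8020 port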
Labels:
- Apache Hive
- Apache YARN