Member since: 07-11-2017
Posts: 42
Kudos Received: 1
Solutions: 0
09-22-2024
10:09 PM
1 Kudo
@Venkatesh12, Welcome to the Cloudera Community! As this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
05-21-2019
03:36 AM
I have a similar problem, can you please help? Logs of HMaster:

2019-05-21 15:16:39,745 WARN [main-EventThread] coordination.SplitLogManagerCoordination: Error splitting /hbase/splitWAL/WALs%2F10.136.107.153%2C16020%2C1558423482915-splitting%2F10.136.107.153%252C16020%252C1558423482915.default.1558426329243
2019-05-21 15:16:39,745 WARN [MASTER_SERVER_OPERATIONS-DIGPUNPERHEL02:16000-0] master.SplitLogManager: error while splitting logs in [hdfs://10.136.107.59:9000/hbase/WALs/10.136.107.153,16020,1558423482915-splitting] installed = 3 but only 0 done
2019-05-21 15:16:39,747 ERROR [MASTER_SERVER_OPERATIONS-DIGPUNPERHEL02:16000-0] executor.EventHandler: Caught throwable while processing event M_SERVER_SHUTDOWN
java.io.IOException: failed log splitting for 10.136.107.153,16020,1558423482915, will retry
        at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.resubmit(ServerShutdownHandler.java:357)
        at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:220)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: error or interrupted while splitting logs in [hdfs://10.136.107.59:9000/hbase/WALs/10.136.107.153,16020,1558423482915-splitting] Task = installed = 3 done = 0 error = 3
        at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:290)
        at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:391)
        at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:364)
        at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:286)
        at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:213)
        ... 4 more
2019-05-21 15:16:39,748 FATAL [MASTER_SERVER_OPERATIONS-DIGPUNPERHEL02:16000-0] master.HMaster: Master server abort: loaded coprocessors are: []
2019-05-21 15:16:39,748 FATAL [MASTER_SERVER_OPERATIONS-DIGPUNPERHEL02:16000-0] master.HMaster: Caught throwable while processing event M_SERVER_SHUTDOWN
java.io.IOException: failed log splitting for 10.136.107.153,16020,1558423482915, will retry
        at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.resubmit(ServerShutdownHandler.java:357)
        at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:220)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: error or interrupted while splitting logs in [hdfs://10.136.107.59:9000/hbase/WALs/10.136.107.153,16020,1558423482915-splitting] Task = installed = 3 done = 0 error = 3

Logs of HRegionServer:

2019-05-21 15:16:38,669 INFO [RS_LOG_REPLAY_OPS-DIGPUNPERHEL02:16020-0-Writer-1] wal.WALSplitter: Creating writer path=hdfs://10.136.107.59:9000/hbase/data/dev2/observation/8baa93c0a9ddc9ab4ebfead1a50d85b2/recovered.edits/0000000000002354365.temp region=8baa93c0a9ddc9ab4ebfead1a50d85b2
2019-05-21 15:16:38,701 WARN [Thread-101] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/data/dev2/observation/8baa93c0a9ddc9ab4ebfead1a50d85b2/recovered.edits/0000000000002354365.temp could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1547)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:724)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
        at org.apache.hadoop.ipc.Client.call(Client.java:1411)
        at org.apache.hadoop.ipc.Client.call(Client.java:1364)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
08-11-2018
02:16 AM
Since it wasn't really described how exactly you resolved it... The point is that on the client side (it's important that it's not on the server side), you set "dfs.client.use.datanode.hostname" to "true" in the org.apache.hadoop.conf.Configuration object. (Note the property name: "dfs.client.use.datanode.hostname" is the client-side setting; "dfs.datanode.use.datanode.hostname" is the server-side counterpart.) If the Configuration object isn't created by your code (for example, if Spark creates it, as in my case), then it depends on what creates it... see its documentation. But some guesses:

Attempt 1: Set it inside $HADOOP_HOME/etc/hadoop/hdfs-site.xml. Hadoop command-line tools use that; your Java application, though... maybe not.

Attempt 2: Put $HADOOP_HOME/etc/hadoop/ on the Java classpath (or pack hdfs-site.xml into your project under /src/main/resources/, but that's kind of dirty...). This works with Spark.

Spark only: SparkSession.builder().config("spark.hadoop.dfs.client.use.datanode.hostname", "true").[...]

Of course, you may also need to add the domain names of the DataNodes (as the NameNode knows them) to /etc/hosts on the computer running your application.
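To make the client-side setting concrete, here is a minimal Java sketch for the case where your own code creates the Configuration. The NameNode URI, port, and path below are placeholders for this example, not values taken from this thread:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DatanodeHostnameExample {
    public static void main(String[] args) throws Exception {
        // Client-side flag: the DFS client connects to DataNodes by the
        // hostname the NameNode reports, instead of an IP address that may
        // be unreachable from outside the cluster network.
        Configuration conf = new Configuration();
        conf.set("dfs.client.use.datanode.hostname", "true");

        // "hdfs://namenode-host:8020" is a placeholder; use your NameNode URI.
        try (FileSystem fs = FileSystem.get(new URI("hdfs://namenode-host:8020"), conf)) {
            System.out.println("Root exists: " + fs.exists(new Path("/")));
        }
    }
}

The Spark config shown above works the same way under the hood: Spark strips the "spark.hadoop." prefix and copies the remaining property into the Hadoop Configuration it creates.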
06-15-2018
08:15 AM
Hi @manuroman, We are getting this error sometimes, and sometimes it doesn't show up for the same query. What might be the issue, any idea? Thanks, Renuka
10-04-2017
04:04 AM
1 Kudo
I know this is a really old post, but just for knowledge I'm sharing my solution. For me, restarting the agent on the host showing CDH "None" solved the problem: sudo service cloudera-scm-agent restart
09-01-2017
05:44 AM
Hi, Setting those values did not help. CDH/Hive: hive-common-1.1.0-cdh5.12.0.jar
07-31-2017
09:34 AM
Should I pass the /etc/hosts file (covering all nodes on the cluster, including the edge node, name node, and data nodes) in the Java code instead of getting it from the host I am connecting to (the edge node)?
07-26-2017
01:25 PM
No, this didn't solve my issue.