Member since: 04-20-2023
Posts: 3
Kudos Received: 0
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1335 | 04-21-2023 03:00 AM |
04-21-2023 03:00 AM
I fixed this issue by running a command on the destination cluster. I think it was caused by the original cluster's version being too old to support erasure coding (hadoop-2.7.5).
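The exact command run on the destination cluster isn't given above, but on a Hadoop 3.x cluster the erasure-coding policy state can be inspected and enabled with the `hdfs ec` subcommands. This is a sketch under assumptions: the `/hbase` path and the `RS-6-3-1024k` policy name are placeholders, not taken from the post.

```shell
# List the EC policies known to the cluster and whether each is enabled
hdfs ec -listPolicies

# Show which EC policy (if any) is set on the HBase root directory (path assumed)
hdfs ec -getPolicy -path /hbase

# Enable a built-in policy before applying it to a directory (policy name assumed)
hdfs ec -enablePolicy -policy RS-6-3-1024k
```

Note that the `hdfs ec` subcommand only exists in Hadoop 3.x; a hadoop-2.7.5 deployment has no erasure-coding support at all, which is consistent with the version mismatch described above.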
04-20-2023 07:37 PM
Hi blizano, thanks for the reply. I'm sure this issue is caused by the EC policy: if I unset EC on /hbase, the HBase snapshot export works. I also tried setting dfs.client.block.write.locateFollowingBlock.retries to 10, but it didn't help; it throws the same error as before. When I run fsck, it shows:

/hbase/.hbase-snapshot/test_snapshot/.snapshotinfo: Under replicated BP-870847850-192.168.30.174-1680829859143:blk_-9223372036854767824_8445. Target Replicas is 3 but found 1 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s)

/hbase/.hbase-snapshot/test_snapshot/data.manifest: Under replicated BP-870847850-192.168.30.174-1680829859143:blk_-9223372036854767808_8446. Target Replicas is 3 but found 1 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s)

In fact my target replica count should not be 3, because I have set EC on /hbase. I also tried setting dfs.replication to 1, but that didn't help either. Running du on /hbase/archive/data/default/test/ shows "1.1 M  384 M", which is strange: the original file is only 1.1 M, but the total space consumed is 384 M.
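The workaround and diagnostics described above can be sketched as shell commands. The NameNode host names are placeholders, and the distcp source/target paths are assumptions; only the paths and the retry setting come from the post.

```shell
# Unset the EC policy on /hbase on the destination cluster; with EC removed,
# the snapshot export reportedly succeeds
hdfs ec -unsetPolicy -path /hbase

# Raise the client-side retry count for block allocation (did not help here)
hadoop distcp \
  -Ddfs.client.block.write.locateFollowingBlock.retries=10 \
  hdfs://source-nn:8020/hbase/archive hdfs://dest-nn:8020/hbase/archive

# Check block health of the exported snapshot metadata
hdfs fsck /hbase/.hbase-snapshot/test_snapshot -files -blocks -locations

# Compare logical size (first column) against raw space consumed (second column)
hdfs dfs -du -h /hbase/archive/data/default/test/
```

The two columns printed by `hdfs dfs -du` are the file size and the total disk space consumed across all replicas (or EC block groups), which is where the "1.1 M  384 M" pair in the post comes from.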
04-20-2023 02:58 AM
Hello, I need to copy HBase data from a replication-based cluster to an erasure-coded (EC) cluster. I have tried both HBase snapshot export and hadoop distcp, and they appear to throw the same error: "Not replicated yet". I think this is because during the transfer the client checks whether the destination has the same replica count as the source, but ignores that the destination uses erasure coding instead of replication. How can I do this? Thanks. Here are the errors.

snapshot export:

2023-04-19 19:44:10,199 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* blk_-9223372036854751104_9943 is COMMITTED but not COMPLETE(numNodes= 5 >= minimum = 3) in file /hbase/archive/data/default/test/09748ed90d0d58c0fe7ac4b3c08f3cd4/cf/e35fadd88d244766800728318ccea508
2023-04-19 19:44:10,200 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 8020, call Call#31 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 172.31.0.146:52210
org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException: Not replicated yet: /hbase/archive/data/default/test/09748ed90d0d58c0fe7ac4b3c08f3cd4/cf/e35fadd88d244766800728318ccea508
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.validateAddBlock(FSDirWriteFileOp.java:181)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2661)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:872)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:550)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)

hadoop distcp:

java.io.IOException: Unable to close file because the last block does not have enough number of replicas.
    at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2521)
    at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2482)
    at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2447)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
    at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyBytes(RetriableFileCopyCommand.java:258)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.copyToFile(RetriableFileCopyCommand.java:183)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:123)
    at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
    at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
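For reference, the two transfer attempts described above can be sketched as follows. The snapshot name and archive path come from the post; the NameNode host names are placeholders.

```shell
# Attempt 1: export the HBase snapshot to the EC cluster (destination host assumed)
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot test_snapshot \
  -copy-to hdfs://dest-nn:8020/hbase

# Attempt 2: raw copy of the archived HFiles with distcp (hosts assumed)
hadoop distcp \
  hdfs://source-nn:8020/hbase/archive/data/default/test \
  hdfs://dest-nn:8020/hbase/archive/data/default/test
```

Both paths fail the same way because the HDFS client on the writing side waits for the destination NameNode to report the expected number of block replicas before allocating the next block, a check that behaves differently when the target directory is erasure-coded.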
Labels:
- Apache HBase
- HDFS