
Error with HDFS PUT: replicated to 0 nodes instead of minReplication (=1)

New Contributor

Hi,

I am getting the error below while trying to run a simple hdfs put from my local machine, using the HDFS client, against the Cloudera QuickStart VM. We are also trying to POC StreamSets, and it fails with the same error.

Thanks,
Krijan

Command:

hadoop fs -put /Users/kkothapalli/streamsets/tutorials-master/sample_data/citylots_modified.json hdfs://192.168.56.101:8020/user/hive/warehouse/

Error Stack:

hadoop fs -put /Users/kkothapalli/streamsets/tutorials-master/sample_data/citylots_modified.json hdfs://192.168.56.101:8020/user/hive/warehouse/
17/04/12 16:08:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/04/12 16:08:36 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
17/04/12 16:08:40 INFO hdfs.DFSClient: Exception in createBlockOutputStream
com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag.
    at com.google.protobuf.InvalidProtocolBufferException.invalidEndTag(InvalidProtocolBufferException.java:94)
    at com.google.protobuf.CodedInputStream.checkLastTagWas(CodedInputStream.java:124)
    at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:202)
    at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:217)
    at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:223)
    at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
    at org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos$BlockOpResponseProto.parseFrom(DataTransferProtos.java:20531)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1622)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1541)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:683)
17/04/12 16:08:40 INFO hdfs.DFSClient: Abandoning BP-1288191314-127.0.0.1-1470859564889:blk_1073742751_1929
17/04/12 16:08:40 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[10.0.2.15:50010,DS-5dd411ac-a868-4285-8cbf-bbac9879a2ed,DISK]
17/04/12 16:08:40 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hive/warehouse/citylots_modified.json._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1610)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3315)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:679)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:214)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:489)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)

    at org.apache.hadoop.ipc.Client.call(Client.java:1471)
    at org.apache.hadoop.ipc.Client.call(Client.java:1408)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:409)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1733)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1529)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:683)
put: File /user/hive/warehouse/citylots_modified.json._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

2 Replies

Champion
There is probably an issue with the client connecting to the DataNode. The NameNode is reporting one live datanode, yet it failed to place any replica on it; if the client itself were failing while writing out the first replica, I would expect a different error. Check the NameNode web UI to validate that the DataNode is live, and check the NameNode and DataNode logs for more detail on the underlying issue. A couple of quick checks are sketched below.
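One way to confirm from the client machine that the DataNode is live, and to see which address the NameNode advertises for it, is the dfsadmin report. This is a sketch, not a definitive diagnosis; dfsadmin -report is standard HDFS tooling, and the NameNode address below is taken from your put command:

hdfs dfsadmin -fs hdfs://192.168.56.101:8020 -report

If the "Live datanodes" section lists the DataNode at 10.0.2.15 (the address your log shows being excluded, which looks like a VirtualBox NAT address) while your client can only reach the VM at 192.168.56.101, then the NameNode is handing your client a DataNode address it cannot route to, which would match the connection failure described above. A commonly used client-side workaround, assuming the VM's hostname (typically quickstart.cloudera for the QuickStart VM) resolves from your machine, is to tell the HDFS client to connect to DataNodes by hostname rather than by the advertised IP. In the hdfs-site.xml on the client:

<!-- Connect to DataNodes by hostname instead of the IP the NameNode
     advertises; assumes the VM hostname resolves from this client. -->
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>

together with a hypothetical /etc/hosts entry on the client mapping the VM's reachable address to its hostname:

192.168.56.101  quickstart.cloudera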