
"File /user/root/tmp/test.txt" could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

Contributor

Hi guys,

First of all, I am new to Hortonworks. I have tried to connect from NiFi on my local machine to a remote HDP instance (actually its HDFS) running in a VM. I use the PutHDFS processor, but unfortunately I get the following error:

org.apache.nifi.processor.exception.ProcessException: IOException thrown from PutHDFS[id=07af2532-015a-1000-acc3-3ed53acfcc7c]: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/root/tmp/test.txt could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
 at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1641)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3198)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3122)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:843)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:500)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)

It should be noted:

I set the core-site.xml and hdfs-site.xml files on the PutHDFS processor;

Only one NameNode instance is running, and it is not in safe mode;

There is one DataNode instance up and running, and no node is dead;

The NameNode and DataNode instances are both running, but I don't know whether they can communicate with each other (see the commands below for checking this);

I checked the DataNode and NameNode logs and didn't notice anything unusual;

I forwarded ports 50010 and 8020 using VirtualBox;

The reserved space for DataNode instances (dfs.datanode.du.reserved) is set to 200000000, and the disk usage is as follows:

[screenshot: 12209-screenshot-from-2017-02-08-09-17-30.png]

My storage status in VirtualBox is as follows:

[screenshot: 12210-screenshot-from-2017-02-08-09-56-49.png]
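For reference, the standard HDFS admin commands below can confirm the points above from a shell inside the VM (assuming the hdfs CLI is on the PATH):

 hdfs dfsadmin -report                             # lists live/dead DataNodes and their capacity
 hdfs dfsadmin -safemode get                       # should print "Safe mode is OFF"
 hdfs getconf -confKey dfs.datanode.du.reserved    # confirms the reserved-space setting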

Before this error, I got another error, shown below:

putHDFS[id=07af2532-015a-1000-acc3-3ed53acfcc7c] Failed to write to HDFS due to java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.: java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found

To resolve it, I removed a property (io.compression.codecs) from core-site.xml. Maybe that has caused the current error.
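For what it's worth, instead of deleting the whole property, it should be enough to remove only the com.hadoop.compression.lzo.LzoCodec entry from the comma-separated list in io.compression.codecs, so the remaining codecs stay available. A quick way to check which codecs the core-site.xml handed to PutHDFS still declares (the path below is a placeholder for wherever your copy of the file lives):

 grep -A 2 io.compression.codecs /path/to/core-site.xml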

Thanks.

1 ACCEPTED SOLUTION

7 REPLIES

Master Mentor

Try adding the rest of the HDFS ports to your port forwarding.

See https://ambari.apache.org/1.2.3/installing-hadoop-using-ambari/content/reference_chap2_1.html for the full list of Hadoop ports, and if you are on the HDP 2.5 sandbox, follow this tutorial to set up the port forwarding: https://community.hortonworks.com/articles/65914/how-to-add-ports-to-the-hdp-25-virtualbox-sandbox.h...
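As a rough sketch, the rules can also be added from the host's command line with VBoxManage; the VM name "Hortonworks Sandbox" is an assumption (check yours with VBoxManage list vms), and the VM should be powered off first:

 VBoxManage modifyvm "Hortonworks Sandbox" --natpf1 "nn-rpc,tcp,,8020,,8020"      # NameNode RPC
 VBoxManage modifyvm "Hortonworks Sandbox" --natpf1 "nn-web,tcp,,50070,,50070"    # NameNode web UI
 VBoxManage modifyvm "Hortonworks Sandbox" --natpf1 "dn-data,tcp,,50010,,50010"   # DataNode data transfer
 VBoxManage modifyvm "Hortonworks Sandbox" --natpf1 "dn-ipc,tcp,,50020,,50020"    # DataNode IPC
 VBoxManage modifyvm "Hortonworks Sandbox" --natpf1 "dn-web,tcp,,50075,,50075"    # DataNode web UI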

Contributor

Thank you, dear Artem.

I followed the tutorial, but when I ran the last command, "docker ps", I saw something else:

[screenshot: 12231-screenshot-from-2017-02-08-16-11-38.png]

Now I can't connect to the VM from NiFi any more. I get this error:

PutHDFS[id=015a1000-2532-17af-9400-6501c5ca7018] Failed to write to HDFS due to java.io.IOException: Failed on local exception: java.io.IOException: Connection reset by peer; Host Details : local host is: "shanghoosh-All-Series/127.0.1.1"; destination host is: "sandbox.hortonworks.com":8020; : java.io.IOException: Failed on local exception: java.io.IOException: Connection reset by peer; Host Details : local host is: "shanghoosh-All-Series/127.0.1.1"; destination host is: "sandbox.hortonworks.com":8020; 
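(One thing worth ruling out here: the local machine must resolve sandbox.hortonworks.com to the address the forwarded ports live on. Assuming they are forwarded to localhost, a common workaround is a hosts entry like the one below, though that alone won't help if the forwarding itself now points at the wrong container:

 echo '127.0.0.1   sandbox.hortonworks.com' | sudo tee -a /etc/hosts
)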

Master Mentor

Install NiFi in the same container as the sandbox.
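A minimal sketch of that, assuming the sandbox container appears in docker ps under the name "sandbox" (both the container name and the NiFi version here are assumptions):

 docker exec -it sandbox bash      # open a shell inside the sandbox container
 wget https://archive.apache.org/dist/nifi/1.1.1/nifi-1.1.1-bin.tar.gz
 tar xzf nifi-1.1.1-bin.tar.gz
 nifi-1.1.1/bin/nifi.sh start      # NiFi web UI defaults to port 8080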

Contributor

Dear Artem,

I tried again, and now I have a working remote HDFS.

Thanks a million

Explorer

I use HDP 2.6.5 and I cannot connect to the sandbox VM. Has something changed in version 2.6.5?

Thanks,

Marcel

Explorer

I found the solution: in HDP 2.6.5, the SSH port for the sandbox VM is the standard port 22.
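So, for example (the hostname is an assumption; substitute whatever your VM answers to):

 ssh -p 22 root@sandbox-hdp.hortonworks.com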

Contributor

Note: some information will be lost when you do what we did.

For example, I had Lucidworks installed, but now I have to install it again.