
Cannot copy from local machine to VM datanode via Java



I have an application that copies data to HDFS, but is failing due to the datanode being excluded. See snippet:

private void copyFileToHdfs(FileSystem hdfs, Path localFilePath, Path hdfsFilePath) throws IOException {
    LOG.info("Copying " + localFilePath + " to " + hdfsFilePath);
    hdfs.copyFromLocalFile(localFilePath, hdfsFilePath);
}

However, when I try to execute, I get:

org.apache.hadoop.ipc.RemoteException( File /user/dev/workflows/test.jar could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(

HDFS commands work fine: I can create, modify, and delete files. Copying data is the only problem. I've also checked the ports by telnet: I can telnet to 8020 but not to 50010. I assume this is the root of the issue and why the single datanode is being excluded. I added an iptables firewall rule to open the port, but I'm still running into the same issue.
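For reference, the telnet check can be reproduced from plain Java as well (a minimal sketch; the hostname is a placeholder for whatever your VM resolves to):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean isOpen(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "my-vm-host"; // placeholder VM hostname
        System.out.println("8020  (NameNode RPC):      " + isOpen(host, 8020, 2000));
        System.out.println("50010 (DataNode transfer): " + isOpen(host, 50010, 2000));
    }
}
```

If 8020 connects but 50010 refuses or times out, the client can reach the namenode for metadata but cannot stream blocks to the datanode.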

Any help is appreciated.


Accepted Solutions


So, the problem turned out to be two issues. One was that the VM does not have port 50010 open by default, so the single datanode was excluded, leading to the error above. The other was that I needed to set "dfs.client.use.datanode.hostname" to true so the client would not try to reach the datanode at its VM-internal IP, which I did set. However, after stepping through the configuration, I found it was still "false", which turned out to be a bug in my own code: my FileSystem object was being created with a plain new Configuration() rather than the one I had loaded with hdfs-site and core-site pulled from Ambari. Whoops!
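For anyone hitting the same thing, the fix looks roughly like this (a sketch only; the config file paths are assumptions for a typical client setup, and it won't run without a reachable cluster):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientFactory {
    static FileSystem createFileSystem() throws IOException {
        Configuration conf = new Configuration();
        // Load the cluster configs downloaded from Ambari; paths are illustrative.
        conf.addResource(new Path("conf/core-site.xml"));
        conf.addResource(new Path("conf/hdfs-site.xml"));
        // Connect to datanodes by hostname instead of the VM-internal IP
        // that the namenode reports back to the client.
        conf.setBoolean("dfs.client.use.datanode.hostname", true);
        // Pass THIS configuration to FileSystem.get(), not a fresh new Configuration(),
        // or the setting above is silently ignored.
        return FileSystem.get(conf);
    }
}
```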

Anyway, Thanks for the help all.



@Jim Fratzke

See this Link


No effect on the problem.

@Jim Fratzke

Please follow these suggestions. Link

You have to make sure the DataNode is running and that there is connectivity between the NameNode and the DataNode.


My dfsadmin -report shows available space, and if I SSH onto the VM and copy data from the VM to HDFS, it works fine. The problem only occurs when I attempt to copy from my host machine to the VM's HDFS.


@Jim Fratzke

Option 1

a) Add your hostname to conf/slaves and retry!

Option 2

There are cases when a DataNode may not be available to the NameNode. The causes could be:

a) The DataNode is busy with block scanning and reporting

b) The DataNode disk is full

c) The dfs.block.size value in hdfs-site.xml is negative

d) The system is low on disk space and the logs are warning you about it

e) The primary DataNode goes down while a write is in progress (any network fluctuation between the NameNode and DataNode machines)

Whenever a partial chunk is appended and sync is called, the client should buffer the previous data before appending subsequent partial chunks.

Option 3 (last resort)

The procedure below will destroy ALL data on HDFS. Do not execute these steps unless you do not care about destroying the existing data!

a) Stop all Hadoop services

b) Delete the dfs/name and dfs/data directories

c) Run "hadoop namenode -format" (answer with a capital Y)

d) Start the Hadoop services

e) Check the cluster's DFS health page at http://your_host:50070/dfshealth.jsp


@Jim Fratzke what does your datanode log say?


Not sure if it's related, but this shows up:

2016-02-14 18:15:19,232 ERROR datanode.DataNode ( - error processing unknown operation  src: / dst: /
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(


@Jim Fratzke

The -copyFromLocal command copies a file from the sandbox's local filesystem to HDFS, not from Windows to the sandbox.

You need to scp the file from Windows to the sandbox first, using the scp command or WinSCP, which works for copying files from Windows to any Unix-based server like the Hortonworks Sandbox.


I can scp files onto the VM and use "hadoop fs -copyFromLocal" to move the file onto HDFS; that works fine. The failure happens when I run the Java code on my local machine and attempt to copy data using the FileSystem object passed to me. I can run commands like hdfs.mkdirs() and hdfs.delete(), but hdfs.copyFromLocalFile() fails with the error described above.
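That split in behavior is consistent with the port situation: metadata operations are NameNode-only RPCs (port 8020), while writing file data also opens a block pipeline to a DataNode (port 50010). A hypothetical sketch of the distinction (illustrative only, not runnable without a cluster):

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OpComparison {
    // Illustrates which HDFS client calls touch which daemon.
    static void demo(FileSystem hdfs, Path local, Path remote) throws IOException {
        hdfs.mkdirs(remote.getParent());      // NameNode RPC only (8020) - succeeds
        hdfs.delete(remote, false);           // NameNode RPC only (8020) - succeeds
        hdfs.copyFromLocalFile(local, remote); // also streams blocks to DataNode (50010)
                                               // - fails when 50010 is unreachable
    }
}
```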