I want to write a file to a VM DataNode with a local Java client
Labels: HDFS, Hortonworks Data Platform (HDP)
Created 01-13-2021 10:51 PM
Hi,
I have HDP 2.6.5 running in VirtualBox with the network mode set to "bridged networking".
I have opened port 50010 in the sandbox VM, and the port is reachable from the host (a minimal check is sketched below).
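For reference, a minimal sketch of such a reachability check (hostname and port taken from this post):

import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) throws Exception {
        // Plain TCP connect from the Windows host to the DataNode transfer port
        try (Socket s = new Socket("sandbox-hdp.hortonworks.com", 50010)) {
            System.out.println("Port 50010 is reachable");
        }
    }
}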
Now I want to create a text file from my Windows machine with a Java program:
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

Configuration conf = new Configuration();
// Connect to DataNodes by hostname instead of the IP address the NameNode reports
conf.set("dfs.client.use.datanode.hostname", "true");
FileSystem fs = new DistributedFileSystem();
fs.initialize(new URI("hdfs://sandbox-hdp.hortonworks.com:8020"), conf);
FSDataOutputStream fdos = fs.create(new Path("/testing/file02.txt"), true);
fdos.writeBytes("Test text for the txt file");
fdos.flush();
fdos.close();
fs.close();
The text file does get created, but it is empty, and I get the following error:
21/01/13 19:13:00 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.EOFException: Premature EOF: no length prefix available
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2294)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1480)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1400)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:554)
21/01/13 19:13:00 INFO hdfs.DFSClient: Abandoning BP-1389476969-172.18.0.2-1610559785364:blk_1073741850_1026
21/01/13 19:13:00 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[172.18.0.2:50010,DS-11d5c1d9-fa92-46fd-b130-cd05683300c7,DISK]
21/01/13 19:13:00 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /testing/file02.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1719)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3372)
My local PC's IP is 192.168.178.2 and the VM's IP is 192.168.178.39.
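To see which DataNode address the NameNode is actually handing out, a small diagnostic along these lines can be run against the fs object from the code above (getDataNodeStats() is part of the DistributedFileSystem API):

import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

// Print the hostname/IP/transfer port the NameNode reports for each DataNode
for (DatanodeInfo dn : ((DistributedFileSystem) fs).getDataNodeStats()) {
    System.out.println(dn.getHostName() + " -> " + dn.getIpAddr() + ":" + dn.getXferPort());
}

The 172.18.0.2:50010 in the log above is exactly such a Docker-internal address, which is unreachable from the Windows host.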
I have already tried everything recommended in this thread, but nothing has helped:
https://community.cloudera.com/t5/Support-Questions/Cannot-copy-from-local-machine-to-VM-datanode-vi...
Does anyone have an idea what the problem is?
Thanks
Marcel
Created 01-14-2021 06:06 AM
Hi,
I solved the problem.
I added port 50010 to the tcp-hdp.conf that is mapped into the Docker container, so the DataNode transfer port is now forwarded to the host.
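For reference, assuming the standard sandbox setup where an NGINX stream proxy fronts the HDP container, the added entry would look roughly like this (a sketch only; the exact file path and upstream hostname depend on your deployment):

server {
    listen 50010;
    proxy_pass sandbox-hdp:50010;
}

Without this forwarding the client can reach the NameNode on 8020 but not the DataNode on 50010, which is exactly the "could only be replicated to 0 nodes" failure above. The proxy container has to be restarted afterwards for the new mapping to take effect.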
Created 01-14-2021 09:44 AM
I'm happy to see you resolved your issue. Please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
Cy Jervis, Manager, Community Program
