Unable to write to HDFS
- Labels: Apache Hadoop
Created 03-26-2019 11:21 PM
In a single-node test cluster I suddenly cannot run any MR jobs or write to HDFS. I keep getting this error:
```
$ hdfs dfs -put war-and-peace.txt /user/hands-on/
19/03/25 18:28:29 WARN hdfs.DataStreamer: Exception for BP-1098838250-127.0.0.1-1516469292616:blk_1073742374_1550
java.io.EOFException: Unexpected EOF while trying to read response from server
        at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:399)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
        at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1020)
put: All datanodes [DatanodeInfoWithStorage[127.0.0.1:50010,DS-b90326de-a499-4a43-a66a-cc3da83ea966,DISK]] are bad. Aborting...
```
"hdfs dfsadmin -report" shows me everything is fine.
```
$ hdfs dfsadmin -report
Configured Capacity: 52710469632 (49.09 GB)
Present Capacity: 43335585007 (40.36 GB)
DFS Remaining: 43334025216 (40.36 GB)
DFS Used: 1559791 (1.49 MB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (1):

Name: 127.0.0.1:50010 (localhost)
Hostname: localhost
Decommission Status : Normal
Configured Capacity: 52710469632 (49.09 GB)
DFS Used: 1559791 (1.49 MB)
Non DFS Used: 6690530065 (6.23 GB)
DFS Remaining: 43334025216 (40.36 GB)
DFS Used%: 0.00%
DFS Remaining%: 82.21%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 2
Last contact: Mon Mar 25 18:30:45 EDT 2019
```
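For completeness, one way to cross-check the DataNode's view of the failing write is to search its log for the block ID from the error above and ask the NameNode for block-level health of the target directory. This is only a sketch; the log path assumes the stock layout under $HADOOP_HOME/logs and may differ on your install.

```
# Look for the block ID from the put error in the DataNode log
# (path assumes the default $HADOOP_HOME/logs layout).
grep "blk_1073742374" $HADOOP_HOME/logs/hadoop-*-datanode-*.log

# Block-level health of the target directory as seen by the NameNode.
hdfs fsck /user/hands-on/ -files -blocks -locations
```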
Any suggestions are appreciated.
Created 03-27-2019 11:05 AM
It looks like the NameNode is not able to reach the DataNode. Please check for network issues and any recent changes to the HDFS configs, such as rack assignment changes.
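A minimal sanity check along those lines, assuming the data-transfer port 50010 shown in the dfsadmin report and the default log location, might look like this:

```
# Is the DataNode JVM running on this host?
jps | grep -i datanode

# Is its data-transfer port (50010 in the dfsadmin report) accepting connections?
nc -z 127.0.0.1 50010 && echo "DataNode port 50010 is reachable"

# Anything suspicious in the most recent DataNode log entries?
# Path assumes the default $HADOOP_HOME/logs layout; adjust for your distribution.
tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
```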
Created 03-27-2019 01:59 PM
It is a single-node cluster, so how can there be network issues? And I did not change any configs; that is the weird thing.
