
WARN hdfs.DFSClient (DFSOutputStream.java:run(661)) - DataStreamer Exception

After copying 130 files through the NFS gateway, the copy fails with the following error.

hadoop-hdfs-nfs3.log

hdfs.DFSClient (DFSOutputStream.java:run(661)) - DataStreamer Exception

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/test03/bin/infocmp could only be replicated to 0 nodes instead of minReplication (=1).  There are 3 datanode(s) running and no node(s) are excluded in this operation.

hadoop-hdfs-namenode.log

WARN  blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseTarget(385)) - Failed to place enough replicas, still in need of 3 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
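
As the warning itself suggests, more detail on why each DataNode was rejected can be obtained by raising the log level of the two classes it names. One way to do this at runtime is hadoop daemonlog; the NameNode host below is a placeholder and 50070 is assumed to be its HTTP port:

# Temporarily enable DEBUG for the classes named in the warning (revert with INFO afterwards)
hadoop daemonlog -setlevel <namenode-host>:50070 org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy DEBUG
hadoop daemonlog -setlevel <namenode-host>:50070 org.apache.hadoop.net.NetworkTopology DEBUG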

Copying the same files with hdfs dfs -put works without problems, regardless of the number of files.
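
For reference, a sketch of the two copy paths being compared; the gateway host, mount point /hdfs_nfs, and source directory are placeholders, and the mount options follow the usual HDFS NFS gateway recommendations:

# Mount the NFS gateway (host and mount point are placeholders)
mount -t nfs -o vers=3,proto=tcp,nolock,sync <nfsgateway-host>:/ /hdfs_nfs

# Copying through the gateway fails after ~130 files with the DataStreamer exception
cp -r bin /hdfs_nfs/tmp/test03/

# Copying the same files directly with the HDFS client works with any number of files
hdfs dfs -put bin /tmp/test03/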

Why does copying through the NFS gateway cause this error?

HDP-2.6.0.3

4 Replies

Contributor

Hi @Eugene Mogilevsky

A possible issue would be disk space. Can you check and post the output of

hdfs dfsadmin -report

Cheers,

Ram

[hdfs@hdp01 ~]$ hdfs dfsadmin -report
Configured Capacity: 167402287104 (155.91 GB)
Present Capacity: 147762115362 (137.61 GB)
DFS Remaining: 102311189074 (95.28 GB)
DFS Used: 45450926288 (42.33 GB)
DFS Used%: 30.76%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------

Live datanodes (3):

Name: 10.100.4.124:50010 (hdp04.amb.corp)
Hostname: hdp04.amb.corp
Decommission Status : Normal
Configured Capacity: 55800762368 (51.97 GB)
DFS Used: 15149899221 (14.11 GB)
Non DFS Used: 5961487782 (5.55 GB)
DFS Remaining: 34118117574 (31.77 GB)
DFS Used%: 27.15%
DFS Remaining%: 61.14%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 16
Last contact: Wed Apr 19 14:30:53 EEST 2017

Name: 10.100.4.123:50010 (hdp03.amb.corp)
Hostname: hdp03.amb.corp
Decommission Status : Normal
Configured Capacity: 55800762368 (51.97 GB)
DFS Used: 15150591270 (14.11 GB)
Non DFS Used: 5822404181 (5.42 GB)
DFS Remaining: 34256509126 (31.90 GB)
DFS Used%: 27.15%
DFS Remaining%: 61.39%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 16
Last contact: Wed Apr 19 14:30:53 EEST 2017

Name: 10.100.4.122:50010 (hdp02.amb.corp)
Hostname: hdp02.amb.corp
Decommission Status : Normal
Configured Capacity: 55800762368 (51.97 GB)
DFS Used: 15150435797 (14.11 GB)
Non DFS Used: 6142506406 (5.72 GB)
DFS Remaining: 33936562374 (31.61 GB)
DFS Used%: 27.15%
DFS Remaining%: 60.82%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 16
Last contact: Wed Apr 19 14:30:53 EEST 2017

dfs.blocksize = 134217728

130 files * 134217728 = 17448304640 (17 GB)

DFS Remaining: 34118117574 (31.77 GB)

Reserved space for HDFS: 2146172928 (2 GB)

17 GB + 2 GB = 19 GB. Why is roughly 31 GB of remaining space not enough?
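
A quick sketch of the arithmetic above, under the worst-case assumption that all 130 files written through the gateway each reserve a full dfs.blocksize block at the same time (values taken from the report on this cluster):

# Worst-case space needed vs. space available on one DataNode (hdp04)
BLOCK=134217728         # dfs.blocksize
FILES=130
RESERVED=2146172928     # reserved space for HDFS (2 GB)
REMAINING=34118117574   # DFS Remaining on hdp04

echo "worst-case need : $(( FILES * BLOCK + RESERVED )) bytes"   # ~19 GB
echo "space available : $REMAINING bytes"                        # ~31 GB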

Contributor

@Eugene Mogilevsky did you figure this out?