Support Questions


hdfs dfs -setrep -w 3 fails and Target Replicas is 3 but found 2 live replica(s), 0 decommissioned

Explorer

hdfs dfs -setrep -w 3 fails and Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

 

hadoop fs -setrep -w 3 /
^C[root@host1 ~]# hadoop fs -setrep -w 3 /user/root/ml-20m/genome-scores.csv
Replication 3 set: /user/root/ml-20m/genome-scores.csv
Waiting for /user/root/ml-20m/genome-scores.csv ........^C[root@host1 ~]# hadoop fs -setrep -w 3 /user/root/ml-20m/genome-scores.csv
Replication 3 set: /user/root/ml-20m/genome-scores.csv
Waiting for /user/root/ml-20m/genome-scores.csv ........................................................................................................................................................................................................
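Worth noting: -setrep -w blocks until every block actually reaches the target replication, which cannot happen here, so the command appears to hang. A sketch of a non-blocking alternative using the same standard commands:

```shell
# Set the replication factor without -w; the NameNode schedules the extra
# copies in the background and the command returns immediately.
hadoop fs -setrep 3 /user/root/ml-20m/genome-scores.csv

# Check how many live replicas each block currently has.
hdfs fsck /user/root/ml-20m/genome-scores.csv -files -blocks -locations
```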

 

hdfs fsck /user/root/ml-20m/genome-scores.csv -files -blocks -locations
Connecting to namenode via http://host1.cloudera.com:50070
FSCK started by root (auth:SIMPLE) from /172.31.32.255 for path /user/root/ml-20m/genome-scores.csv at Thu May 04 04:57:20 UTC 2017
/user/root/ml-20m/genome-scores.csv 323544381 bytes, 3 block(s): Under replicated BP-1580841952-172.31.32.255-1493753849966:blk_1073745713_4889. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Under replicated BP-1580841952-172.31.32.255-1493753849966:blk_1073745714_4890. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
Under replicated BP-1580841952-172.31.32.255-1493753849966:blk_1073745715_4891. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
0. BP-1580841952-172.31.32.255-1493753849966:blk_1073745713_4889 len=134217728 Live_repl=2 [DatanodeInfoWithStorage[172.31.63.45:50010,DS-c9f8ac9f-97fc-4547-a5c5-a5373055a50d,DISK], DatanodeInfoWithStorage[172.31.56.160:50010,DS-741c0afe-1496-4e6e-9662-b83677d6e0dc,DISK]]
1. BP-1580841952-172.31.32.255-1493753849966:blk_1073745714_4890 len=134217728 Live_repl=2 [DatanodeInfoWithStorage[172.31.56.160:50010,DS-741c0afe-1496-4e6e-9662-b83677d6e0dc,DISK], DatanodeInfoWithStorage[172.31.63.45:50010,DS-c9f8ac9f-97fc-4547-a5c5-a5373055a50d,DISK]]
2. BP-1580841952-172.31.32.255-1493753849966:blk_1073745715_4891 len=55108925 Live_repl=2 [DatanodeInfoWithStorage[172.31.63.45:50010,DS-c9f8ac9f-97fc-4547-a5c5-a5373055a50d,DISK], DatanodeInfoWithStorage[172.31.56.160:50010,DS-741c0afe-1496-4e6e-9662-b83677d6e0dc,DISK]]

Status: HEALTHY
Total size: 323544381 B
Total dirs: 0
Total files: 1
Total symlinks: 0
Total blocks (validated): 3 (avg. block size 107848127 B)
Minimally replicated blocks: 3 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 3 (100.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 2.0
Corrupt blocks: 0
Missing replicas: 3 (33.333332 %)
Number of data-nodes: 2
Number of racks: 1
FSCK ended at Thu May 04 04:57:20 UTC 2017 in 3 milliseconds

 

hdfs dfs -setrep
-setrep: Not enough arguments: expected 2 but got 0
Usage: hadoop fs [generic options] -setrep [-R] [-w] <rep> <path> ...

 

 

sudo -u hdfs hdfs fsck /

/user/root/penndtch.1: Under replicated BP-1580841952-172.31.32.255-1493753849966:blk_1073745711_4887. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).
.
/user/root/psd7003.xml: Under replicated BP-1580841952-172.31.32.255-1493753849966:blk_1073745703_4879. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/root/psd7003.xml: Under replicated BP-1580841952-172.31.32.255-1493753849966:blk_1073745704_4880. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/root/psd7003.xml: Under replicated BP-1580841952-172.31.32.255-1493753849966:blk_1073745705_4881. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/root/psd7003.xml: Under replicated BP-1580841952-172.31.32.255-1493753849966:blk_1073745706_4882. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/root/psd7003.xml: Under replicated BP-1580841952-172.31.32.255-1493753849966:blk_1073745707_4883. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

/user/root/psd7003.xml: Under replicated BP-1580841952-172.31.32.255-1493753849966:blk_1073745708_4884. Target Replicas is 3 but found 2 live replica(s), 0 decommissioned replica(s) and 0 decommissioning replica(s).

.Status: HEALTHY
Total size: 3184936505 B
Total dirs: 417
Total files: 1986
Total symlinks: 0
Total blocks (validated): 1994 (avg. block size 1597260 B)
Minimally replicated blocks: 1994 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 20 (1.0030091 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 2.0
Corrupt blocks: 0
Missing replicas: 20 (0.499002 %)
Number of data-nodes: 2
Number of racks: 1
FSCK ended at Thu May 04 05:49:14 UTC 2017 in 123 milliseconds

 

I am using Cloudera 5.10.1 with:

1 namenode

1 secondary namenode

2 data nodes

Replication Factor (dfs.replication), HDFS (Service-Wide): 3

Can anyone help me understand why it is failing to replicate to 3?
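Two quick checks that make the mismatch visible directly (both are standard HDFS CLI tools; the grep pattern assumes the usual dfsadmin report format):

```shell
# The configured default replication factor.
hdfs getconf -confKey dfs.replication

# The number of live DataNodes; a replication target higher than this
# count leaves blocks permanently under-replicated.
hdfs dfsadmin -report | grep "Live datanodes"
```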

4 REPLIES

Mentor
Two DataNodes cannot carry 3 replicas, because each replica of a block must reside on a unique DataNode host. Either add a third DataNode, or change the replication factor to 2 to match your DataNode count.
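If you take the second option, both the existing files and the default need changing; a minimal sketch:

```shell
# Re-replicate everything already in HDFS down to 2 copies and wait.
hadoop fs -setrep -R -w 2 /

# New files still use dfs.replication; lower it to 2 as well
# (Cloudera Manager: HDFS -> Configuration -> Replication Factor).
```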

Explorer

hadoop fs -setrep -w 2 / is working.

If I have 3 DataNodes, can I replicate to 4?

Mentor
The general rule is that N replicas require at least N DataNodes, because each DataNode holds at most one replica of a given block. So you cannot have 4 live replicas on a cluster of 3 DataNodes.

In that case you would only observe 3 live replicas, and the block would be marked under-replicated (target of 4, but live replicas capped at 3), just like your previous situation. The file should still be readable and writable, though.
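To see only the affected blocks from a full check, the "Under replicated" lines in the fsck output can be filtered, e.g.:

```shell
# Print just the under-replicated block reports.
sudo -u hdfs hdfs fsck / | grep "Under replicated"
```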

Explorer

I have added one more DataNode and tried hadoop fs -setrep -w 3 / again.

It is working now; no under-replicated blocks remain.