Support Questions

Failed to place enough replicas: expected size is 2 but only 0 storage types can be selected

Explorer

I have HDFS with 3 DataNodes and 1 NameNode.

Both dfs.data.dir and dfs.datanode.data.dir are set to [DISK]/dfs/dn.
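
For reference, that setting normally looks like this in hdfs-site.xml. The [DISK] prefix declares the storage type of the directory (DISK is also the default when no prefix is given), and dfs.data.dir is just the older, deprecated name for the same property:

<property>
  <name>dfs.datanode.data.dir</name>
  <value>[DISK]/dfs/dn</value>
</property>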

 

I kept getting the following errors in the NameNode log:

 

2017-07-04 23:37:42,134 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 2 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2017-07-04 23:37:42,135 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 2 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK, DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2017-07-04 23:37:42,136 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 2 to reach 3 (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All required storage types are unavailable: unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
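
These warnings mean the NameNode could not find a single live DataNode offering DISK storage (or the ARCHIVE fallback) for the missing replicas, so the real question is which DataNodes the NameNode actually considers usable. A quick way to check is hdfs dfsadmin -report on the command line, or the NameNode's JMX servlet; the following is a minimal sketch of the latter in Python, assuming the default Hadoop 2.x NameNode web port 50070 and a placeholder hostname nn-host.

# Minimal sketch: ask the NameNode's JMX servlet how many DataNodes it
# considers live, dead, decommissioning, or decommissioned.
# NAMENODE is a placeholder; 50070 is the Hadoop 2.x default web UI port.
import json
import urllib.request

NAMENODE = "http://nn-host:50070"

def fetch_bean(query):
    # The /jmx servlet returns {"beans": [...]} filtered by the qry parameter.
    url = "{}/jmx?qry={}".format(NAMENODE, query)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["beans"][0]

state = fetch_bean("Hadoop:service=NameNode,name=FSNamesystemState")
print("Live DataNodes:            ", state.get("NumLiveDataNodes"))
print("Dead DataNodes:            ", state.get("NumDeadDataNodes"))
print("Decommissioning DataNodes: ", state.get("NumDecommissioningDataNodes"))
print("Decommissioned, still live:", state.get("NumDecomLiveDataNodes"))

If the decommissioned or dead counts are not what you expect, the steps in the accepted solution below are the likely fix.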

 

1 ACCEPTED SOLUTION

Explorer

It took me a while to figure out why. I hope the following helps you.

 

  1. The DataNode status shown in Cloudera Manager is not reliable: even when it reports a node as commissioned, the node may not actually be commissioned in Hadoop. In my case, all 4 nodes showed as commissioned.

  2. The Apache Hadoop web UIs for the NameNode and DataNodes are much more reliable. In my case, the Hadoop web UI showed the true status: 2 of the 4 DataNodes were decommissioned.

  3. Always check your dfs_hosts_allow.txt and dfs_hosts_exclude.txt and make sure the DataNodes you need are in the allow file but not in the exclude file. The file locations are set in hdfs-site.xml (the dfs.hosts and dfs.hosts.exclude properties); see the sketch after this list.

  4. In Cloudera CDH only: to recommission a DataNode, go to that host in Cloudera Manager, select the DataNode role, decommission it to clear Cloudera's stale state, and then recommission it in Cloudera Manager.

  5. Check that the Apache Hadoop web UI shows the correct number of commissioned DataNodes.
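
Point 3 is easy to script. The sketch below is one way to read the dfs.hosts (include) and dfs.hosts.exclude file paths out of hdfs-site.xml and check a DataNode hostname against them; the hdfs-site.xml path is only a placeholder, so substitute the file your NameNode actually uses (on a Cloudera Manager cluster the effective copy sits in the NameNode's process directory). After correcting the files, hdfs dfsadmin -refreshNodes makes the NameNode re-read them.

# Minimal sketch: check whether a DataNode hostname appears in the include
# (dfs.hosts) and exclude (dfs.hosts.exclude) files referenced by hdfs-site.xml.
# HDFS_SITE is a placeholder path; point it at your cluster's real config.
import sys
import xml.etree.ElementTree as ET

HDFS_SITE = "/etc/hadoop/conf/hdfs-site.xml"

def get_property(name):
    # hdfs-site.xml is a <configuration> of <property><name>/<value> entries.
    root = ET.parse(HDFS_SITE).getroot()
    for prop in root.findall("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

def hosts_in(path):
    # Hostnames listed in an include/exclude file; empty set if the property is unset.
    if not path:
        return set()
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

datanode = sys.argv[1]  # hostname of the DataNode to check
include = hosts_in(get_property("dfs.hosts"))
exclude = hosts_in(get_property("dfs.hosts.exclude"))

# An empty or unset dfs.hosts means every DataNode may register.
print("allowed by include file:", not include or datanode in include)
print("listed in exclude file: ", datanode in exclude)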

