Member since
02-01-2017
5
Posts
4
Kudos Received
3
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 13853 | 07-09-2017 07:14 AM |
| | 38025 | 07-04-2017 07:50 AM |
| | 37836 | 07-04-2017 07:17 AM |
07-09-2017
07:14 AM
It took me a while to figure out why; I hope the following helps you. The DataNode status shown in Cloudera Manager is not reliable: it may report a node as commissioned when the node is in fact not OK in Hadoop. In my case, all 4 nodes showed as commissioned fine in Cloudera Manager, while the Apache web interfaces for the DataNode and NameNode, which are much more reliable, showed the true status: 2 of the 4 DataNodes were decommissioned.

Always check your dfs_hosts_allow.txt and dfs_hosts_exclude.txt, and make sure the DataNodes you need are listed in the allow file but not in the exclude file. The locations of these files are configured in hdfs-site.xml.

In Cloudera CDH specifically, to recommission a DataNode: go to that node in Cloudera Manager, select the DataNode role, decommission it to clear the stale state in Cloudera, and then recommission it. Finally, check the Apache DataNode/NameNode web interface and make sure it shows the correct number of commissioned nodes.
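For reference, the allow/exclude files are wired up in hdfs-site.xml via the `dfs.hosts` and `dfs.hosts.exclude` properties. A minimal sketch follows; the file paths shown are just an example, since the actual locations depend on your deployment:

```xml
<!-- hdfs-site.xml: point the NameNode at the allow/exclude lists.
     The paths below are assumptions; use whatever your cluster actually configures. -->
<property>
  <name>dfs.hosts</name>
  <value>/etc/hadoop/conf/dfs_hosts_allow.txt</value>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/conf/dfs_hosts_exclude.txt</value>
</property>
```

A DataNode must appear in the allow file (if one is configured) and must not appear in the exclude file for the NameNode to accept it.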
07-06-2017
05:29 PM
I have HDFS with 3 DataNodes and 1 NameNode. Both dfs.data.dir and dfs.datanode.data.dir are set to [DISK]/dfs/dn. I kept getting the following errors in the NameNode log:

2017-07-04 23:37:42,134 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 2 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2017-07-04 23:37:42,135 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 2 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK, DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2017-07-04 23:37:42,136 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 2 to reach 3 (unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All required storage types are unavailable: unavailableStorages=[DISK, ARCHIVE], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
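The warning itself suggests the next diagnostic step: enable DEBUG logging on the two classes it names. In standard log4j properties syntax that looks like the fragment below (how you apply it, e.g. via a NameNode logging safety valve in Cloudera Manager, depends on your setup and is an assumption here):

```
# Assumed log4j.properties additions for the NameNode, as suggested by the WARN message
log4j.logger.org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy=DEBUG
log4j.logger.org.apache.hadoop.net.NetworkTopology=DEBUG
```

With DEBUG enabled, the NameNode log should explain why each DataNode was rejected as a replica target (e.g. excluded, out of space, or wrong storage type).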
Labels:
- HDFS
07-04-2017
07:50 AM
4 Kudos
I found an alternative for this based on the documentation at https://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_hdfs_cluster_deploy.html#topic_11_2_5 — I ran the following command and formatted the NameNode. Thanks all.

sudo -u hdfs hdfs namenode -format
07-04-2017
07:17 AM
Thanks for the advice. I am new to the Hadoop world, and your suggestion is very helpful. However, I am not sure the format option is available through the interface after stopping the NameNode. Here is my screenshot. I tried to see if there is an action I can take for the selected instance, but there is no option to format the NameNode and recreate these checkpoint-related files. Thanks again for your time.
07-02-2017
09:26 PM
"Version: Cloudera Enterprise Data Hub Edition Trial 5.11.0 (#101 built by jenkins on 20170412-1249 git: 70cb1442626406432a6e7af5bdf206a384ca3f98)"
Failed to start the NameNode for HDFS, with the following error: "java.io.IOException: NameNode is not formatted."
2017-07-03 00:14:56,117 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: NameNode is not formatted.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:222)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1096)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:780)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)
I tried to follow "http://www.cloudera.com/documentation/manager/5-0-x/Cloudera-Manager-Managing-Clusters/cm5mc_nn.html" to format the NameNode directory, but there is no such action in the Actions menu for the selected NameNode instance.
Labels:
- Cloudera Manager
- HDFS