Changing dfs.datanode.data.dir
Labels: Apache Hadoop
Created 05-11-2017 03:17 PM
HDP 2.5.3
I have a cluster with 34 datanodes, each with (11) 1.2TB disks for HDFS. I added three new nodes, but these only have (9) 1.2TB disks for HDFS. The new datanodes have been added, but it seems that not all of the file systems are seen by HDFS.
When I look at one of the older datanodes, hdfs-site.xml lists all the file systems (disk1 - disk11) under dfs.datanode.data.dir. On the new nodes, only disk1 through disk6 are listed, even though disk7, 8, and 9 are configured and mounted as file systems.
Question: how do I get these nodes to recognize the other disks? Can I edit hdfs-site.xml and add them to the list? If so, what are the steps?
I don't seem to be able to do this through Ambari.
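One quick way to compare the old and new nodes is to dump the dfs.datanode.data.dir entries from each node's live hdfs-site.xml (commonly under /etc/hadoop/conf). A minimal sketch; the sample document and its /grid/diskN mount paths below are made-up placeholders, not the cluster's actual paths:

```python
import xml.etree.ElementTree as ET

def list_data_dirs(hdfs_site_xml: str) -> list[str]:
    """Return the dfs.datanode.data.dir entries from an hdfs-site.xml document."""
    root = ET.fromstring(hdfs_site_xml)
    for prop in root.iter("property"):
        if prop.findtext("name") == "dfs.datanode.data.dir":
            value = prop.findtext("value") or ""
            # The property is a single comma-separated list of local directories.
            return [d.strip() for d in value.split(",") if d.strip()]
    return []

# Demo with a two-disk sample; real mount points will differ per cluster.
sample = """<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/grid/disk1/hadoop/hdfs/data,/grid/disk2/hadoop/hdfs/data</value>
  </property>
</configuration>"""

print(list_data_dirs(sample))
```

Running this against the file from an old node and a new node makes the missing disk7-disk9 entries obvious at a glance.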
Created 05-11-2017 03:42 PM
I figured out what was wrong. There are two HDFS configuration groups on this cluster, and one of them is set up for the datanodes. I just needed to add the new servers to that group.
Created 05-11-2017 03:28 PM
Is Ambari able to see the disks? Basically, you need to add new values to dfs.datanode.data.dir that point to these missing disks. Look at this screenshot: you need to add new lines for these nodes.
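For reference, the property looks like this in hdfs-site.xml, with one comma-separated entry per mounted disk. The /grid/diskN mount points here are placeholders; use whatever paths the new nodes actually mount:

```xml
<property>
  <name>dfs.datanode.data.dir</name>
  <!-- continue the list through the last disk, e.g. /grid/disk9/... -->
  <value>/grid/disk1/hadoop/hdfs/data,/grid/disk2/hadoop/hdfs/data,/grid/disk3/hadoop/hdfs/data</value>
</property>
```

A change to this property typically requires restarting the affected DataNodes to take effect.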
