How to add more disks to HDFS?
Labels: Apache Hadoop
Created 01-13-2016 07:24 PM
Created 01-13-2016 07:44 PM
Add the disks to the DataNode servers and mount them.
Afterwards, shut down the DataNode, add the new mount points to dfs.datanode.data.dir as a comma-separated list (e.g. /grid/hadoop/hdfs/dn,/grid1/hadoop/hdfs/dn,/grid2/hadoop/hdfs/dn,...), and restart the DataNode.
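A minimal sketch of the resulting hdfs-site.xml entry, assuming the example mount points above; your actual mount paths will differ:

```xml
<!-- hdfs-site.xml: sketch assuming the example mount points above;
     replace the paths with your actual mount points. -->
<property>
  <name>dfs.datanode.data.dir</name>
  <!-- Comma-separated list; the DataNode spreads block storage across all listed directories. -->
  <value>/grid/hadoop/hdfs/dn,/grid1/hadoop/hdfs/dn,/grid2/hadoop/hdfs/dn</value>
</property>
```

If you manage the cluster with Ambari, make the equivalent change to the DataNode directories property there instead of editing the file by hand, then restart the DataNode so it picks up the new directories.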
Created 01-13-2016 07:36 PM
You add more storage to HDFS by adding new nodes (DataNodes). If you want to add disks to existing nodes, you will need to change the DataNode directories property (dfs.datanode.data.dir) in Ambari to include the newly created directories. @vijaya inturi
Created 02-02-2016 07:40 PM
@vijaya inturi, are you still having issues with this? Can you accept the best answer or provide your own solution?
