Member since: 07-22-2020
Posts: 3
Kudos Received: 2
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1492 | 08-06-2020 07:49 AM |
08-06-2020
07:49 AM
2 Kudos
I finally found the correct way to do that. I used Ambari to create a new configuration group that includes only the new hosts, and then added the extra disk paths to the dfs.datanode.data.dir parameter in that new configuration group only. That integrates the extra disks on the new nodes into HDFS, while the older nodes are not affected by the parameter change. Reference: https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/managing-and-monitoring-ambari/content/amb_managing_host_configuration_groups.html
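For anyone landing here later, here is a rough sketch of what the per-group values end up looking like (the mount points are just the ones from this thread; adjust them to your own disk layout):

```xml
<!-- Default configuration group: the existing DataNodes with two data disks -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data/disk1,/data/disk2</value>
</property>

<!-- New configuration group (contains only the new hosts): four data disks -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data/disk1,/data/disk2,/data/disk3,/data/disk4</value>
</property>
```

Ambari applies the override only to the hosts that belong to the new configuration group, so the existing DataNodes keep the two-disk value.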
07-27-2020
02:51 AM
@SagarKanani Thank you for your reply. Referring to the documentation, I found the following: "dfs.datanode.data.dir: Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data is stored in all named directories, typically on different devices. Directories that do not exist are ignored. Heterogeneous storage allows specifying that each directory resides on a different type of storage: DISK, SSD, ARCHIVE, or RAM_DISK." (https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.3/bk_hdfs-administration/content/configuration_props.html) I think that means the paths /data/disk3 and /data/disk4 will simply be ignored on the old nodes, right? Has anyone tried this scenario before?
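To make the quoted passage concrete, here is a sketch of the property as a comma-delimited list (the paths are the ones from my question; the storage-type prefixes are optional and only shown because the documentation mentions them):

```xml
<property>
  <name>dfs.datanode.data.dir</name>
  <!-- Comma-delimited list of local directories; each entry can optionally be
       tagged with a storage type such as [DISK], [SSD], [ARCHIVE], or [RAM_DISK] -->
  <value>[DISK]/data/disk1,[DISK]/data/disk2,[DISK]/data/disk3,[DISK]/data/disk4</value>
</property>
```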
07-22-2020
01:11 PM
Hi, I need to add new hosts to an existing cluster using Ambari, but the new hosts have more disks than the old nodes, and I want to add those disks to HDFS (old nodes have /data/disk1 and /data/disk2, while new nodes have /data/disk1, /data/disk2, /data/disk3, and /data/disk4). How can I add those extra disks after adding the nodes? Can I just update dfs.datanode.data.dir?
Labels:
- Apache Ambari
- HDFS