Archives of Support Questions (Read Only)

This is an archived board for historical reference. Information and links may no longer be available or relevant.
Announcements
This board is archived and read-only for historical reference. To ask a new question, please post a new topic on the appropriate active board.

dfs.data.dir question

Frequent Visitor

If I have several slave nodes with a varying number of drives mounted on each of them, how does the dfs.data.dir property work? If I include in this property a directory that exists on one DataNode but not on another, does Hadoop skip that value/drive on the DataNode where the directory is missing?

 

Thanks in advance!

1 ACCEPTED SOLUTION

Contributor

No, this isn't how it works.  If you use the same configuration on every host:

 

/mnt, /mnt2, /mnt3, /mnt4, /mnt5

 

A host that has only 3 drives (/mnt, /mnt2, /mnt3) will fail to start, depending on the value of dfs.datanode.failed.volumes.tolerated (default 0).  You're going to need to set up each server with the right value for dfs.data.dir.  For this (and other) reasons, a homogeneous cluster setup is usually preferred.
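To illustrate, here is a minimal sketch of per-host hdfs-site.xml fragments using the example mount points from this thread (paths are illustrative; note that on Hadoop 2.x and later the property is named dfs.datanode.data.dir, with dfs.data.dir kept as a deprecated alias):

```xml
<!-- hdfs-site.xml on a host with five data drives -->
<property>
  <name>dfs.data.dir</name>
  <value>/mnt,/mnt2,/mnt3,/mnt4,/mnt5</value>
</property>

<!-- hdfs-site.xml on a host with only three drives: list only what exists -->
<property>
  <name>dfs.data.dir</name>
  <value>/mnt,/mnt2,/mnt3</value>
</property>

<!-- Optionally allow a DataNode to start even if some listed
     directories are unusable (default 0: any failure is fatal) -->
<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <value>2</value>
</property>
```

The point is that each DataNode's directory list must match the drives actually mounted on that host, which is why heterogeneous clusters need host-specific configuration.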

Bryan Beaudreault
Senior Technical Lead, Data Ops
HubSpot, Inc

