
add grid datanode

Explorer

 

(Attached image: Grid.PNG)

- /grid/1/
- /grid/2/
- /grid/3/
- /grid/4/
- /grid/5/

I've attached an image; I don't know how to add those routes. Can someone help me?

Greetings

1 ACCEPTED SOLUTION

Contributor

It depends on what you want to change.

If you just want to add additional disks on all nodes, follow this:

The best approach is to create partitions like /grid/0/hadoop/hdfs/data through /grid/10/hadoop/hdfs/data and mount them on newly formatted disks. The mount options below are among the recommended parameters for HDFS data mounts, but you can change them:

/dev/sda1 /grid/0 ext4 inode_readahead_blks=128,commit=30,data=writeback,noatime,nodiratime,nodev,nobarrier 0 0
/dev/sdb1 /grid/1 ext4 inode_readahead_blks=128,commit=30,data=writeback,noatime,nodiratime,nodev,nobarrier 0 0
/dev/sdc1 /grid/2 ext4 inode_readahead_blks=128,commit=30,data=writeback,noatime,nodiratime,nodev,nobarrier 0 0

After that, just add all the partition paths to the HDFS configuration, like:

/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data,/grid/2/hadoop/hdfs/data
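For reference, this comma-separated list goes into the dfs.datanode.data.dir property in hdfs-site.xml (in Ambari it appears as the "DataNode directories" field). A rough sketch, using the example paths from above, not anything specific to your cluster:

```xml
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data,/grid/2/hadoop/hdfs/data</value>
</property>
```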

But don't delete the existing partition from the configuration, because you will lose the data blocks stored in /hadoop/hdfs/data.
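As a sketch of that point, the safe edit keeps the original directory and appends the new ones. The /grid/N paths below are the example values from above, not anything cluster-specific:

```shell
# Build the new dfs.datanode.data.dir value, keeping the original
# /hadoop/hdfs/data entry first so blocks already stored there remain visible.
OLD="/hadoop/hdfs/data"
NEW="$OLD"
for i in 0 1 2; do
  NEW="$NEW,/grid/$i/hadoop/hdfs/data"
done
echo "$NEW"
# → /hadoop/hdfs/data,/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data,/grid/2/hadoop/hdfs/data
```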

The exact paths don't really matter; just keep each one on its own disk, and don't forget to rebalance data between the disks.


5 REPLIES

Contributor

You need to prepare and mount disks before setting this configuration:

DataNode directories:

/hadoop/hdfs/data/grid/1/

/hadoop/hdfs/data/grid/2/

/hadoop/hdfs/data/grid/3/

/hadoop/hdfs/data/grid/4/

/hadoop/hdfs/data/grid/5/
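A minimal sketch of that preparation step, assuming the disks are already mounted. The ROOT variable here is a stand-in prefix so the sketch can be tried outside a real node; on an actual DataNode it would be empty and the chown would be uncommented:

```shell
# Create the per-disk HDFS data directories and print the comma-separated
# list to paste into the DataNode directories setting.
ROOT="/tmp/hdfs-demo"   # stand-in prefix for illustration; "" on a real node
DIRS=""
for i in 1 2 3 4 5; do
  d="$ROOT/hadoop/hdfs/data/grid/$i"
  mkdir -p "$d"
  # On a real node, the HDFS service user must own the directories:
  # chown -R hdfs:hadoop "$d"
  DIRS="${DIRS:+$DIRS,}$d"
done
echo "$DIRS"
```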

Explorer

Thank you very much for the information, @goga. Now I have a doubt: I modified the configuration as you told me, but it gives me an error. Do I have to format HDFS? I ask because when it was installed, the path was /hadoop/hdfs/data/, and now that I've changed it to /hadoop/hdfs/data/grid/1, Ambari gives an error when I try to start the service.

Contributor

Is it a fresh installation, or do you want to change an existing one?

Explorer

Thanks for helping me, @goga. Let me explain: I have two environments. One is development, where there is already an installation in which the NameNode has the path

Namenode
/hadoop/hdfs/namenode

Datanode
/hadoop/hdfs/data

and where I changed to the new directories
/hadoop/hdfs/data/grip1,/hadoop/hdfs/data2,etc. That's where I get the error, because after I modify the configuration, the service doesn't start.

Then there is the production environment, which has 25 servers with the following structure:

namenode
/hadoop/hdfs/namenode (2 1TB hard drives in raid 10)
/hadoop/hdfs/data/grip1/namenode


datanode
/hadoop/hdfs/data/grip1- /hadoop/hdfs/data/grip10 (10 hard drives with 2TB)

I don't know whether my production and development layouts look correct to you. Thanks for the help.

