
Is it necessary to stop the components on each DataNode (worker machine) when adding new disks?


We want to understand what procedure we should follow before adding new disks on each worker machine.

We have an Ambari cluster (version 2.6) with 3 DataNode machines (worker machines).

We want to add 5 new disks on each worker.

Before adding the new disks, must we stop the components on each worker machine (the components are DataNode, Metrics Monitor, and NodeManager)?

Or should we instead restart all affected services/components in the entire cluster?

Michael-Bronson
1 ACCEPTED SOLUTION


Re: Is it necessary to stop the components on each DataNode (worker machine) when adding new disks?

Rising Star
@Michael Bronson

It depends on whether the components are going to use the new disks or not. If not, they do not need to be restarted. For the services that do need to use the new disks, some of them, such as the HDFS DataNode, support hot-swap, which means you can add disks with the following steps without restarting the DataNode service.

1> Change dfs.datanode.data.dir in hdfs-site.xml to include the new disk locations (e.g., /data/disk2).
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data/disk1,/data/disk2</value>
</property>

2> Run the hdfs CLI to reconfigure the DataNode without a restart:

hdfs dfsadmin -reconfig datanode dn1.hdp.com:9820 start
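
You can then poll the same CLI to check whether the reconfiguration task has finished (a minimal sketch, reusing the dn1.hdp.com:9820 address from the example above; substitute your own DataNode's IPC address):

hdfs dfsadmin -reconfig datanode dn1.hdp.com:9820 status

The status sub-command reports whether the reconfiguration task has completed and which properties were changed.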


Other services might need a restart to use the new disks if Hot-Swap is not supported.
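
For example, as far as I know the NodeManager cannot pick up new local directories at runtime, so to let YARN use the new disks you would typically extend yarn.nodemanager.local-dirs (and usually yarn.nodemanager.log-dirs) in yarn-site.xml and then restart the NodeManager, e.g. from Ambari. A minimal sketch, assuming the same /data/disk1 and /data/disk2 mount points as in the example above:

<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data/disk1/yarn/local,/data/disk2/yarn/local</value>
</property>
<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>/data/disk1/yarn/log,/data/disk2/yarn/log</value>
</property>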
