Setting dfs.datanode.failed.volumes.tolerated parameter to 1
Labels: Apache Hadoop, Apache HBase
Created 05-04-2017 09:00 AM
Hi All, I want to set dfs.datanode.failed.volumes.tolerated to 1 so that the DataNode service keeps running after a single data volume fails.
Once I set this parameter in Ambari, it asks for an HDFS service restart. My question is: if I restart the HDFS service, which other services will be restarted? Will HBase also be restarted? Since I don't have a test environment to check this before performing the activity, I'm posting this question.
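For reference, here is a minimal Java sketch (it is not part of the original post and uses Hadoop's Configuration API, not anything Ambari-specific) that sets the exact property key and value in question and prints it in hdfs-site.xml form, so you can see what Ambari ends up writing for the DataNodes:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;

// Sketch only: build the single property this question is about and
// print it as XML, i.e. the <property> block that lands in hdfs-site.xml.
public class FailedVolumesToleratedExample {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration(false); // start from an empty config
        // Allow each DataNode to keep running after one data volume fails.
        conf.setInt("dfs.datanode.failed.volumes.tolerated", 1);
        conf.writeXml(System.out); // emits the <configuration>/<property> XML
    }
}
```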
Created 05-08-2017 11:15 PM
@Akash S - If you make any change to hdfs-site.xml through Ambari, it will ask you to restart the following services: HDFS, YARN, and MapReduce2. It will not ask for an HBase restart.
Created 05-04-2017 09:20 AM
Changing dfs.datanode.failed.volumes.tolerated to 1 will not require an HBase restart. After making this change, you should see HDFS, MapReduce2, and YARN flagged as requiring a restart.
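If you want to confirm the new value actually took effect after the restart, a small sketch along these lines reads the deployed hdfs-site.xml and prints the effective setting. The /etc/hadoop/conf path is an assumption based on a typical Ambari-managed node; adjust it for your cluster.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

// Sketch only: load the node's deployed HDFS config and print the
// effective value of dfs.datanode.failed.volumes.tolerated.
// The config path below is an assumption for an Ambari-managed node.
public class CheckFailedVolumesTolerated {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
        // Hadoop's default for this property is 0 (no failed volumes tolerated).
        int tolerated = conf.getInt("dfs.datanode.failed.volumes.tolerated", 0);
        System.out.println("dfs.datanode.failed.volumes.tolerated = " + tolerated);
    }
}
```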
