<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: datanode machine (worker) + one of the disks has filesystem errors in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-machine-worker-one-of-the-disks-have-Filesystem/m-p/179495#M77687</link>
    <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/26229/uribarih.html" nodeid="26229"&gt;@Michael Bronson&lt;/A&gt;&lt;/P&gt;&lt;P&gt;1. Is it safe to run &lt;STRONG&gt;e2fsck -y /dev/sdf&lt;/STRONG&gt; in order to repair the /dev/sdf file system?&lt;/P&gt;&lt;P&gt;Datanodes need to be able to read and write to the underlying file system, so if there is an error in the file system, we have no choice but to fix it. That said, HDFS keeps replicas of the same blocks on other machines, so you can put this node into maintenance mode in Ambari and fix the file system errors. There is a possibility of losing some data blocks, so if you have this error on more than one datanode, please fix them one by one, with some time in between. I would run fsck and then reboot the datanode machine to make sure everything is okay before starting work on the next node.&lt;/P&gt;&lt;P&gt;2. Is it necessary to do any other steps after running &lt;STRONG&gt;e2fsck -y /dev/sdf&lt;/STRONG&gt;?&lt;/P&gt;&lt;P&gt;Not from the HDFS point of view. As I said, I would make sure I am doing this datanode by datanode and not in parallel.&lt;/P&gt;</description>
    <pubDate>Fri, 27 Apr 2018 01:19:38 GMT</pubDate>
    <dc:creator>aengineer</dc:creator>
    <dc:date>2018-04-27T01:19:38Z</dc:date>
    <item>
      <title>datanode machine (worker) + one of the disks has filesystem errors</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-machine-worker-one-of-the-disks-have-Filesystem/m-p/179494#M77686</link>
      <description>&lt;P&gt;hi all,&lt;/P&gt;&lt;P&gt;we have an Ambari cluster (HDP version 2.6.0.1).&lt;/P&gt;&lt;P&gt;One of the datanode machines (worker12) has a disk, &lt;STRONG&gt;/dev/sdf&lt;/STRONG&gt;, with file-system errors.&lt;/P&gt;&lt;P&gt;We noticed this from the output of &lt;STRONG&gt;e2fsck -n /dev/sdf&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;1. According to the output from e2fsck, is it safe to run &lt;STRONG&gt;e2fsck -y /dev/sdf&lt;/STRONG&gt; in order to repair the /dev/sdf file system?&lt;/P&gt;&lt;P&gt;2. Is it necessary to do any other steps after running &lt;STRONG&gt;e2fsck -y /dev/sdf&lt;/STRONG&gt;?&lt;/P&gt;&lt;PRE&gt;ls /grid/sdf/hadoop/
hdfs/ yarn/&lt;/PRE&gt;
&lt;PRE&gt;e2fsck -n /dev/sdf
e2fsck 1.42.9 (28-Dec-2013)
Warning!  /dev/sdf is in use.
Warning: skipping journal recovery because doing a read-only filesystem check.
/dev/sdf contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Inodes that were part of a corrupted orphan linked list found.  Fix? no
Inode 176619732 was part of the orphaned inode list.  IGNORED.
Inode 176619733 was part of the orphaned inode list.  IGNORED.
Inode 176619745 was part of the orphaned inode list.  IGNORED.
Inode 176619747 was part of the orphaned inode list.  IGNORED.
Inode 176619751 was part of the orphaned inode list.  IGNORED.
Inode 176619752 was part of the orphaned inode list.  IGNORED.
Inode 176619753 was part of the orphaned inode list.  IGNORED.
Inode 176619756 was part of the orphaned inode list.  IGNORED.
Inode 176619759 was part of the orphaned inode list.  IGNORED.
Inode 176619760 was part of the orphaned inode list.  IGNORED.
Inode 176619762 was part of the orphaned inode list.  IGNORED.
Inode 176619763 was part of the orphaned inode list.  IGNORED.
Inode 176619766 was part of the orphaned inode list.  IGNORED.
Inode 176619767 was part of the orphaned inode list.  IGNORED.
Inode 176619773 was part of the orphaned inode list.  IGNORED.
Inode 176619774 was part of the orphaned inode list.  IGNORED.
Inode 176619775 was part of the orphaned inode list.  IGNORED.
Deleted inode 176619779 has zero dtime.  Fix? no
Inode 176619781 was part of the orphaned inode list.  IGNORED.
Inode 176619786 was part of the orphaned inode list.  IGNORED.
Inode 176619788 was part of the orphaned inode list.  IGNORED.
Inode 176619799 was part of the orphaned inode list.  IGNORED.
Inode 176619800 was part of the orphaned inode list.  IGNORED.
Pass 2: Checking directory structure
Entry '00' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-8248ef4a-78f5-4f43-967d-0007096d0c0b (176554376) has deleted/unused inode 176619732.  Clear? no
Entry '16' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-8248ef4a-78f5-4f43-967d-0007096d0c0b (176554376) has deleted/unused inode 176619733.  Clear? no
Entry '17' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-8248ef4a-78f5-4f43-967d-0007096d0c0b (176554376) has deleted/unused inode 176619745.  Clear? no
Entry '21' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-8248ef4a-78f5-4f43-967d-0007096d0c0b (176554376) has deleted/unused inode 176619747.  Clear? no
Entry '2e' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-8248ef4a-78f5-4f43-967d-0007096d0c0b (176554376) has deleted/unused inode 176619762.  Clear? no
Entry '1f' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-8248ef4a-78f5-4f43-967d-0007096d0c0b (176554376) has deleted/unused inode 176619763.  Clear? no
Entry '19' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-8248ef4a-78f5-4f43-967d-0007096d0c0b (176554376) has deleted/unused inode 176619775.  Clear? no
Entry '35' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-8248ef4a-78f5-4f43-967d-0007096d0c0b (176554376) has deleted/unused inode 176619779.  Clear? no
Entry '09' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-8248ef4a-78f5-4f43-967d-0007096d0c0b (176554376) has deleted/unused inode 176619788.  Clear? no
Entry '34' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-c7b71625-3667-48e4-8843-8ddf3c6cc98c (176554456) has deleted/unused inode 176619752.  Clear? no
Entry '04' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-c7b71625-3667-48e4-8843-8ddf3c6cc98c (176554456) has deleted/unused inode 176619756.  Clear? no
Entry '0f' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-c7b71625-3667-48e4-8843-8ddf3c6cc98c (176554456) has deleted/unused inode 176619799.  Clear? no
Entry '3b' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619751.  Clear? no
Entry '3c' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619753.  Clear? no
Entry '1f' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619759.  Clear? no
Entry '15' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619760.  Clear? no
Entry '14' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619766.  Clear? no
Entry '01' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619767.  Clear? no
Entry '27' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619773.  Clear? no
Entry '35' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619774.  Clear? no
Entry '0c' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619781.  Clear? no
Entry '09' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619786.  Clear? no
Entry '31' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619800.  Clear? no
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Inode 176554376 ref count is 63, should be 54.  Fix? no
Inode 176554456 ref count is 63, should be 60.  Fix? no
Inode 176554463 ref count is 64, should be 53.  Fix? no
Pass 5: Checking group summary information
Block bitmap differences:  -(1412960478--1412960479) -1412960491 -1412960493 -(1412960497--1412960499) -1412960502 -(1412960505--1412960506) -(1412960508--1412960509) -(1412960512--1412960513) -(1412960519--1412960521) -1412960525 -1412960527 -1412960532 -1412960534 -(1412960545--1412960546)
Fix? no
Free blocks count wrong (1918728678, counted=1919005864).
Fix? no
Inode bitmap differences:  -(176619732--176619733) -176619745 -176619747 -(176619751--176619753) -176619756 -(176619759--176619760) -(176619762--176619763) -(176619766--176619767) -(176619773--176619775) -176619779 -176619781 -176619786 -176619788 -(176619799--176619800)
Fix? no
Directories count wrong for group #43120 (245, counted=222).
Fix? no
Free inodes count wrong (243908566, counted=243908282).
Fix? no
/dev/sdf: ********** WARNING: Filesystem still has errors **********
/dev/sdf: 282666/244191232 files (0.3% non-contiguous), 34777968/1953506646 blocks&lt;/PRE&gt;</description>
      <pubDate>Thu, 26 Apr 2018 01:28:39 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-machine-worker-one-of-the-disks-have-Filesystem/m-p/179494#M77686</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2018-04-26T01:28:39Z</dc:date>
    </item>
    <item>
      <title>Re: datanode machine (worker) + one of the disks has filesystem errors</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-machine-worker-one-of-the-disks-have-Filesystem/m-p/179495#M77687</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/26229/uribarih.html" nodeid="26229"&gt;@Michael Bronson&lt;/A&gt;&lt;/P&gt;&lt;P&gt;1. Is it safe to run &lt;STRONG&gt;e2fsck -y /dev/sdf&lt;/STRONG&gt; in order to repair the /dev/sdf file system?&lt;/P&gt;&lt;P&gt;Datanodes need to be able to read and write to the underlying file system, so if there is an error in the file system, we have no choice but to fix it. That said, HDFS keeps replicas of the same blocks on other machines, so you can put this node into maintenance mode in Ambari and fix the file system errors. There is a possibility of losing some data blocks, so if you have this error on more than one datanode, please fix them one by one, with some time in between. I would run fsck and then reboot the datanode machine to make sure everything is okay before starting work on the next node.&lt;/P&gt;&lt;P&gt;2. Is it necessary to do any other steps after running &lt;STRONG&gt;e2fsck -y /dev/sdf&lt;/STRONG&gt;?&lt;/P&gt;&lt;P&gt;Not from the HDFS point of view. As I said, I would make sure I am doing this datanode by datanode and not in parallel.&lt;/P&gt;</description>
      <pubDate>Fri, 27 Apr 2018 01:19:38 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-machine-worker-one-of-the-disks-have-Filesystem/m-p/179495#M77687</guid>
      <dc:creator>aengineer</dc:creator>
      <dc:date>2018-04-27T01:19:38Z</dc:date>
    </item>
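The answer above describes a one-node-at-a-time procedure but does not spell out the shell steps. As a minimal sketch, not from the thread: the device and mount point are the ones named in the question, the Ambari maintenance-mode step has no command here (it is done in the UI), and `run` is a hypothetical dry-run recorder so the commands are printed rather than executed against a real disk.

```shell
#!/bin/sh
# Hedged sketch of the per-datanode repair sequence described above.
# run() only records and prints each command (a dry run); drop it to
# actually execute the steps as root on the affected worker.
DEV=/dev/sdf
MNT=/grid/sdf

steps=""
run() {
  steps="$steps$*;"        # record the command for later inspection
  printf '+ %s\n' "$*"     # print it instead of executing it
}

# 0. In Ambari: put the node into maintenance mode and stop the DataNode role.
run umount "$MNT"          # 1. unmount so the repair is safe
run e2fsck -f -y "$DEV"    # 2. force a full repairing check
run mount "$DEV" "$MNT"    # 3. remount, then reboot and verify
# 4. Only after this node is healthy again, move on to the next datanode.
```

The point of step 4 is the replication argument from the answer: fixing nodes serially keeps enough block replicas alive elsewhere in HDFS while each disk is being repaired.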
    <item>
      <title>Re: datanode machine (worker) + one of the disks has filesystem errors</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-machine-worker-one-of-the-disks-have-Filesystem/m-p/179496#M77688</link>
      <description>&lt;P&gt;OK, I have another question. As you know, we set the sdf partition in the /etc/fstab file, and fstab controls whether fsck runs during reboot. For now, all machines are set without fsck during reboot, so do you recommend setting it to "1" in order to perform fsck during reboot?&lt;/P&gt;</description>
      <pubDate>Sun, 29 Apr 2018 13:19:34 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-machine-worker-one-of-the-disks-have-Filesystem/m-p/179496#M77688</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2018-04-29T13:19:34Z</dc:date>
    </item>
    <item>
      <title>Re: datanode machine (worker) + one of the disks has filesystem errors</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-machine-worker-one-of-the-disks-have-Filesystem/m-p/179497#M77689</link>
      <description>&lt;P&gt;&lt;A href="https://community.hortonworks.com/users/26229/uribarih.html"&gt;&lt;EM&gt;@Michael Bronson&lt;/EM&gt;&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;If this is a production server, it's not a good idea to disable fsck's automatically scheduled checks on boot. &lt;STRONG&gt;fsck&lt;/STRONG&gt; automatically runs on boot after &lt;STRONG&gt;M&lt;/STRONG&gt; mounts or &lt;STRONG&gt;N&lt;/STRONG&gt; days, whichever comes first. &lt;/EM&gt;&lt;I&gt;You can tune this schedule using &lt;STRONG&gt;tune2fs&lt;/STRONG&gt;.&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;I would suggest leaving the automatic check enabled, but using &lt;STRONG&gt;tune2fs&lt;/STRONG&gt; to adjust the check schedule if appropriate, and forcing fsck to run when it is more convenient.&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;When fsck runs, it will reset the mount count to 0 and update the "Last checked" field, effectively rescheduling the next automatic check. If you don't want to run fsck manually but you know it will be convenient on the next scheduled reboot, you can force fsck on the next boot.&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;I&gt;You can make your system run fsck by creating an empty &lt;STRONG&gt;forcefsck&lt;/STRONG&gt; file in the root of your root filesystem, i.e. &lt;STRONG&gt;touch /forcefsck&lt;/STRONG&gt;. Filesystems that have &lt;STRONG&gt;0&lt;/STRONG&gt; or &lt;STRONG&gt;nothing&lt;/STRONG&gt; specified in the sixth column of your &lt;STRONG&gt;/etc/fstab&lt;/STRONG&gt; will not be checked.&lt;/I&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Good &lt;A href="https://www.thomas-krenn.com/de/wiki/FSCK_Best_Practices" target="_blank"&gt;fsck resource&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Hope that helps&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Sun, 29 Apr 2018 14:01:02 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-machine-worker-one-of-the-disks-have-Filesystem/m-p/179497#M77689</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2018-04-29T14:01:02Z</dc:date>
    </item>
    <item>
      <title>Re: datanode machine (worker) + one of the disks has filesystem errors</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-machine-worker-one-of-the-disks-have-Filesystem/m-p/179498#M77690</link>
      <description>&lt;P&gt;Regarding what you said, &lt;I&gt;"but using &lt;STRONG&gt;tune2fs&lt;/STRONG&gt; to adjust the check schedule if appropriate, and forcing fsck to run when it is more convenient"&lt;/I&gt;, can you give an example of this configuration?&lt;/P&gt;</description>
      <pubDate>Sun, 29 Apr 2018 14:56:42 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-machine-worker-one-of-the-disks-have-Filesystem/m-p/179498#M77690</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2018-04-29T14:56:42Z</dc:date>
    </item>
    <item>
      <title>Re: datanode machine (worker) + one of the disks has filesystem errors</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-machine-worker-one-of-the-disks-have-Filesystem/m-p/179499#M77691</link>
      <description>&lt;P&gt;&lt;EM&gt;&lt;A href="https://community.hortonworks.com/users/26229/uribarih.html"&gt;@Michael Bronson&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Here we go&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Force fsck for the root partition&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;The simplest way to force an fsck filesystem check on the root partition, e.g. &lt;STRONG&gt;/dev/sda1&lt;/STRONG&gt;, is to create an empty file called &lt;STRONG&gt;forcefsck&lt;/STRONG&gt; in the partition's root directory.&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;# touch /forcefsck&lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;This empty file will temporarily override any other settings and force fsck to check the filesystem on the next system reboot. Once the filesystem is checked, the forcefsck file is removed, so the next time you reboot the filesystem will NOT be checked again. For a more permanent solution that forces a filesystem check on every reboot, we need to adjust the filesystem's "Maximum mount count" parameter.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;The following command will ensure that filesystem &lt;STRONG&gt;/dev/sdb1&lt;/STRONG&gt; is checked every time your Linux system reboots. Please note that for this to happen, fsck's PASS value for that filesystem in /etc/fstab must be set to a non-zero value.&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;# tune2fs -c 1 /dev/sdb1&lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;Alternatively, we can run fsck after every &lt;STRONG&gt;10 reboots&lt;/STRONG&gt;:&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;# tune2fs -c 10 /dev/sdb1&lt;/PRE&gt;&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Force fsck for all other non-root partitions&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;As opposed to the root partition, creating an empty &lt;STRONG&gt;forcefsck&lt;/STRONG&gt; file will NOT trigger a check of a non-root partition on reboot. The only way to force fsck on all other non-root partitions is to manipulate the filesystem's "Maximum mount count" parameter and the PASS value in the &lt;STRONG&gt;/etc/fstab&lt;/STRONG&gt; configuration file. To force a filesystem check on a non-root partition, change fsck's PASS value in &lt;STRONG&gt;/etc/fstab&lt;/STRONG&gt; to &lt;STRONG&gt;2&lt;/STRONG&gt;.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;For example:&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;UUID=c6e22f63-e63c-40ed-bf9b-bb4a10f2db66 /grid01 ext4 errors=remount-ro 0 2&lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;Then change the "Maximum mount count" parameter to a positive integer, depending on how many times you wish to allow the filesystem to be mounted without being checked. To force fsck on every reboot:&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;# tune2fs -c 1 /dev/sdb1&lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;Alternatively, we can set fsck to check the filesystem after every 5 reboots:&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;# tune2fs -c 5 /dev/sdb1&lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;To disable the mount-count-based check, run:&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;# tune2fs -c 0 /dev/sdb1&lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;OR&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;# tune2fs -c -1 /dev/sdb1&lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;which will set the filesystem's "Maximum mount count" parameter to -1.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Hope that gives you a walkthrough&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Sun, 29 Apr 2018 15:41:19 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-machine-worker-one-of-the-disks-have-Filesystem/m-p/179499#M77691</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2018-04-29T15:41:19Z</dc:date>
    </item>
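The sixth fstab field discussed in the walkthrough above is fs_passno, which boot-time fsck consults: 0 means never check, 1 is reserved for the root filesystem, and 2 is used for all other filesystems. A small sketch that reads the field from the example entry quoted above:

```shell
#!/bin/sh
# The entry below is the example fstab line from the reply above.
entry='UUID=c6e22f63-e63c-40ed-bf9b-bb4a10f2db66 /grid01 ext4 errors=remount-ro 0 2'

# fstab fields: device, mount point, type, options, dump, passno
passno=$(printf '%s\n' "$entry" | awk '{print $6}')

case "$passno" in
  0) meaning="never checked at boot" ;;
  1) meaning="checked first (root filesystem)" ;;
  *) meaning="checked after the root filesystem" ;;
esac
echo "fs_passno=$passno: $meaning"
```

For the example entry this prints `fs_passno=2: checked after the root filesystem`, which is why the walkthrough pairs the `tune2fs -c` setting with a PASS value of 2: with passno 0, the mount-count threshold alone would never trigger a boot-time check.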
    <item>
      <title>Re: datanode machine (worker) + one of the disks has filesystem errors</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-machine-worker-one-of-the-disks-have-Filesystem/m-p/179500#M77692</link>
      <description>&lt;P&gt;&lt;EM&gt;&lt;A href="https://community.hortonworks.com/users/26229/uribarih.html"&gt;@Michael Bronson&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Any updates, so we can close the thread?&lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Apr 2018 04:41:33 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-machine-worker-one-of-the-disks-have-Filesystem/m-p/179500#M77692</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2018-04-30T04:41:33Z</dc:date>
    </item>
    <item>
      <title>Re: datanode machine (worker) + one of the disks has filesystem errors</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-machine-worker-one-of-the-disks-have-Filesystem/m-p/179501#M77693</link>
      <description>&lt;P&gt;About &lt;STRONG&gt;Force fsck for all other non-root partitions&lt;/STRONG&gt;: in that case, as you explained, each reboot will activate the fsck check on that partition. But can we schedule fsck for "all other non-root partitions"? (I mean not only at reboot; we want to run fsck each month, for example, on the non-root partitions.)&lt;/P&gt;</description>
      <pubDate>Mon, 30 Apr 2018 17:01:59 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-machine-worker-one-of-the-disks-have-Filesystem/m-p/179501#M77693</guid>
      <dc:creator>mike_bronson7</dc:creator>
      <dc:date>2018-04-30T17:01:59Z</dc:date>
    </item>
    <item>
      <title>Re: datanode machine (worker) + one of the disks has filesystem errors</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-machine-worker-one-of-the-disks-have-Filesystem/m-p/179502#M77694</link>
      <description>&lt;P&gt;&lt;EM&gt;&lt;A href="https://community.hortonworks.com/users/26229/uribarih.html"&gt;@Michael Bronson&lt;/A&gt;&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Here is how I force my filesystem check to run every 3 months, using the command below.&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;$ sudo tune2fs -i 3m /dev/sda1&lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;Now verify that the newly added filesystem check conditions are set properly.&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;$ sudo tune2fs -l /dev/sda1&lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;The desired output should look like this:&lt;/EM&gt;&lt;/P&gt;&lt;PRE&gt;Last mount time:          n/a
Last write time:          Sat Mar 10 22:29:24 2018
Mount count:              20
Maximum mount count:      30
Last checked:             Fri Mar  2 20:55:08 2018
Check interval:           7776000 (3 months)
Next check after:         Sat Jun  2 21:55:08 2018&lt;/PRE&gt;&lt;P&gt;&lt;EM&gt;Hope that answers your question &lt;/EM&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Apr 2018 17:39:43 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/datanode-machine-worker-one-of-the-disks-have-Filesystem/m-p/179502#M77694</guid>
      <dc:creator>Shelton</dc:creator>
      <dc:date>2018-04-30T17:39:43Z</dc:date>
    </item>
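The tune2fs -l fields shown above can also be compared in a script, for example to see how many mounts remain before the next automatic check. A sketch under one assumption: the sample text below stands in for a real `sudo tune2fs -l /dev/sda1` run, using the mount-count values quoted in the answer.

```shell
#!/bin/sh
# sample is an abridged copy of the tune2fs -l output quoted above;
# on a real system you would capture it with: sudo tune2fs -l /dev/sda1
sample='Mount count:              20
Maximum mount count:      30
Check interval:           7776000 (3 months)'

# Pull the two mount-count fields out of the "key: value" listing.
# /^Mount count/ anchors at line start, so it does not also match "Maximum mount count".
mounts=$(printf '%s\n' "$sample" | awk -F: '/^Mount count/ {gsub(/ /, "", $2); print $2}')
maxmounts=$(printf '%s\n' "$sample" | awk -F: '/^Maximum mount count/ {gsub(/ /, "", $2); print $2}')

remaining=$((maxmounts - mounts))
echo "mounts left before the next automatic fsck: $remaining"   # 30 - 20 = 10 here
```

Note that the time-based schedule set with `tune2fs -i 3m` runs independently of the mount-count threshold; whichever limit is reached first triggers the check, matching the "M mounts or N days, whichever comes first" description earlier in the thread.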
  </channel>
</rss>

