DataNode machine (worker): one of the disks has filesystem errors


hi all,

we have an Ambari cluster (HDP version 2.6.0.1)

one of the DataNode machines (worker12) has a disk, /dev/sdf, with filesystem errors

we noticed this from the output of e2fsck -n /dev/sdf

1. according to the e2fsck output below, is it safe to run e2fsck -y /dev/sdf in order to repair the /dev/sdf filesystem?

2. is it necessary to do any other steps after running e2fsck -y /dev/sdf?

ls /grid/sdf/hadoop/
hdfs/ yarn/
e2fsck -n /dev/sdf
e2fsck 1.42.9 (28-Dec-2013)
Warning!  /dev/sdf is in use.
Warning: skipping journal recovery because doing a read-only filesystem check.
/dev/sdf contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Inodes that were part of a corrupted orphan linked list found.  Fix? no
Inode 176619732 was part of the orphaned inode list.  IGNORED.
Inode 176619733 was part of the orphaned inode list.  IGNORED.
Inode 176619745 was part of the orphaned inode list.  IGNORED.
Inode 176619747 was part of the orphaned inode list.  IGNORED.
Inode 176619751 was part of the orphaned inode list.  IGNORED.
Inode 176619752 was part of the orphaned inode list.  IGNORED.
Inode 176619753 was part of the orphaned inode list.  IGNORED.
Inode 176619756 was part of the orphaned inode list.  IGNORED.
Inode 176619759 was part of the orphaned inode list.  IGNORED.
Inode 176619760 was part of the orphaned inode list.  IGNORED.
Inode 176619762 was part of the orphaned inode list.  IGNORED.
Inode 176619763 was part of the orphaned inode list.  IGNORED.
Inode 176619766 was part of the orphaned inode list.  IGNORED.
Inode 176619767 was part of the orphaned inode list.  IGNORED.
Inode 176619773 was part of the orphaned inode list.  IGNORED.
Inode 176619774 was part of the orphaned inode list.  IGNORED.
Inode 176619775 was part of the orphaned inode list.  IGNORED.
Deleted inode 176619779 has zero dtime.  Fix? no
Inode 176619781 was part of the orphaned inode list.  IGNORED.
Inode 176619786 was part of the orphaned inode list.  IGNORED.
Inode 176619788 was part of the orphaned inode list.  IGNORED.
Inode 176619799 was part of the orphaned inode list.  IGNORED.
Inode 176619800 was part of the orphaned inode list.  IGNORED.
Pass 2: Checking directory structure
Entry '00' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-8248ef4a-78f5-4f43-967d-0007096d0c0b (176554376) has deleted/unused inode 176619732.  Clear? no
Entry '16' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-8248ef4a-78f5-4f43-967d-0007096d0c0b (176554376) has deleted/unused inode 176619733.  Clear? no
Entry '17' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-8248ef4a-78f5-4f43-967d-0007096d0c0b (176554376) has deleted/unused inode 176619745.  Clear? no
Entry '21' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-8248ef4a-78f5-4f43-967d-0007096d0c0b (176554376) has deleted/unused inode 176619747.  Clear? no
Entry '2e' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-8248ef4a-78f5-4f43-967d-0007096d0c0b (176554376) has deleted/unused inode 176619762.  Clear? no
Entry '1f' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-8248ef4a-78f5-4f43-967d-0007096d0c0b (176554376) has deleted/unused inode 176619763.  Clear? no
Entry '19' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-8248ef4a-78f5-4f43-967d-0007096d0c0b (176554376) has deleted/unused inode 176619775.  Clear? no
Entry '35' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-8248ef4a-78f5-4f43-967d-0007096d0c0b (176554376) has deleted/unused inode 176619779.  Clear? no
Entry '09' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-8248ef4a-78f5-4f43-967d-0007096d0c0b (176554376) has deleted/unused inode 176619788.  Clear? no
Entry '34' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-c7b71625-3667-48e4-8843-8ddf3c6cc98c (176554456) has deleted/unused inode 176619752.  Clear? no
Entry '04' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-c7b71625-3667-48e4-8843-8ddf3c6cc98c (176554456) has deleted/unused inode 176619756.  Clear? no
Entry '0f' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-c7b71625-3667-48e4-8843-8ddf3c6cc98c (176554456) has deleted/unused inode 176619799.  Clear? no
Entry '3b' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619751.  Clear? no
Entry '3c' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619753.  Clear? no
Entry '1f' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619759.  Clear? no
Entry '15' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619760.  Clear? no
Entry '14' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619766.  Clear? no
Entry '01' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619767.  Clear? no
Entry '27' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619773.  Clear? no
Entry '35' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619774.  Clear? no
Entry '0c' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619781.  Clear? no
Entry '09' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619786.  Clear? no
Entry '31' in /hadoop/yarn/local/usercache/hive/appcache/application_1523380874382_1834/blockmgr-5a61cab7-acb9-497a-9d7b-e6d6b29235ed (176554463) has deleted/unused inode 176619800.  Clear? no
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Inode 176554376 ref count is 63, should be 54.  Fix? no
Inode 176554456 ref count is 63, should be 60.  Fix? no
Inode 176554463 ref count is 64, should be 53.  Fix? no
Pass 5: Checking group summary information
Block bitmap differences:  -(1412960478--1412960479) -1412960491 -1412960493 -(1412960497--1412960499) -1412960502 -(1412960505--1412960506) -(1412960508--1412960509) -(1412960512--1412960513) -(1412960519--1412960521) -1412960525 -1412960527 -1412960532 -1412960534 -(1412960545--1412960546)
Fix? no
Free blocks count wrong (1918728678, counted=1919005864).
Fix? no
Inode bitmap differences:  -(176619732--176619733) -176619745 -176619747 -(176619751--176619753) -176619756 -(176619759--176619760) -(176619762--176619763) -(176619766--176619767) -(176619773--176619775) -176619779 -176619781 -176619786 -176619788 -(176619799--176619800)
Fix? no
Directories count wrong for group #43120 (245, counted=222).
Fix? no
Free inodes count wrong (243908566, counted=243908282).
Fix? no
/dev/sdf: ********** WARNING: Filesystem still has errors **********
/dev/sdf: 282666/244191232 files (0.3% non-contiguous), 34777968/1953506646 blocks
Michael-Bronson

8 REPLIES

Expert Contributor

@Michael Bronson

1. Is it safe to run e2fsck -y /dev/sdf in order to repair the /dev/sdf filesystem?

DataNodes need to be able to read and write to the underlying file system, so if there is an error in the file system, we have no choice but to fix it. That said, HDFS will have the same blocks on other machines, so you can put this node into maintenance mode in Ambari and fix the file system errors. There is a possibility of losing some data blocks, so if you have this error on more than one DataNode, please do this one by one, with some time in between. I would run fsck and then reboot the DataNode machine to make sure everything is okay before starting work on the next node.
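For what it's worth, here is a minimal sketch of that sequence on worker12. It assumes the disk is mounted at /grid/sdf (as your ls output suggests), that /grid/sdf has an entry in /etc/fstab, and that the DataNode and NodeManager were already stopped via maintenance mode in Ambari; adjust the device and mount point to your layout:

# umount /grid/sdf
# e2fsck -y /dev/sdf
# mount /grid/sdf
# su - hdfs -c "hdfs fsck /"

The umount is needed because e2fsck should never repair a mounted filesystem. The last command is only a post-restart sanity check of HDFS block health; run it after the DataNode is back up and out of maintenance mode.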

2. Is it necessary to do any other steps after running e2fsck -y /dev/sdf?

Not from the HDFS point of view. As I said, I would just make sure to do this DataNode by DataNode and not in parallel.
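If you want to confirm that nothing was lost once the node is back in service, a couple of standard checks (plain HDFS CLI commands run as the hdfs user; nothing cluster-specific is assumed) would be:

# su - hdfs -c "hdfs dfsadmin -report"
# su - hdfs -c "hdfs fsck / | tail -20"

The first shows whether the DataNode has re-registered and reports its capacity; the second summarizes missing, corrupt, and under-replicated blocks.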


OK, I have another question. As you know, we define the sdf partition in the /etc/fstab file, and that entry controls whether fsck runs during reboot. For now, all machines are set without fsck during reboot, so do you recommend setting it to "1" in order to perform fsck during reboot?

Michael-Bronson

Master Mentor

@Michael Bronson

If this is a production server, it's not a good idea to disable fsck's automatically scheduled checks on boot. fsck automatically runs on boot after M mounts or N days, whichever comes first. You can tune this schedule using tune2fs.

I would suggest leaving the automatic check enabled, but using tune2fs to adjust the check schedule if appropriate, and forcing fsck to run when it is more convenient.

When fsck runs, it will reset the mount count to 0 and update the Last checked field, effectively rescheduling the next automatic check. If you don't want to run fsck manually but you know it will be convenient on the next scheduled reboot, you can force fsck on the next boot.
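To see the current values on the disk in question, a read-only query against the device from your output would be:

# tune2fs -l /dev/sdf | grep -iE 'mount count|last checked|check interval'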

You can make your system run fsck by creating an empty 'forcefsck' file in the root of your root filesystem, i.e. touch /forcefsck. Filesystems that have 0 (or nothing) specified in the sixth column of /etc/fstab will not be checked.
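To check what is currently configured for the data disk, something like the following (assuming the mount point /grid/sdf from your ls output) will show its /etc/fstab entry, with the fsck pass value in the sixth column:

# grep /grid/sdf /etc/fstab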

Good fsck resource

Hope that helps


Regarding what you said, "but using tune2fs to adjust the check schedule if appropriate, and forcing fsck to run when it is more convenient", can you give an example of this configuration?

Michael-Bronson

Master Mentor

@Michael Bronson

Here we go

Force fsck for root partition

The simplest way to force an fsck filesystem check on the root partition, e.g. /dev/sda1, is to create an empty file called forcefsck in the partition's root directory.

# touch /forcefsck 

This empty file will temporarily override any other settings and force fsck to check the filesystem on the next system reboot. Once the filesystem is checked, the forcefsck file is removed, so the next time you reboot the filesystem will NOT be checked again. For a more permanent solution that forces a filesystem check on every reboot, we need to manipulate the filesystem's "Maximum mount count" parameter.

The following command will ensure that the filesystem /dev/sdb1 is checked every time your Linux system reboots. Please note that for this to happen, fsck's PASS value for that filesystem in /etc/fstab must be set to a positive value (a sixth column of 0 or empty means the filesystem is skipped).

# tune2fs -c 1 /dev/sdb1 

Alternatively, we can set fsck to run after every 10 reboots:

# tune2fs -c 10 /dev/sdb1 

Force fsck for all other non-root partitions

As opposed to the root partition, creating an empty forcefsck file will NOT trigger a partition check on reboot. The only way to force fsck on all other non-root partitions is to manipulate the filesystem's "Maximum mount count" parameter and the PASS value within the /etc/fstab configuration file. To force a filesystem check on a non-root partition, change fsck's PASS value in /etc/fstab to 2.

For example:

UUID=c6e22f63-e63c-40ed-bf9b-bb4a10f2db66 /grid01 ext4 errors=remount-ro 0 2 

Then change the filesystem's maximum mount count parameter to a positive integer, depending on how many times you wish to allow the filesystem to be mounted without being checked. To force fsck on every reboot:

# tune2fs -c 1 /dev/sdb1 

Alternatively, we can set fsck to check the filesystem after every 5 reboots:

# tune2fs -c 5 /dev/sdb1 

To disable fsck, run:

# tune2fs -c 0 /dev/sdb1 

OR

# tune2fs -c -1 /dev/sdb1 

Either value disables the mount-count-based check; the second form explicitly sets the filesystem's "Maximum mount count" parameter to -1.
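To confirm the change took effect, you could re-read the parameter (a purely read-only verification step):

# tune2fs -l /dev/sdb1 | grep 'Maximum mount count'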

Hope that gives you a walkthrough

Master Mentor

@Michael Bronson

Any updates, so we can close the thread?


About "Force fsck for all other non-root partitions": in that case, as you explained, each reboot will trigger the fsck check on that partition. But can we schedule fsck for all other non-root partitions? (I mean not only at reboot; we want to run fsck every month, for example, on the non-root partitions.)

Michael-Bronson

Master Mentor

@Michael Bronson

Here is how I force my filesystem check to run every 3 months; I use the command below.

$ sudo tune2fs -i 3m /dev/sda1

Now verify that the newly added filesystem check conditions are set properly.

$ sudo tune2fs -l /dev/sda1

The desired output should look like this:

Last mount time:          n/a
Last write time:          Sat Mar 10 22:29:24 2018
Mount count:              20
Maximum mount count:      30
Last checked:             Fri Mar  2 20:55:08 2018
Check interval:           7776000 (3 months)
Next check after:         Sat Jun  2 21:55:08 2018

Hope that answers your question