Many orphaned inodes after resize2fs

tytso at mit.edu
Fri Apr 18 20:20:30 UTC 2014


On Fri, Apr 18, 2014 at 06:56:57PM +0200, Patrik Horník wrote:
> 
> yesterday I experienced the following problem with my ext3 filesystem:
> 
> - I had an ext3 filesystem of a few TB, with a journal. I correctly
> unmounted it and it was marked clean.
> 
> - I then ran fsck.ext3 -f on it and it did not find any problem.
> 
> - After increasing the size of its LVM volume by 1.5 TB, I resized the
> filesystem with resize2fs lvm_volume and it finished without problems.
> 
> - But fsck.ext3 -f immediately after that showed "Inodes that were part of
> a corrupted orphan linked list found." and many thousands of "Inode XXX was
> part of the orphaned inode list." I did not accept the fix. According to
> debugfs, all the inodes I checked from these reported orphaned inodes (I
> checked only some from the beginning of the list of errors) have size 0.

Can you send the output of dumpe2fs -h?  I'm curious how many inodes
you had after the resize, and which file system features were enabled.
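For example (substituting your actual device name, which I'm only
guessing at here), the relevant fields can be pulled out with
something like:

    dumpe2fs -h /dev/your_vg/lvm_volume | egrep -i 'inode count|features'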

If the only file system corruption errors that you saw were about the
corrupted orphan inode list, then things are probably OK.

What this error message means is that there are dtime (deletion time)
values which look like inode numbers rather than a number of seconds
since January 1, 1970.  So if you ran the system with the clock set
incorrectly, so that the time was close to January 1, 1970, and you
deleted a lot of files, you can run into this error --- it's
basically a sanity check that we put in a long time ago to catch
potential file system bugs caused by a corrupted orphan inode list.
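For the curious, the heuristic is conceptually something like the
following sketch (this is a simplified stand-in, not the literal
e2fsprogs source; the struct and helper names are mine):

    /* Sketch of the orphan-list sanity check, modeled loosely on
     * e2fsck's pass 1; simplified, not the actual e2fsprogs code. */
    #include <stdint.h>

    struct inode_info {
        uint16_t links_count;   /* i_links_count on disk */
        uint32_t dtime;         /* i_dtime: deletion time, but reused
                                 * as a "next orphan" inode number
                                 * while the inode sits on the
                                 * orphan list */
    };

    /* A deleted inode's dtime should be a recent Unix timestamp.
     * If instead it is small enough to be a valid inode number, it
     * may never have been cleaned off the orphan list. */
    static int looks_like_stale_orphan(const struct inode_info *inode,
                                       uint32_t inodes_count)
    {
        return inode->links_count == 0 &&
               inode->dtime != 0 &&
               inode->dtime < inodes_count;
    }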

I'm thinking that we should turn off this check if the e2fsck.conf
"broken_system_clock" option is enabled, since if the system has a
busted system clock, this can end up triggering a bunch of scary
warnings.
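For reference, that knob lives in /etc/e2fsck.conf; enabling it would
look something like this:

    [options]
        broken_system_clock = 1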

In any case, when you grew the size of the file system, this also
increased the number of inodes, which means it increased the
likelihood of hitting this false positive.  It's also possible that
if you created your file system with the number of inodes per block
group close to the maximum (assuming an average file size of 4k,
which would be highly wasteful of space, so it's not the default),
you ended up with the total number of inodes exceeding 1.2 or 1.3
billion, at which point legitimate recent deletion times start
looking like valid inode numbers and trigger false positives.  (And
indeed, I should probably put a fix into e2fsprogs so that if a file
system does have more than 1.2 billion inodes, this check is
disabled.)
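To make the arithmetic concrete, here is a small back-of-the-envelope
illustration; the 5.5 TB size is my guess based on "a few TB" plus
the 1.5 TB that was added:

    /* Why a large inode count makes recent dtimes ambiguous. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long fs_bytes = 5500ULL * 1000 * 1000 * 1000;
        unsigned long long bytes_per_inode = 4096;  /* mke2fs -i 4096 */
        unsigned long long inodes = fs_bytes / bytes_per_inode;

        /* Roughly when this thread was written (April 2014). */
        unsigned long long now = 1397853630ULL;

        printf("inode count:   %llu\n", inodes);  /* ~1.34 billion */
        printf("current epoch: %llu\n", now);     /* ~1.40 billion */

        /* Epoch ~1.34e9 is mid-2012, so any file deleted before then
         * has a dtime smaller than the inode count and looks like an
         * inode number to the sanity check. */
        if (now > inodes)
            printf("dtimes below %llu can trigger false positives\n",
                   inodes);
        return 0;
    }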

Cheers,

						- Ted



