
Re: Possible funny with /sbin/fsck

   Date: Thu, 25 Jan 2001 11:13:25 +0000
   From: "Stephen C. Tweedie" <sct redhat com>

   Actually, the rule of thumb reads

     Now, create a journal file.  I don't know how big it should be yet: the
     rules of thumb have yet to be established!  However, try (say) 2MB for a
     small filesystem on a 486; maybe up to 30MB on a big 18G 10krpm Cheetah.

   Note the "up to".  30MB is meant to be an upper limit on a sane
   journal size.  The optimal journal size depends on the filesystem's
   write load, NOT on the size of the disk, so a 300MB journal is way too
   large, and will end up pinning loads of stuff into memory!

The mke2fs/tune2fs programs have an upper limit of 100MB (and a lower
limit of 1MB --- but I suspect a 1MB journal using data journaling won't
be much fun :-).
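(A toy sketch of the limits described above -- this is not the mke2fs
source, and real mke2fs/tune2fs may reject an out-of-range size rather
than silently clamping it; the sketch just clamps for illustration:)

```python
# Illustrative only: model the 1MB..100MB journal-size limits that
# mke2fs/tune2fs enforce.  Whether the tools clamp or reject an
# out-of-range request is glossed over here -- this sketch clamps.

MIN_JOURNAL_MB = 1    # lower limit mentioned above
MAX_JOURNAL_MB = 100  # upper limit mentioned above

def clamp_journal_size(requested_mb):
    """Return the journal size within the mke2fs/tune2fs limits."""
    return max(MIN_JOURNAL_MB, min(requested_mb, MAX_JOURNAL_MB))

print(clamp_journal_size(300))  # the 300MB journal above -> capped at 100
print(clamp_journal_size(30))   # a sane "up to" value passes through
```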

   > to check the non-root filesystems, we actually got a crash from the
   > fsck parent process until we changed to serialized fscking, viz.

   What sort of "crash"?  It sounds as if you ended up doing several
   hundred MB of simultaneous journal replay because of the huge journal
   sizes, and the system got bogged down into seeking all over the place.
   That is going to be _really_ slow, but it should eventually complete.  

   Ted, one thing I can't recall here offhand --- will e2fsck buffer all
   the IO channel writes until flush, or will it write as it goes?  If
   it's buffering things, then e2fsck journal replay could end up using a
   lot of swap, and that's maybe something we want to fix.

It has an 8-block write-behind cache --- which can be turned into a
writethrough cache by setting the CHANNEL_FLAGS_WRITETHROUGH flag.
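(A minimal sketch of the scheme just described -- buffer a handful of
dirty blocks, spill when the cache fills, or write straight through
when the flag is set.  Names and structure are illustrative, not
e2fsck's actual io_channel code:)

```python
# Illustrative write-behind cache: holds up to 8 dirty blocks, with a
# writethrough flag that bypasses the buffering entirely.

WRITE_BEHIND_BLOCKS = 8

class BlockChannel:
    def __init__(self, device, writethrough=False):
        self.device = device          # dict: block number -> data
        self.writethrough = writethrough
        self.cache = {}               # dirty blocks not yet on "disk"

    def write_block(self, blkno, data):
        if self.writethrough:
            self.device[blkno] = data  # no buffering at all
            return
        self.cache[blkno] = data
        if len(self.cache) > WRITE_BEHIND_BLOCKS:
            self.flush()               # spill once the cache overflows

    def flush(self):
        self.device.update(self.cache)
        self.cache.clear()

disk = {}
chan = BlockChannel(disk)
for i in range(8):
    chan.write_block(i, b"x")
# all 8 writes still sit in the cache; nothing has hit "disk" yet
chan.flush()
print(len(disk))  # prints 8
```

With buffering like this, replaying a huge journal can hold a lot of
dirty data in memory until flush time, which is the concern raised in
the quoted question above.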

Note that we've gotten confirmation that the VM bug which fails to do
write throttling when mke2fs is writing out all of the inode pages on
large filesystems (replicated with mke2fs creating a ~30GB filesystem
on a system with only 128 megs of memory) is still present in
2.2.19-pre-latest --- the OOM killer killed mke2fs, which I suppose is
a primitive form of write throttling.  :-)  So it's still broken.
Leonard's trying to get it replicated under controlled circumstances so
we can get better debugging information.

It may be that this is what you're seeing..... 

							- Ted
