[Linux-cluster] GFS2 and fragmentation

Steven Whitehouse swhiteho at redhat.com
Thu Aug 9 10:55:52 UTC 2012


Hi,

On Wed, 2012-08-08 at 09:28 -0700, Scooter Morris wrote:
> Bob,
>      Thanks for the information and the pointer, that really helps.
> 
> -- scooter
> 
> On 08/08/2012 05:50 AM, Bob Peterson wrote:
> > ----- Original Message -----
> > | Hi All,
> > |      We have a Red Hat 6.2 cluster with 4 nodes using GFS2 for
> > | shared filesystems.  One of the filesystems we need to share is
> > | /var/spool/mail.  In general, with the recent upgrades and
> > | improvements to GFS2, it has been working really, really well.
> > | We're starting to see some significant fragmentation on files in
> > | the filesystem, though, even after recreating the filesystem
> > | after the RHEL 6 upgrade.  My understanding was that there were
> > | fixes in RHEL 6 that made defragmentation un-(or less?)
> > | necessary, but we're seeing a disturbing increase in the amount
> > | of fragmentation since the upgrade.  By the way, we're seeing
> > | numbers in the range of 4,000-6,000 extents/GB for some of these
> > | files, which seems a bit large.
> > |      So, I've got two questions:
> > |
> > |      1. At what point should we be worried about the number of
> > |      extents?
> > |      2. Are there plans for a defragmentation tool?
> > |
The question is how large the files are. If each file is not very big,
and only takes up a few disk blocks at most (not unusual for an email
server), then I'd hope that each individual file would be mostly a
single extent.
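
One way to check that is to count extents per file with filefrag from
e2fsprogs. Here is a rough sketch in Python (untested; it assumes
filefrag is installed and that its "N extents found" output format
holds, and the 100-extent threshold is just an arbitrary example):

#!/usr/bin/env python
# Walk a tree and report files whose extent count exceeds a threshold.
# Relies on the "filefrag" tool from e2fsprogs; the output parsing and
# the threshold below are assumptions, not GFS2-specific values.
import os
import re
import subprocess

def extent_count(path):
    out = subprocess.check_output(["filefrag", path]).decode()
    m = re.search(r"(\d+) extents? found", out)
    return int(m.group(1)) if m else 0

def report(root, threshold=100):
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):
                n = extent_count(path)
                mb = os.path.getsize(path) / (1024.0 * 1024.0)
                if n >= threshold:
                    print("%6d extents  %8.1f MB  %s" % (n, mb, path))

report("/var/spool/mail")

Dividing the extent count by the file size in GB would give you the
same extents/GB figure you quoted, so you can see which files are
actually contributing to it.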

Over time, though, if files are being randomly deleted and other files
created (and particularly if the filesystem is nearly full), then
fragmentation will increase. It is a good plan not to let filesystems
get too full when fragmentation is a concern.
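
As a trivial example of keeping an eye on that, os.statvfs reports the
fullness of a mount directly; a minimal sketch (the 80% figure is an
arbitrary example, not a GFS2-specific recommendation):

# Warn when a filesystem crosses a fullness threshold.
import os

def pct_full(mountpoint):
    st = os.statvfs(mountpoint)
    return 100.0 * (st.f_blocks - st.f_bfree) / st.f_blocks

mount = "/var/spool/mail"   # example mount point
if pct_full(mount) > 80.0:  # arbitrary threshold
    print("%s is %.1f%% full; the allocator has little room to keep"
          " new files contiguous" % (mount, pct_full(mount)))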

It also depends upon how the files were created. Files which are
written in a streaming fashion are much more likely to be allocated
contiguously on disk, and thus less fragmented. Files which start off
as large sparse files and then have randomly placed writes to them are
likely to be fragmented on almost any filesystem. So application
behaviour can be a factor in this too.
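
To illustrate the two patterns, here is a hypothetical sketch: the
first file declares its final size up front via os.posix_fallocate
(Python 3.3+, and the filesystem must support fallocate), so the
allocator can reserve the space in one request; the second is a sparse
file written at scattered offsets, where each write may grab blocks
from wherever happens to be free at the time:

import os

SIZE = 64 * 1024 * 1024  # 64 MB example file

# Preallocated up front: one allocation request for the whole file.
fd = os.open("preallocated.dat", os.O_CREAT | os.O_WRONLY, 0o644)
os.posix_fallocate(fd, 0, SIZE)
os.close(fd)

# Sparse file with randomly placed writes: blocks are allocated
# piecemeal, in whatever order the writes happen to land.
fd = os.open("sparse.dat", os.O_CREAT | os.O_WRONLY, 0o644)
os.ftruncate(fd, SIZE)
for mb in (48, 7, 31, 2, 60):
    os.lseek(fd, mb * 1024 * 1024, os.SEEK_SET)
    os.write(fd, b"x" * 4096)
os.close(fd)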

Steve.
