[OT] block allocation algorithm [Was] Re: heavily fragmented file system.. How to defrag it on-line??

Andreas Dilger adilger at clusterfs.com
Tue Mar 9 08:02:48 UTC 2004


On Mar 09, 2004  15:46 +0800, Isaac Claymore wrote:
> I've got a workload where several clients tend to write to separate files
> under the same dir simultaneously, resulting in heavily fragmented files.
> And, even worse, those files are rarely read simultaneously, so read
> performance degrades quite a lot.
> 
> I'm wondering whether there's any feature that helps alleviate
> fragmentation in such workloads. Does writing to different dirs (of the
> same filesystem) help?

Very much yes.  Files allocated in different directories will get blocks
from different parts of the filesystem (if available), so they should be
less fragmented.  In 2.6 there is a heuristic whereby files opened by
different processes allocate from different parts of a group, even within
the same directory, but that only really helps if the files themselves
aren't too large (i.e. under 8MB or so).
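To make the effect concrete, here is a toy sketch (in Python, not the actual ext3 allocator code) of the idea behind the per-directory heuristic: each directory is anchored to its own block group, so two writers appending to files in different directories don't interleave their blocks even when their writes are interleaved in time. The group size, the dir-to-group mapping, and all names here are made up for illustration.

```python
# Toy model of per-directory block-group allocation (NOT the real ext3
# allocator): each directory gets its own block group, so concurrent
# writers in different dirs don't interleave blocks.

BLOCKS_PER_GROUP = 8192  # hypothetical group size


class ToyAllocator:
    def __init__(self, n_groups):
        self.n_groups = n_groups
        # next free block within each group
        self.next_free = [g * BLOCKS_PER_GROUP for g in range(n_groups)]

    def group_for_dir(self, dir_id):
        # stand-in for the heuristic that spreads directories across groups
        return dir_id % self.n_groups

    def alloc(self, dir_id, n_blocks):
        g = self.group_for_dir(dir_id)
        start = self.next_free[g]
        self.next_free[g] += n_blocks
        return list(range(start, start + n_blocks))


def extent_count(blocks):
    """Number of contiguous runs in a sorted block list (1 = unfragmented)."""
    runs = 1
    for prev, cur in zip(blocks, blocks[1:]):
        if cur != prev + 1:
            runs += 1
    return runs


alloc = ToyAllocator(n_groups=4)

# Two clients appending to files in different dirs, interleaved in time:
file_a, file_b = [], []
for _ in range(3):
    file_a += alloc.alloc(dir_id=0, n_blocks=4)
    file_b += alloc.alloc(dir_id=1, n_blocks=4)

# Each file still ends up as a single contiguous extent:
print(extent_count(file_a), extent_count(file_b))  # 1 1
```

Had both files allocated from the same group (e.g. same directory, pre-2.6, no per-process heuristic), the interleaved writes would have produced 3 extents per file in this model. On a real system you can observe per-file fragmentation with `filefrag` from e2fsprogs.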

Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://www-mddsp.enel.ucalgary.ca/People/adilger/
