[OT] block allocation algorithm [Was] Re: heavily fragmented file system.. How to defrag it on-line??
clay at exavio.com.cn
Tue Mar 9 07:46:43 UTC 2004
On Wed, Mar 03, 2004 at 09:17:07PM -0500, Theodore Ts'o wrote:
> On Wed, Mar 03, 2004 at 03:01:35PM -0800, Guolin Cheng wrote:
> > I've got machines running continuously for a long time, but the underlying ext3 file systems become quite heavily fragmented (94% non-contiguous).
> Note that non-contiguous does not necessarily mean fragmented. Files
> that are larger than a block group will be non-contiguous by
> definition. On the other hand if you have more than one file
> simultaneously being written to in a directory, then yes the files
> will certainly get fragmented.
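The interleaving Ted describes can be modeled with a toy "next free block" allocator (a hypothetical sketch for illustration only, not ext3's actual algorithm): when two writers append in turns and each simply takes the next free block, their files end up with alternating blocks on disk.

```python
# Toy model of a naive "next free block" allocator (illustration only;
# this is NOT ext3's real allocator, which is far more sophisticated).
def allocate_interleaved(n_chunks):
    """Two writers, A and B, append one block at a time in turns."""
    disk = []             # disk[i] = owner of block i
    for _ in range(n_chunks):
        disk.append("A")  # writer A takes the next free block...
        disk.append("B")  # ...then writer B takes the one right after it
    return disk

def extents(disk, owner):
    """Count contiguous runs (extents) of blocks belonging to `owner`."""
    return sum(1 for i, o in enumerate(disk)
               if o == owner and (i == 0 or disk[i - 1] != owner))

disk = allocate_interleaved(4)
print(extents(disk, "A"))  # 4: file A is split into 4 one-block extents
```

Reading either file back sequentially now forces a seek between every block, which is exactly the read-performance degradation described below.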
I've got a workload where several clients write to separate files under the
same dir simultaneously, resulting in heavily fragmented files. Even worse,
those files are rarely read simultaneously, so read performance degrades
quite a lot.
I'm wondering whether there's any feature that helps alleviate
fragmentation in such workloads. Does writing to different dirs (on the same
filesystem) help?
If ext2/3 can't do much for such workloads, do you know of any other
filesystem featuring a block allocation algorithm that somewhat
differentiates among simultaneous writers?
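One family of answers reserves a private, contiguous window of blocks per open file, so concurrent appenders never land blocks between each other's; XFS's allocation groups work in this spirit, and ext3 later gained per-file reservation windows in the 2.6 kernels. A toy sketch of the idea (hypothetical, not any real filesystem's code):

```python
# Toy per-writer block reservation: each writer gets its own contiguous
# window, so interleaved appends stay contiguous. Illustration only.
WINDOW = 8  # blocks reserved per writer (arbitrary for this sketch)

class Disk:
    def __init__(self, n_blocks):
        self.blocks = [None] * n_blocks  # blocks[i] = owner of block i
        self.next_window = 0

    def reserve(self):
        """Hand out the start of the next free contiguous window."""
        start = self.next_window
        self.next_window += WINDOW
        return start

class Writer:
    def __init__(self, disk, name):
        self.disk, self.name = disk, name
        self.pos = disk.reserve()        # private window, claimed up front

    def append_block(self):
        self.disk.blocks[self.pos] = self.name
        self.pos += 1

def extents(blocks, owner):
    """Count contiguous runs (extents) of blocks belonging to `owner`."""
    return sum(1 for i, o in enumerate(blocks)
               if o == owner and (i == 0 or blocks[i - 1] != owner))

d = Disk(32)
a, b = Writer(d, "A"), Writer(d, "B")
for _ in range(4):        # same interleaved appends as in the workload above
    a.append_block()
    b.append_block()
print(extents(d.blocks, "A"))  # 1: each file stays in a single extent
```

The trade-off is some transiently wasted space inside partially filled windows, in exchange for contiguous files despite interleaved writers.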
Thanks a lot for any hint/suggestion.
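One userspace mitigation worth mentioning (my own suggestion, not an ext3-specific feature): if a writer knows roughly how much it will write, preallocating the file up front lets the filesystem grab a contiguous run of blocks before the other writers can interleave theirs. On Linux this can be done with posix_fallocate, shown here via Python's os module:

```python
import os
import tempfile

# Hypothetical mitigation sketch: reserve the file's blocks up front so
# the allocator can pick one contiguous run before competing writers
# land their blocks in between.
SIZE = 1 << 20                        # expected final size: 1 MiB
fd, path = tempfile.mkstemp()
try:
    os.posix_fallocate(fd, 0, SIZE)   # allocate all blocks in one call
    print(os.fstat(fd).st_size)       # file now spans SIZE bytes
finally:
    os.close(fd)
    os.unlink(path)
```

Whether the allocated run is actually contiguous still depends on the filesystem's free-space layout, but a single large allocation gives the allocator a much better chance than many small interleaved ones.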
> Are you seeing a sufficient read-performance degradation? If not, it may not
> be worth bothering to defrag the filesystem.
> > Anyone have any ideas on defraging ext3 file systems on-line? Thanks a lot.
> There are no on-line defrag tools right now, sorry.
> - Ted
() ascii ribbon campaign - against html e-mail
/\ - against microsoft attachments