[OT] block allocation algorithm [Was] Re: heavily fragmented file system.. How to defrag it on-line??

Andrew Morton akpm at osdl.org
Wed Mar 10 01:16:31 UTC 2004


Isaac Claymore <clay at exavio.com.cn> wrote:
>
> I've got a workload where several clients tend to write to separate files
> under the same dir simultaneously, resulting in heavily fragmented files.
> And, even worse, those files are rarely read simultaneously, so read
> performance degrades quite a lot.

We really, really suck at this.  I have a little hack here which provides
an ioctl with which you can instantiate blocks outside the end-of-file, so
each time you've written 128M you go into the filesystem and say "reserve me
another 128M".  This causes the 128M chunks to be laid out very nicely
indeed.
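
Purely to illustrate the calling pattern (the ioctl name, request number and
helper below are invented for this sketch - the hack itself isn't in any
released tree), the userspace side looks roughly like this: every time
another 128M has gone out, ask the filesystem to instantiate the next 128M
of blocks past end-of-file so they come out contiguous.

/* Sketch only: EXT3_IOC_PREALLOC and its byte-count argument are
 * hypothetical, standing in for the out-of-tree ioctl described above. */
#include <sys/ioctl.h>
#include <unistd.h>

#define CHUNK (128ULL * 1024 * 1024)
#define EXT3_IOC_PREALLOC  _IOW('f', 42, unsigned long long)  /* invented */

/* Append len bytes to fd; whenever the running total sits on a 128M
 * boundary, reserve the next 128M beyond end-of-file first. */
static ssize_t append_with_prealloc(int fd, const void *buf, size_t len,
                                    unsigned long long *total)
{
        ssize_t n;

        if (*total % CHUNK == 0) {
                unsigned long long want = CHUNK;
                ioctl(fd, EXT3_IOC_PREALLOC, &want);  /* hypothetical call */
        }
        n = write(fd, buf, len);
        if (n > 0)
                *total += n;
        return n;
}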

The ioctl is, however, wildly insecure - it's trivial to use it to read
uninitialised disk blocks.  But we happen to not care about that.

It is, however, a potential way forward to fix this problem.  Do the growth
automatically somehow, fix the security problem, stick the inodes on the
orphan list so that they get trimmed back to the correct size during
recovery, and there we have it.

We're a bit short on bodies to do it at present though.

One thing you could do, which _may_ suit, is to write the files out
beforehand and change your app to perform overwrites instead of appends.
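
A minimal sketch of that, assuming you know each file's final size up front
(the names here are only illustrative): write the whole file out once so its
blocks get allocated in one go, then have the clients overwrite in place
rather than append.

/* Pre-write a file to its final size with zeroes so the blocks are
 * allocated up front; the app then overwrites them in place. */
#include <fcntl.h>
#include <unistd.h>

static int prewrite_file(const char *path, off_t size)
{
        static char zeroes[1 << 20];    /* 1M of zeroes (zero-initialised) */
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        off_t left = size;

        if (fd < 0)
                return -1;
        while (left > 0) {
                size_t chunk = left < (off_t)sizeof(zeroes)
                                ? (size_t)left : sizeof(zeroes);
                if (write(fd, zeroes, chunk) != (ssize_t)chunk) {
                        close(fd);
                        return -1;
                }
                left -= chunk;
        }
        return close(fd);
}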

Or just change your app to buffer more data: write 16MB at a time.
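
That can be as simple as an accumulation buffer in front of write() - again
only a sketch with made-up names - so the filesystem sees one big allocation
request instead of hundreds of small ones:

/* Accumulate up to 16M in userspace and push it out in one write(). */
#include <string.h>
#include <unistd.h>

#define BUF_SIZE (16 * 1024 * 1024)

struct outbuf {
        int fd;
        size_t used;
        char *data;             /* caller allocates BUF_SIZE bytes */
};

static int outbuf_flush(struct outbuf *b)
{
        if (b->used && write(b->fd, b->data, b->used) != (ssize_t)b->used)
                return -1;
        b->used = 0;
        return 0;
}

/* Assumes len <= BUF_SIZE; larger writes should just go straight out. */
static int outbuf_append(struct outbuf *b, const void *p, size_t len)
{
        if (b->used + len > BUF_SIZE && outbuf_flush(b) != 0)
                return -1;
        memcpy(b->data + b->used, p, len);
        b->used += len;
        return 0;
}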




