question about exact behaviour with data=ordered.

Neil Brown neilb at cse.unsw.edu.au
Fri Nov 3 01:09:09 UTC 2006


Suppose I have a large machine with 24 gig of memory, and I write a
2 gig file.  That is below the 10% background dirty threshold, so it
won't trigger any background writeout.
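(For the record: 2 gig of 24 gig is about 8.3%, under the default 10%
of vm.dirty_background_ratio -- assuming the defaults haven't been
changed.)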
Suppose that a regular journal commit is then triggered.  

Am I correct in thinking this will flush out the full 2 gig, causing
the commit to take about 30 seconds if the drive sustains
60 meg/second?
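(My arithmetic: 2048 meg / 60 meg/sec ~= 34 seconds, assuming one
sequential pass at full disk speed.)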

If so, what other operations will be blocked while the commit happens?
I assume synchronous updates (rm, chmod, mkdir, etc.) will block?
Is it safe to assume that normal async writes won't block?
What about writes that extend the file and so change the file size?
What about atime updates?  Could they ever block for the full 30
seconds?
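
Something like the following crude probe would show the stalls
empirically (a sketch only; the file names, sizes, and the probe
interval are arbitrary choices of mine): a child dirties ~2 gig of
page cache while the parent times small metadata operations and
reports the worst latency it sees.

    /* A sketch only: child dirties ~2 gig of page cache while the
     * parent times mkdir/rmdir pairs; names, sizes and the probe
     * interval are arbitrary. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <sys/time.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    static double now(void)
    {
            struct timeval tv;
            gettimeofday(&tv, NULL);
            return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void)
    {
            static char buf[1 << 20];       /* 1 meg write buffer */
            pid_t pid = fork();

            if (pid == 0) {         /* child: dirty ~2 gig of page cache */
                    int fd = open("bigfile", O_WRONLY|O_CREAT|O_TRUNC, 0644);
                    int i;
                    if (fd < 0) { perror("open"); _exit(1); }
                    memset(buf, 0, sizeof(buf));
                    for (i = 0; i < 2048; i++)
                            if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
                                    break;
                    close(fd);
                    _exit(0);
            }

            /* parent: time mkdir/rmdir pairs, report the worst stall */
            {
                    double worst = 0.0;
                    int i;
                    for (i = 0; i < 300; i++) {
                            double t0 = now(), dt;
                            mkdir("probe.d", 0755);
                            rmdir("probe.d");
                            dt = now() - t0;
                            if (dt > worst) {
                                    worst = dt;
                                    printf("worst so far: %.3f sec\n", worst);
                            }
                            usleep(100000); /* probe every 0.1 sec */
                    }
                    wait(NULL);
            }
            return 0;
    }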


Supposing lots of operations would block for 30 seconds, is there
anything that could be done to improve this?  Would it be possible
(easy?) to modify the commit process to flush out 'ordered' data
without locking the journal?
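
Failing a kernel change, one obvious userspace workaround (an untested
sketch of mine; the 64 meg chunk size is an arbitrary guess, not a
tuned value) would be to have the writer fdatasync() periodically, so
that no single commit ever finds gigabytes of ordered data queued:

    /* Untested sketch: write 2 gig but fdatasync() every 64 meg so a
     * journal commit never finds gigabytes of ordered data queued. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>

    int main(void)
    {
            static char buf[1 << 20];
            long since_sync = 0, chunk = 64L << 20; /* arbitrary */
            int fd, i;

            fd = open("bigfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }
            memset(buf, 0, sizeof(buf));

            for (i = 0; i < 2048; i++) {            /* 2048 x 1 meg */
                    if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
                            perror("write");
                            break;
                    }
                    since_sync += sizeof(buf);
                    if (since_sync >= chunk) {      /* bound the dirty set */
                            fdatasync(fd);
                            since_sync = 0;
                    }
            }
            fdatasync(fd);
            close(fd);
            return 0;
    }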


As you might guess, we have a situation where writing large files on a
large-memory machine is causing occasional bad filesystem delays, and
I'm trying to understand what is going on.

Thanks for any input,
NeilBrown



