
A question about the exact behaviour of data=ordered mode.

Suppose I have a large machine with 24GiB of memory, and I write a 2GiB
file.  That is roughly 8% of memory, below the 10%
background_dirty_threshold, so it won't trigger any background writeout.
Suppose that a regular journal commit is then triggered.  
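To sanity-check the threshold claim above, here is the arithmetic I'm relying on; I'm assuming the 10% figure comes from vm.dirty_background_ratio at its default value:

```python
# Dirty-page arithmetic for the scenario above.  The 10% figure is
# assumed to come from vm.dirty_background_ratio (default 10).
total_mem = 24 * 1024**3      # 24GiB of RAM
dirty = 2 * 1024**3           # the 2GiB file, all dirty in the page cache
background_ratio = 0.10

fraction = dirty / total_mem
print(f"dirty fraction: {fraction:.1%}")            # 8.3%
print("background writeout:", fraction >= background_ratio)  # False
```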

Am I correct in thinking this will flush out the full 2GiB, causing
the commit to take about 30 seconds if the drive sustains around 70MB/s?
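The 30-second figure is a back-of-the-envelope estimate; the sustained write rate is my assumption, not a measured number:

```python
# Rough commit-time estimate for flushing 2GiB of ordered data,
# assuming a sustained drive write rate of ~70MiB/s (hypothetical).
dirty_bytes = 2 * 1024**3
sustained_rate = 70 * 1024**2   # bytes per second, assumed
commit_seconds = dirty_bytes / sustained_rate
print(f"commit takes ~{commit_seconds:.0f}s")   # ~29s
```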

If so, what other operations will be blocked while the commit happens?
I assume synchronous metadata updates (rm, chmod, mkdir, etc.) will block?
Is it safe to assume that normal async writes won't block?
What about if they extend the file and so change the file size?
What about atime updates? Could they ever block for the full 30 seconds?

Supposing lots of stuff would block for 30 seconds, is there anything
that could be done to improve this?  Would it be possible (easy?) to
modify the commit process to flush out 'ordered' data without locking
the journal?

As you might guess, we have a situation where writing large files on a
large-memory machine is causing occasional bad fs delays and I'm
trying to understand what is going on.

Thanks for any input,
