[dm-devel] [PATCH 03/11] block: deprecate barrier and replace blk_queue_ordered() with blk_queue_flush()

Jeremy Fitzhardinge jeremy at goop.org
Mon Aug 16 20:38:45 UTC 2010


 On 08/14/2010 02:42 AM, hch at lst.de wrote:
> On Fri, Aug 13, 2010 at 06:07:13PM -0700, Jeremy Fitzhardinge wrote:
>>  On 08/12/2010 05:41 AM, Tejun Heo wrote:
>>> Barrier is deemed too heavy and will soon be replaced by FLUSH/FUA
>>> requests.  Deprecate barrier.  All REQ_HARDBARRIERs are failed with
>>> -EOPNOTSUPP and blk_queue_ordered() is replaced with simpler
>>> blk_queue_flush().
>>>
>>> blk_queue_flush() takes combinations of REQ_FLUSH and FUA.  If a
>>> device has write cache and can flush it, it should set REQ_FLUSH.  If
>>> the device can handle FUA writes, it should also set REQ_FUA.
>> Christoph, do these two patches (parts 2 and 3) make xen-blkfront
>> correct WRT barriers/flushing as far as you're concerned?
> If all your backends handle a zero-length BLKIF_OP_WRITE_BARRIER request
> it is a fully correct, but rather suboptimal implementation.  To get
> all the benefit of the new non-draining barriers you'll need a new
> BLKIF_OP_FLUSH request that only flushes the cache, but has no
> ordering side effects.

Is the effect of the flush that, once complete, any previously completed
write is guaranteed to be on durable storage, but it is not guaranteed
to have any effect on pending writes?  If so, does it flush writes that
were completed before the flush is issued, or writes that complete
before the flush completes?

>   Note that
> "quite suboptimal" here means not as good as the new barrier
> implementation, but it shouldn't be noticeably worse than the old one
> for Xen.

OK, thanks.  We can do some testing on that and see if there's a benefit
to adding a flush operation with the appropriate semantics.

    J
