[dm-devel] dd to a striped device with 9 disks gets much lower throughput when oflag=direct used

Zdenek Kabelac zkabelac at redhat.com
Fri Jan 27 15:16:12 UTC 2012


On 27.1.2012 16:03, Richard Sharpe wrote:
> On Fri, Jan 27, 2012 at 12:52 AM, Christoph Hellwig<hch at infradead.org>  wrote:
>> On Thu, Jan 26, 2012 at 05:06:42PM -0800, Richard Sharpe wrote:
>>> Why do I see such a big performance difference? Does writing to the
>>> device also use the page cache if I don't specify DIRECT IO?
>>
>> Yes.  Try adding conv=fdatasync to both versions to get more
>> realistic results.
>
> Thank you for that advice. I am comparing btrfs vs rolling my own
> thing using the new dm thin-provisioning approach to get something
> with resilient metadata, but I need to support two different types of
> IO, one that uses directio and one that can take advantage of the page
> cache.
>
> So far, btrfs gives me around 800MB/s with a similar setup (can't get
> exactly the same setup) without DIRECTIO and 450MB/s with DIRECTIO. A
> dm striped setup is giving me about 10% better throughput than btrfs
> without DIRECTIO, but only about 45% of its throughput with DIRECTIO.
>
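For reference, the comparison Christoph suggests amounts to something like
the following (device name and sizes are only illustrative):

  # buffered write; fdatasync before dd exits, so the cache flush is
  # included in the measured time
  dd if=/dev/zero of=/dev/mapper/thinvol bs=1M count=4096 conv=fdatasync

  # direct I/O, bypassing the page cache entirely
  dd if=/dev/zero of=/dev/mapper/thinvol bs=1M count=4096 oflag=direct

Without conv=fdatasync the buffered run mostly measures how fast dd can
dirty the page cache, which is why it looks so much faster than the
O_DIRECT run.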

You've mentioned you are using a thinp device with striping - do you have
the stripes properly aligned to the data-block-size of the thinp device?
(I think 9 disks are probably quite hard to align on a 3.2 kernel, since
the data block size needs to be a power of 2 - I think 3.3 will have this
relaxed to a page-size boundary.)

Zdenek
