[dm-devel] dd to a striped device with 9 disks gets much lower throughput when oflag=direct used
Hannes Reinecke
hare at suse.de
Fri Jan 27 06:54:47 UTC 2012
On 01/27/2012 02:06 AM, Richard Sharpe wrote:
> Hi,
>
> Perhaps I am doing something stupid, but I would like to understand
> why there is a difference in the following situation.
>
> I have defined a stripe device thusly:
>
> "echo 0 17560535040 striped 9 8 /dev/sdd 0 /dev/sde 0 /dev/sdf 0
> /dev/sdg 0 /dev/sdh 0 /dev/sdi 0 /dev/sdj 0 /dev/sdk 0 /dev/sdl 0 |
> dmsetup create stripe_dev"
>
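(For anyone reading along: the table breaks down as below. The
annotation is mine; the dm-stripe target takes the chunk size in
512-byte sectors, so 8 sectors is a 4 KiB chunk.)

  # <start> <length in sectors> striped <#stripes> <chunk size> [<dev> <offset>]...
  #    0     17560535040        striped     9       8 (= 4 KiB)  /dev/sdd 0 ... /dev/sdl 0
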
> Then I did the following:
>
> dd if=/dev/zero of=/dev/mapper/stripe_dev bs=262144 count=1000000
>
> and I got 880 MB/s
>
> However, when I changed that command to:
>
> dd if=/dev/zero of=/dev/mapper/stripe_dev bs=262144 count=1000000
> oflag=direct
>
> I get 210 MB/s reliably.
>
> The system in question is a 16-core (probably two-CPU) Intel Xeon
> E5620 @ 2.40GHz with 64GB of memory and 12 7200 RPM SATA drives
> connected to an LSI SAS controller but set up as a JBOD of 12 drives.
>
> Why do I see such a big performance difference? Does writing to the
> device also use the page cache if I don't specify DIRECT IO?
>
Yes. All I/O issued through read/write calls goes via the page cache.
The only way to circumvent this is to use direct I/O (O_DIRECT).
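A sketch of how I would compare the two more fairly (untested; the
block sizes and counts below are only for illustration):

  # Buffered writes complete as soon as the data sits in the page
  # cache, so dd's figure partly measures caching. conv=fsync makes
  # dd flush and wait for the device before reporting a rate.
  dd if=/dev/zero of=/dev/mapper/stripe_dev bs=262144 count=100000 conv=fsync

  # With oflag=direct each write() is submitted and waited for before
  # the next one starts, so only one 256 KiB request is in flight
  # across all nine disks at any moment. A larger block size keeps
  # the legs busier:
  dd if=/dev/zero of=/dev/mapper/stripe_dev bs=16M count=2000 oflag=direct

Running several dd streams in parallel with oflag=direct would have a
similar effect of keeping more I/O in flight per disk.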
Cheers,
Hannes
--
Dr. Hannes Reinecke zSeries & Storage
hare at suse.de +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)