[linux-lvm] Re: 2.6.3-rc2-mm1 (dm)
Miquel van Smoorenburg
miquels at cistron.nl
Fri Feb 13 09:11:24 UTC 2004
According to Andrew Morton:
The maintain-bio-ordering patch mostly fixes the performance problem
I was seeing on XFS-over-LVM-on-3ware-raid5. (See my earlier message
at http://www.ussg.iu.edu/hypermail/linux/kernel/0312.3/0684.html )
However there's still one issue:
I created an LVM volume on /dev/sda2, called /dev/vg0/test. Then
I created and mounted an XFS filesystem on /dev/vg0/test. XFS uses
a 512 byte blocksize by default, but the underlying /dev/sda2
device had a soft blocksize of 4096 (default after boot is 1024,
but I had been mucking around with it so it got set to 4096).
As a result, I couldn't get more than 35 MB/sec write speed out
of XFS mounted on the LVM device.
I added this little patch:
--- drivers/md/dm-table.c.ORIG 2004-02-12 20:49:47.000000000 +0100
+++ drivers/md/dm-table.c 2004-02-12 20:56:59.000000000 +0100
@@ -361,7 +361,7 @@
d->bdev = bdev;
- set_blocksize(bdev, d->bdev->bd_block_size);
+ set_blocksize(bdev, 512);
This forces the underlying device(s) to a soft blocksize of 512, and
I had my 80 MB/sec write speed back!
I'm not sure that always forcing the underlying device's blocksize
to 512 is the right solution. I think that set_blocksize on a dm
device should also set the blocksize of the underlying devices, but
that probably means adding an extra hook so that set_blocksize can
call bdev->bd_disk->fops->set_blocksize(bdev, size), which, in the
case of dm, would in turn call set_blocksize for each underlying
device.