[linux-lvm] poor read performance on rbd+LVM, LVM overload

Mike Snitzer snitzer at redhat.com
Mon Oct 21 18:27:18 UTC 2013


On Mon, Oct 21 2013 at  2:06pm -0400,
Christoph Hellwig <hch at infradead.org> wrote:

> On Mon, Oct 21, 2013 at 11:01:29AM -0400, Mike Snitzer wrote:
> > It isn't DM that splits the IO into 4K chunks; it is the VM subsystem
> > no?
> 
> Well, it's the block layer based on what DM tells it.  Take a look at
> dm_merge_bvec
> 
> From dm_merge_bvec:
> 
> 	/*
>          * If the target doesn't support merge method and some of the devices
>          * provided their merge_bvec method (we know this by looking at
>          * queue_max_hw_sectors), then we can't allow bios with multiple vector
>          * entries.  So always set max_size to 0, and the code below allows
>          * just one page.
>          */
> 	
> Although it's not the general case, just if the driver has a
> merge_bvec method.  But this happens if you're using DM on top of MD,
> where I saw it, as well as on rbd, which is why it's correct in this
> context, too.
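
(For context: in the 3.x drivers/md/dm.c that comment annotates the
fallback branch of dm_merge_bvec, which from memory is roughly:

	if (max_size && ti->type->merge)
		max_size = ti->type->merge(ti, bvm, biovec, max_size);
	else if (queue_max_hw_sectors(q) <= PAGE_SIZE >> 9)
		max_size = 0;

i.e. the single-page fallback is only taken when the target has no
.merge hook and one of the underlying devices exposed a merge_bvec_fn,
which IIRC dm-table records by clamping max_hw_sectors down to one
page.)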

Right, but only if the DM target being used doesn't provide a
.merge method.  I don't think it was ever shared which DM target is in
use here... but both the linear and stripe DM targets provide a .merge
method.
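
For reference, the linear target's .merge looks roughly like this
(paraphrasing the 3.x drivers/md/dm-linear.c, so treat it as a sketch
rather than a verbatim copy):

static int linear_merge(struct dm_target *ti, struct bvec_merge_data *bvm,
			struct bio_vec *biovec, int max_size)
{
	struct linear_c *lc = ti->private;
	struct request_queue *q = bdev_get_queue(lc->dev->bdev);

	/* underlying device has no merge constraint: accept max_size */
	if (!q->merge_bvec_fn)
		return max_size;

	/* remap the sector, then ask the underlying device (e.g. rbd)
	 * how much it will accept at that offset */
	bvm->bi_bdev = lc->dev->bdev;
	bvm->bi_sector = linear_map_sector(ti, bvm->bi_sector);

	return min(max_size, q->merge_bvec_fn(q, bvm, biovec));
}

It just forwards the bvec-merge query to the underlying device after
remapping the sector, so the one-page-per-bio fallback in dm_merge_bvec
shouldn't be hit for a linear mapping.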
 
> Sorry for over generalizing a bit.

No problem.

