[linux-lvm] Testing the new LVM cache feature

Mike Snitzer snitzer at redhat.com
Fri May 30 14:44:54 UTC 2014


On Fri, May 30 2014 at 10:36am -0400,
Richard W.M. Jones <rjones at redhat.com> wrote:

> On Fri, May 30, 2014 at 10:29:26AM -0400, Mike Snitzer wrote:
> > sequential_threshold is only going to help the md5sum's IO get promoted
> > (assuming you're having it read a large file).
> 
> Note the fio test runs on the virt.* files.  I'm using md5sum in an
> attempt to pull those same files into the SSD.
> 
> > > Is there a way to print the current settings?
> > > 
> > > Could writethrough be enabled?  (I'm supposed to be using writeback).
> > > How do I find out?
> > 
> > dmsetup status vg_guests-libvirt--images
> 
> Here's dmsetup status on various objects:
> 
> $ sudo dmsetup table
> vg_guests-lv_cache_cdata: 0 419430400 linear 8:33 2099200
> vg_guests-lv_cache_cmeta: 0 2097152 linear 8:33 2048
> vg_guests-home: 0 209715200 linear 9:127 2048
> vg_guests-libvirt--images: 0 1677721600 cache 253:1 253:0 253:2 128 0 default 0
> vg_guests-libvirt--images_corig: 0 1677721600 linear 9:127 2055211008
> $ sudo dmsetup status vg_guests-libvirt--images
> 0 1677721600 cache 8 10162/262144 128 39839/3276800 1087840 821795 116320 2057235 0 39835 0 1 writeback 2 migration_threshold 2048 mq 10 random_threshold 4 sequential_threshold 0 discard_promote_adjustment 1 read_promote_adjustment 0 write_promote_adjustment 0
> $ sudo dmsetup status vg_guests-lv_cache_cdata
> 0 419430400 linear 
> $ sudo dmsetup status vg_guests-lv_cache_cmeta
> 0 2097152 linear 
> $ sudo dmsetup status vg_guests-libvirt--images_corig
> 0 1677721600 linear 
> 
> > But I'm really wondering if your IO is misaligned (like my earlier email
> > brought up).  It _could_ be promoting 2 64K blocks from the origin for
> > every 64K IO.
> 
> There's nothing obviously wrong ...

I'm not talking about alignment relative to the physical device's
limits.  I'm talking about alignment of ext4's data areas relative to
the cache's 64K block boundaries.
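
One quick way to eyeball that, assuming a 4K ext4 block size, a
64K-aligned filesystem start within the origin LV (both worth checking
with dumpe2fs -h), and the e2fsprogs 1.42-style filefrag -v output: a
64K cache block is then 16 filesystem blocks, so each extent's
physical_offset should be a multiple of 16.  The file path below is
just an example; point it at the actual virt.* files fio touches:

  $ sudo filefrag -v /var/lib/libvirt/images/virt.0 | awk '
      /^ *[0-9]+:/ { total++; if ($4 % 16) bad++ }   # extent rows only
      END { printf "%d of %d extents not 64K-aligned\n", bad, total }'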

Another point of concern: how fragmented is the ext4 free space?  It
could be that ext4 cannot allocate contiguous 64K regions, in which
case a lot more IO would get pulled into the cache.
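
e2freefrag gives a quick read on that; with -c 64 it reports how much
of the free space is available as whole 64K chunks (the device path
below is assumed from your dmsetup table):

  # Report free-space fragmentation in terms of 64K chunks.
  $ sudo e2freefrag -c 64 /dev/vg_guests/libvirt-images

If only a small fraction of the free space shows up as whole 64K
chunks, new allocations will tend to straddle cache block boundaries.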

Can you try reducing the cache blocksize to 32K (the lowest we support
at the moment; it'll require you to remove the cache and recreate it)
to see if performance for this 64K random IO workload improves?  If it
does, that would add weight to my alignment concerns.
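
For reference, one possible recreate sequence (a sketch only: the LV
and PV names are guesses from your dmsetup table, where 8:33 would be
/dev/sdc1, and the exact options depend on your lvm2 version):

  # Flush dirty blocks and drop the cache pool; the origin LV survives.
  $ sudo lvremove vg_guests/lv_cache
  # Recreate the pool with a 32K chunk size and writeback mode, then
  # reattach it to the origin LV.
  $ sudo lvcreate --type cache-pool -L 200G --chunksize 32k \
        --cachemode writeback -n lv_cache vg_guests /dev/sdc1
  $ sudo lvconvert --type cache --cachepool vg_guests/lv_cache \
        vg_guests/libvirt-images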

Mike