[linux-lvm] pvmove speed

Mark Mielke mark.mielke at gmail.com
Sat Feb 18 16:55:43 UTC 2017


One aspect of this discussion has confused me, and I was hoping somebody
would address it...

I believe I have seen slower-than-expected pvmove times in the past (but I
only rarely do it, so it has never particularly concerned me). When I saw
it, my first assumption was that the pvmove had to be done "carefully" to
ensure that every segment was safely moved in such a way that it was
definitely in one place, or definitely in the other, and never "neither" or
"both". This is particularly important if the volume is mounted and being
actively used, which was the case for me.

Would these safety checks not reduce overall performance? Sure, it would
transfer one segment at full speed, but then it might pause to do some
bookkeeping, making sure to fully sync the data and metadata out to both
physical volumes and ensure that the move was still crash-safe?
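
My (possibly wrong) understanding is that pvmove does this by setting up a
temporary mirror between the old and new extents, letting each segment sync,
and only then updating the VG metadata before moving on to the next one -
which would explain pauses between full-speed bursts. Roughly, and treating
the VG/PV names below as placeholders for your own, this is how I'd watch it
happen:

    # start the move in the background, reporting progress every 10 seconds
    pvmove -b -i 10 /dev/sdb1 /dev/sdc1

    # the temporary pvmove LV shows up as a hidden LV with a sync percentage
    lvs -a -o name,copy_percent,devices vg0

    # lower-level view of the mirror target doing the copying
    dmsetup status | grep -i mirror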

For SAN speeds - I don't think LVM has ever been shown to be a bottleneck
for me. On our new OpenStack cluster, I am seeing 550+ MByte/s with
iSCSI-backed disks, and 700+ MByte/s with NFS-backed disks (with read and
write caches disabled). I don't even look at LVM as a cause for concern
here, as there is usually something else at play. In fact, on the same
OpenStack cluster, I am using LVM on NVMe drives, with an XFS LV backing the
QCOW2 images, and I can get 2,000+ MByte/s sustained with this setup. Again,
LVM isn't even a performance consideration for me.
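
For what it's worth, when I want a number I can compare across setups, I run
something along these lines (sequential direct writes with fio, bypassing the
page cache; the LV path and sizes are just placeholders, not our exact
configuration):

    # sequential 1MiB direct writes against a scratch LV
    fio --name=seqwrite --filename=/dev/vg0/scratch --rw=write \
        --bs=1M --ioengine=libaio --iodepth=16 --direct=1 \
        --size=8G --runtime=60 --group_reporting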


On Sat, Feb 18, 2017 at 3:12 AM, Roy Sigurd Karlsbakk <roy at karlsbakk.net>
wrote:

> >    200-500... impressive for a SAN... but considering the bandwidth
> > you have to the box (4x1+10), I'd hope for at least 200 (what I get
> > with just a 10)... so there must be some parallel TCP channels there...
> > What showed those speeds?  I'm _guessing_, but it's likely that pvmove
> > is single-threaded. So it could be related to the I/O transfer size, as
> > @pattonme was touching on, since multi-threaded I/O can slow things
> > down for local I/O when local I/O is in the 0.5-1 GByte/s and higher range.
>
> Well, it’s a Compellent thing with net storage of around 350TiB, of which
> around 20TiB is on SSDs, so really, it should be good. Tiering is turned off
> during the migration of this data, though (that is, we’re migrating the data
> directly to a low tier, since it’s 40TiB worth of archives).
>
> >    Curious -- do you know your network cards' MTU size?
> > I know that even with 1Gb cards I got a 2-4X speed improvement over
> > standard 1500-byte packets (running 9000/9014-byte MTUs over the local net).
>
> Everything’s using jumbo frames (9000) on the SAN (storage network), and
> it’s a dedicated network with its own switches and copper/fiber. The rest
> of the system works well (at least the Compellent things; the EqualLogic is
> having a bad nervous breakdown on its way to the cemetery, but that’s
> another story). The Exchange servers running off it gave us around 400MB/s
> (that is, wire speed) during the last backup. That wasn’t raw I/O from
> VMware, whereas this is, but then again, I should at least be able to
> sustain a gigabit link (the EQL storage is hardly in use anymore, perhaps
> that’s why it’s depressed), and as shown, I’m limited to around half of that.
>
> Vennlig hilsen / Best regards
>
> roy
> --
> Roy Sigurd Karlsbakk
> (+47) 98013356
> http://blogg.karlsbakk.net/
> GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
> --
> Da mihi sis bubulae frustrum assae, solana tuberosa in modo Gallico
> fricta, ac quassum lactatum coagulatum crassum. Quod me nutrit me destruit.
>




-- 
Mark Mielke <mark.mielke at gmail.com>