[linux-lvm] pvmove speed

L A Walsh lvm at tlinx.org
Fri Feb 17 00:44:36 UTC 2017


Roy Sigurd Karlsbakk wrote:
>> I'm doing pvmove of some rather large volumes from a Dell Equallogic system to
>> Dell Compellent. Both are connected on iSCSI ....
----
    I've never had very good speeds over a network, and I've gotten the
impression that iSCSI is slower than some other network protocols.

    Locally (RAID=>RAID) I got about 400-500MB/s, but the best I've gotten
recently over a 10Gb network card has been about 200MB/s.  Oddly, when I
first got the cards I was getting up to 400-600MB/s, but after MS started
pushing Win10 and "updates" to Win7 (my transfers have been between a
Win7SP1 workstation and a linux server), my speed dropped to barely over
100MB/s, which is about what I got with a 1Gb card.  I wasn't able to get
any better speeds using Windows' single-threaded SMB protocol even over
2x10Gb (I have a dedicated link between workstation and server) -- but I
did notice the CPU maxing out on either the Windows or the Samba side,
depending on packet size and which end was sending.
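
    To separate raw link speed from protocol/CPU overhead, something like
iperf3 (assuming it's installed on both ends; 'server' below is just a
placeholder for the other box's address) will show what the wire alone
can do:

    # on the receiving box:
    iperf3 -s
    # on the sending box; -t 30 runs for 30s, -P 4 uses 4 parallel streams:
    iperf3 -c server -t 30 -P 4

If iperf3 hits line rate but the file transfer doesn't, the bottleneck is
the protocol or the CPU, not the network.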

    50MB/s sounds awfully slow, but not out of the ballpark -- I benched a
few NAS solutions at home and could rarely get above 10MB/s (usually
slower), so I gave up on commercial, out-of-the-box solutions and went
with a linux server.  That's still a lot slower than I'd like (100-200MB/s
sustained, though those figures may change with the next MS "update").
The 4x1Gb connection you have may also be costing you more CPU than it's
worth...  One problem I noted on 2x10G was too many duplicate packets --
so I'm running 1x10Gb now, but still maxing out around 200MB/s over an
unencrypted SMB/CIFS session.
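
    A rough way to check for that sort of thing (the interface name 'eth0'
below is just an example) is to watch the kernel's retransmit and error
counters during a transfer:

    # TCP retransmit / duplicate-segment counters:
    netstat -s | grep -i -e retrans -e dup
    # per-interface error and drop counts:
    ip -s link show eth0

If those counters climb while you're copying, the link itself is suspect.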

    I'm not sure this could be an LVM problem, given its local speed for
pvmoves -- do you have any measurement of faster file I/O throughput over
iSCSI on your connections?
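
    If not, one quick way to get a baseline (assuming GNU dd; /dev/sdX is a
placeholder for whatever device node your iSCSI LUN shows up as) is a raw
sequential read that bypasses the page cache:

    # read 4GiB directly from the iSCSI-backed device (read-only test):
    dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct

Comparing that figure against the same test on the local array should show
whether it's the iSCSI path or LVM that's slow.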




