[rhelv6-list] Limiting pvmove speed with control groups doesn't work as expected

Gianluca Cecchi gianluca.cecchi at gmail.com
Tue Jul 26 14:00:41 UTC 2016


Hello,
I'm testing what is in the subject, because I have to move from one SAN
storage array to another and I want to limit the impact on the source
storage.

The system is RHEL 6.5, and the source and target of the pvmove are two
multipath devices.

Basic steps done:

$ sudo yum install libcgroup

$ sudo service cgconfig start
Starting cgconfig service:                                 [  OK  ]

$ sudo service cgconfig status
Running

The source multipath device has 8 paths, divided into two groups with 4
paths active:

$ sudo multipath -l 360a9800037543543592442595559337a
360a9800037543543592442595559337a dm-2 NETAPP,LUN
size=50G features='4 queue_if_no_path pg_init_retries 50
retain_attached_hw_handle' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| |- 7:0:2:13 sdah 66:16   active undef running
| |- 7:0:3:13 sdaw 67:0    active undef running
| |- 8:0:2:13 sdcp 69:208  active undef running
| `- 8:0:3:13 sdde 70:192  active undef running
`-+- policy='round-robin 0' prio=0 status=enabled
  |- 7:0:0:13 sdd  8:48    active undef running
  |- 7:0:1:13 sds  65:32   active undef running
  |- 8:0:0:13 sdbl 67:240  active undef running
  `- 8:0:1:13 sdca 68:224  active undef running

$ ll /dev/dm-2
brw-rw---- 1 root disk 253, 2 Jul 25 13:30 /dev/dm-2
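
Just to double-check that 253:2 really is the multipath map (a quick
sketch, using the map name shown above):

dmsetup info -c -o name,major,minor 360a9800037543543592442595559337a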

$ sudo cgcreate -g blkio:/30M

# echo "253:2 31457280" > /cgroup/blkio/30M/blkio.throttle.read_bps_device

The multipath device is a PV of a VG:
$ sudo pvs /dev/mapper/360a9800037543543592442595559337a
  PV                                            VG              Fmt  Attr PSize  PFree
  /dev/mapper/360a9800037543543592442595559337a VG_ALMTEST_DATA lvm2 a--  50.00g    0

Without limits, if I execute this command

dd if=/dev/mapper/360a9800037543543592442595559337a of=/dev/null bs=1024k count=10240

this is what I get from iostat -d 3 -m -p /dev/sdah,/dev/sdaw,/dev/sdcp,/dev/sdde:

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sdah            333.67        40.81         0.00        122          0
sdaw            333.67        40.76         0.00        122          0
sdcp            333.67        41.31         0.00        123          0
sdde            334.00        41.04         0.00        123          0

Instead, when called with cgexec:

# cgexec -g blkio:30M time dd if=/dev/mapper/360a9800037543543592442595559337a of=/dev/null bs=1024k count=10240

I get
Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sdah             64.00         7.51         0.00         22          0
sdaw             64.00         7.60         0.00         22          0
sdcp             64.00         7.43         0.00         22          0
sdde             64.33         7.45         0.00         22          0

So far so good. I have my 30MB/s...
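
A quick way to cross-check that the limit is really being enforced (a
sketch, assuming the same /cgroup/blkio/30M group and the 253:2 device)
is to sample the group's own throttle counters while dd runs:

# Read bytes accounted to the cgroup for 253:2, sampled 10 seconds apart
F=/cgroup/blkio/30M/blkio.throttle.io_service_bytes
B1=$(awk '$1=="253:2" && $2=="Read" {print $3}' $F)
sleep 10
B2=$(awk '$1=="253:2" && $2=="Read" {print $3}' $F)
echo "$(( (B2 - B1) / 10 / 1024 / 1024 )) MB/s"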

But then, if I try to use the same approach with the pvmove command:

# cgexec -g blkio:30M pvmove -i 60 /dev/mapper/360a9800037543543592442595559337a /dev/mapper/3600a098038303769752b495147377857

I get

Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sdah             29.00        14.50         0.00         43          0
sdaw             29.00        14.37         0.00         43          0
sdcp             28.67        14.29         0.00         42          0
sdde             28.67        14.33         0.00         43          0

So it seems to settle at some intermediate rate....

The target multipath device looks like this:

$ sudo multipath -l /dev/mapper/3600a098038303769752b495147377857
3600a098038303769752b495147377857 dm-39 NETAPP,LUN C-Mode
size=50G features='4 queue_if_no_path pg_init_retries 50
retain_attached_hw_handle' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 7:0:5:13 sdds 71:160  active undef running
  `- 8:0:5:13 sdee 128:96  active undef running

The elapsed time also confirms this (about 15 minutes for the 50 GB
pvmove): the aggregate bandwidth was indeed around 60 MB/s instead of
30 MB/s. The iotop command shows the same; apparently the main process
accounting for the I/O is named kcopyd, reading at about 60 MB/s....
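
This is roughly how I compared what is actually confined by the group
against what iotop shows doing the reads (a sketch; the kcopyd name comes
from the iotop output):

# PIDs currently placed in the 30M group by cgexec (pvmove and its children)
cat /cgroup/blkio/30M/tasks

# The kernel threads that iotop shows doing the copy
ps -ef | grep '\[kcopyd' | grep -v grep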

I also capped it to 10 MB/s with another control group, but the real
bandwidth during pvmove remains around 60 MB/s.

I also applied the same policy to the individual active paths, with the same results.
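
For completeness, the per-path variant looked roughly like this (a
sketch; the device names are the four active paths from the multipath -l
output above, and /sys/block/<dev>/dev already holds the major:minor pair):

# Add a per-path read limit for each active path of the source LUN
for d in sdah sdaw sdcp sdde; do
    echo "$(cat /sys/block/$d/dev) 31457280" > /cgroup/blkio/30M/blkio.throttle.read_bps_device
done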

Thanks in advance for any insight or suggestions

Gianluca