[dm-devel] DM-Multipath limiting IO to 127.000 MB/Sec

Merla, ShivaKrishna ShivaKrishna.Merla at netapp.com
Tue Sep 23 14:45:54 UTC 2014


> Hi Gurus,

> I'm experiencing an issue with DM-Multipath on Centos 6.5.
> dd reads 2.5 times faster when reading from a device directly compared to the multipathed device.
> I'm using device-mapper-multipath-0.4.9-72.el6_5.2.x86_64.
> Apparently Multipath is limiting throughput to 127.000 MB/s even though the underlying device is capable of more, which to me looks like a software-imposed limit.

> [root at JS2 ~]# multipath -l mpathaf
> mpathaf (--------) dm-89 Rorke,G4S-16L-4F8
> size=7.3T features='0' hwhandler='0' wp=rw
> `-+- policy='round-robin 0' prio=0 status=active
>   |- 0:0:0:1  sdb  8:16   active undef running
>   |- 0:0:10:2 sdbb 67:80  active undef running
>   |- 1:0:11:1 sddl 71:48  active undef running
>   `- 1:0:12:2 sddp 71:112 active undef running

> [root at JS2 ~]#dd if=/dev/sddl of=/dev/null bs=128k count=32000 skip=14000
> 32000+0 records in
> 32000+0 records out
> 4194304000 bytes (4.2 GB) copied, 11.8166 s, 355 MB/s

> [root at JS2 ~]# dd if=/dev/dm-89 of=/dev/null bs=128k count=32000 skip=84000 # skipping 2GB array cache data 
> 32000+0 records in
> 32000+0 records out
> 4194304000 bytes (4.2 GB) copied, 31.4959 s, 133 MB/s


> This is on a machine connected over 2x8Gb/s links to a fiber switch and then to an HDX4 disk array, which exports the volume on 2 of its 4 8Gb/s fiber ports.

> Multipath.conf looks like this:
>
> defaults {
>         user_friendly_names yes
> }
 
> devices {
>
>        device { # normal HDX4 Disk arrays
>                vendor "Rorke"
>                product "G4S-16L-4F8"
>                path_grouping_policy multibus
>             }
>
> }
 
> blacklist {
> #blacklisting almost everything
>         devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
>         devnode "^hd[a-z][0-9]*"
> }

Check the request queue parameters for both the dm and sd devices. The parameters that could affect this are:

#cat /sys/block/<dm-x>/queue/max_sectors_kb
#cat /sys/block/<dm-x>/queue/max_segments
#cat /sys/block/<dm-x>/queue/max_segment_size

#cat /sys/block/<sdx>/queue/max_sectors_kb
#cat /sys/block/<sdx>/queue/max_segments
#cat /sys/block/<sdx>/queue/max_segment_size
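
For a quick side-by-side comparison you can dump these limits for the multipath device and all of its paths in one pass. This is only a sketch based on the names in this thread (dm-89 is the mpathaf map; its slaves directory lists the underlying sd devices):

for dev in dm-89 $(ls /sys/block/dm-89/slaves); do
    echo "== $dev =="
    for param in max_sectors_kb max_segments max_segment_size; do
        echo "$param: $(cat /sys/block/$dev/queue/$param)"
    done
done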

You can monitor the average request size (avgrq-sz) on both the dm and sd devices using
#iostat -xm -d /dev/dm-x /dev/sdx -t 2
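
For example, using the device names from this thread (a sketch; run the read in one shell and the monitor in another):

# shell 1: repeat the read test against the multipath device
dd if=/dev/dm-89 of=/dev/null bs=128k count=32000 skip=84000
# shell 2: watch the average request size on the dm device and one of its paths
iostat -xm -d /dev/dm-89 /dev/sddl -t 2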

Compare this with the request size seen while running I/O against the sd device directly; avgrq-sz is reported in 512-byte sectors, so a value of around 254 on the dm device would correspond to 127 KB requests. There was a patch upstream to fix block stacking limits on dm devices.
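
If the dm device's max_sectors_kb turns out to be lower than that of its paths, one way to test whether that is the cap is to raise it on each path and then on the dm device. This is just a sketch: 512 is an assumed value, must not exceed the device's max_hw_sectors_kb, and is neither persistent nor guaranteed to survive a map reload.

# assumed value of 512 KB, for illustration only
for path in sdb sdbb sddl sddp; do
    echo 512 > /sys/block/$path/queue/max_sectors_kb
done
echo 512 > /sys/block/dm-89/queue/max_sectors_kb

Then re-run the dd comparison above and see whether the dm device's throughput changes.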







