[dm-devel] DM-Multipath limiting IO to 127.000 MB/Sec
Ali Poursamadi
ali at firstperson.is
Tue Sep 23 00:30:16 UTC 2014
Hi Gurus,
I'm experiencing an issue with DM-Multipath on CentOS 6.5:
dd reads about 2.5 times faster from a path device directly than from the
multipathed device.
I'm using device-mapper-multipath-0.4.9-72.el6_5.2.x86_64.
Multipath appears to be capping throughput at around 127 MB/s even though
the underlying device is capable of more, which looks to me like a
software-imposed limit.
[root at JS2 ~]# multipath -l mpathaf
mpathaf (--------) dm-89 Rorke,G4S-16L-4F8
size=7.3T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 0:0:0:1 sdb 8:16 active undef running
|- 0:0:10:2 sdbb 67:80 active undef running
|- 1:0:11:1 sddl 71:48 active undef running
`- 1:0:12:2 sddp 71:112 active undef running
[root at JS2 ~]# dd if=/dev/sddl of=/dev/null bs=128k count=32000 skip=14000
32000+0 records in
32000+0 records out
4194304000 bytes (4.2 GB) copied, 11.8166 s, 355 MB/s
[root at JS2 ~]# dd if=/dev/dm-89 of=/dev/null bs=128k count=32000 skip=84000
# skip=84000 to read past the array's 2 GB cache
32000+0 records in
32000+0 records out
4194304000 bytes (4.2 GB) copied, 31.4959 s, 133 MB/s
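Not something shown in the post, but one common culprit for exactly this symptom is the kernel read-ahead setting on the dm device: the raw sd* paths often end up with a larger read-ahead than the multipath device, which throttles buffered sequential dd reads. A quick diagnostic sketch (device names below are examples, adjust to your system):

```shell
# Print the kernel read-ahead setting for each block device; a small value
# on the dm-* device compared to the sd* paths would explain slower
# buffered sequential reads through multipath.
for q in /sys/block/*/queue/read_ahead_kb; do
    [ -e "$q" ] || continue
    dev=${q#/sys/block/}
    printf '%-8s %s KiB\n' "${dev%%/*}" "$(cat "$q")"
done

# If the dm device's read-ahead is low, it can be raised (the value passed
# to blockdev is in 512-byte sectors), e.g.:
#   blockdev --setra 4096 /dev/dm-89
# Re-running the comparison with direct I/O takes read-ahead and the page
# cache out of the picture entirely:
#   dd if=/dev/dm-89 of=/dev/null bs=1M count=4000 iflag=direct
```

If the direct-I/O numbers for the path device and the dm device match, the limit is in the buffered-read path rather than in multipath itself.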
This is on a machine connected over 2x 8 Gb/s links to a fibre switch and
then to an HDX4 disk array, which exports the volume on 2 of its 4 8 Gb/s
fibre ports.
multipath.conf looks like this:
defaults {
    user_friendly_names yes
}
devices {
    device { # normal HDX4 disk arrays
        vendor "Rorke"
        product "G4S-16L-4F8"
        path_grouping_policy multibus
    }
}
blacklist {
    # blacklisting almost everything
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z][0-9]*"
}
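Another knob worth checking, though this is an assumption on my part rather than anything shown in the post: with round-robin across four paths, how many I/Os go down one path before switching is controlled by rr_min_io (BIO-based multipath) or rr_min_io_rq (request-based multipath, which RHEL/CentOS 6 uses). A hypothetical tuning of the device entry above might look like:

```
device { # hypothetical tuning for the HDX4 entry above
    vendor "Rorke"
    product "G4S-16L-4F8"
    path_grouping_policy multibus
    rr_min_io_rq 1    # request-based multipath (RHEL/CentOS 6)
}
```

Whether a small or large value helps depends on the array; it is worth benchmarking a few values rather than taking this fragment as-is.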
Any help/pointer/information is highly appreciated.
Thanks.
Ali Poursamadi