[dm-devel] DM-Multipath limiting IO to 127.000 MB/Sec

Merla, ShivaKrishna ShivaKrishna.Merla at netapp.com
Tue Sep 23 22:17:12 UTC 2014


I think the reason behind this is read_ahead_kb, which is set to 128 by default. Can you try passing "iflag=direct" to the dd command and verify?
On the sd device, read prefetching is likely what gives the better performance. Since you are doing sequential I/O with the round-robin path selector, an alternate
path is selected for each I/O (rr_min_io_rq is set to 1), so prefetching does not help. You are seeing this because of the nature of this test (128K sequential I/O from a single thread).
In a real workload there will be overlapping I/O from multiple I/O threads, and reads will benefit from prefetching on the dm device.
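
For example, a direct-I/O run bypasses the page cache (and therefore read-ahead), so it shows the raw path throughput, and read_ahead_kb on the dm device can be inspected and raised for the buffered case. This is only a sketch: the dm-89 name and dd arguments are taken from this thread, and the 1024 value is an illustrative starting point, not a recommendation.

#dd if=/dev/dm-89 of=/dev/null bs=128k count=32000 skip=84000 iflag=direct
#cat /sys/block/dm-89/queue/read_ahead_kb
#echo 1024 > /sys/block/dm-89/queue/read_ahead_kb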

-Shiva

From: dm-devel-bounces at redhat.com [mailto:dm-devel-bounces at redhat.com] On Behalf Of Ali Poursamadi
Sent: Tuesday, September 23, 2014 2:57 PM
To: device-mapper development
Subject: Re: [dm-devel] DM-Multipath limiting IO to 127.000 MB/Sec


Thanks ShivaKrishna,

max_sectors_kb, max_segments and max_segment_size are identical on the sdX devices and the relevant dm-X device, and are as follows:
max_sectors_kb : 512
max_segments : 1024
max_segment_size :  65536

I ran the iostat tests. It turns out the average request size (avgrq-sz) shows close readings (244 and 251 for dm-X and sdX respectively), but the average queue size (avgqu-sz) is about 1.9 for sdX vs 1.1 for dm-X, and device utilization (%util) is ~99% on sdX vs ~60% on dm-X. It seems that dm-multipath is under-utilizing the underlying devices.
Is there a way to change this behavior?
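
One knob mentioned elsewhere in this thread is rr_min_io_rq: letting each path take more consecutive requests before round-robin switches paths might allow per-path prefetching to help again. A minimal sketch of how that could look in the device section of multipath.conf (the value 32 is purely illustrative and untested here; raising read_ahead_kb on the dm device is the other option discussed above):

       device { # normal HDX4 Disk arrays
               vendor "Rorke"
               product "G4S-16L-4F8"
               path_grouping_policy multibus
               rr_min_io_rq 32
            }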
-Ali Poursamadi



On Tue, Sep 23, 2014 at 7:45 AM, Merla, ShivaKrishna <ShivaKrishna.Merla at netapp.com> wrote:
> Hi Gurus,

> I'm experiencing an issue with DM-Multipath on Centos 6.5.
> DD reads 2.5 times faster when reading from a device directly compared to Multipath-ed device.
> I'm using device-mapper-multipath-0.4.9-72.el6_5.2.x86_64.
> Apparently multipath is limiting throughput to 127.000 MB/s even though the underlying device is capable of more, which to me looks like a software-imposed limit.

> [root at JS2 ~]# multipath -l mpathaf
> mpathaf (--------) dm-89 Rorke,G4S-16L-4F8
> size=7.3T features='0' hwhandler='0' wp=rw
> `-+- policy='round-robin 0' prio=0 status=active
>   |- 0:0:0:1  sdb  8:16   active undef running
>   |- 0:0:10:2 sdbb 67:80  active undef running
>   |- 1:0:11:1 sddl 71:48  active undef running
>   `- 1:0:12:2 sddp 71:112 active undef running

> [root at JS2 ~]#dd if=/dev/sddl of=/dev/null bs=128k count=32000 skip=14000
> 32000+0 records in
> 32000+0 records out
> 4194304000 bytes (4.2 GB) copied, 11.8166 s, 355 MB/s

> [root at JS2 ~]# dd if=/dev/dm-89 of=/dev/null bs=128k count=32000 skip=84000 # skipping 2GB array cache data
> 32000+0 records in
> 32000+0 records out
> 4194304000 bytes (4.2 GB) copied, 31.4959 s, 133 MB/s


> This is on a machine connected over 2x 8Gb/s links to a fiber switch and then to an HDX4 disk array, which exports the volume on 2 of its 4 8Gbps fiber ports.

> Multipath.conf looks like this:
>
> defaults {
>         user_friendly_names yes
> }

> devices {
>
>        device { # normal HDX4 Disk arrays
>                vendor "Rorke"
>                product "G4S-16L-4F8"
>                path_grouping_policy multibus
>             }
>
> }

> blacklist {
> #blacklisting almost everything
>         devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
>         devnode "^hd[a-z][0-9]*"
> }
Check the request queue parameters for both the dm and sd devices. The parameters that could affect this are:

#cat /sys/block/<dm-x>/queue/max_sectors_kb
#cat /sys/block/<dm-x>/queue/max_segments
#cat /sys/block/<dm-x>/queue/max_segment_size

#cat /sys/block/<sdx>/queue/max_sectors_kb
#cat /sys/block/<sdx>/queue/max_segments
#cat /sys/block/<sdx>/queue/max_segment_size
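
A compact way to compare them side by side (a convenience one-liner, using dm-89 and sddl from this thread purely as example device names):

#for p in max_sectors_kb max_segments max_segment_size; do echo "$p: dm-89=$(cat /sys/block/dm-89/queue/$p) sddl=$(cat /sys/block/sddl/queue/$p)"; done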

You can monitor the average request size (avgrq-sz) on both the dm and sd devices using:
#iostat -xm -d /dev/dm-x /dev/sdx -t 2

Compare this with the request size while running I/O on the sd device directly. There was a patch upstream to fix
block stacking limits on dm devices.





--
dm-devel mailing list
dm-devel at redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel




