<div dir="ltr"><div><div><div><br></div><div>Thanks ShivaKrishna,<br></div><div><br></div><div>max_sectors_kb , max_segments, max_segment_size is identical on both sdx-s and the relevan dm-x device and are as follows : <br></div>max_sectors_kb : 512<br>max_segments : 1024<br>max_segment_size : 65536<br></div><div><br>I did the iostat tests, turns out that average Request Size show close readings ( 244,251 for dm-x, sdx respectively ) but the Average Queue Size is close to 1.9 for sdx vs 1.1 for dm-x and also the device utilization (%util) is ~99% on sd-x vs ~60% on dm-xx. It seems that dm multipath is under-utilizing the underlaying devices.<br></div><div>is there a way to change this behavior ? <br><br></div><div>-Ali Poursamadi<br><br></div></div><div><div><div><div><div><br><br><br></div></div></div></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Sep 23, 2014 at 7:45 AM, Merla, ShivaKrishna <span dir="ltr"><<a href="mailto:ShivaKrishna.Merla@netapp.com" target="_blank">ShivaKrishna.Merla@netapp.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">> Hi Gurus,<br>
>
> I'm experiencing an issue with DM-Multipath on CentOS 6.5.
> dd reads 2.5 times faster when reading from a device directly compared to the multipathed device.
> I'm using device-mapper-multipath-0.4.9-72.el6_5.2.x86_64.
> Apparently multipath is limiting throughput to 127.000 MB/s even though the underlying device is capable of more, which to me looks like a software-imposed limit.
>
> [root@JS2 ~]# multipath -l mpathaf
> mpathaf (--------) dm-89 Rorke,G4S-16L-4F8
> size=7.3T features='0' hwhandler='0' wp=rw
> `-+- policy='round-robin 0' prio=0 status=active
>   |- 0:0:0:1  sdb  8:16   active undef running
>   |- 0:0:10:2 sdbb 67:80  active undef running
>   |- 1:0:11:1 sddl 71:48  active undef running
>   `- 1:0:12:2 sddp 71:112 active undef running
>
> [root@JS2 ~]# dd if=/dev/sddl of=/dev/null bs=128k count=32000 skip=14000
> 32000+0 records in
> 32000+0 records out
> 4194304000 bytes (4.2 GB) copied, 11.8166 s, 355 MB/s
>
> [root@JS2 ~]# dd if=/dev/dm-89 of=/dev/null bs=128k count=32000 skip=84000   # skipping past 2GB of array cache data
> 32000+0 records in
> 32000+0 records out
> 4194304000 bytes (4.2 GB) copied, 31.4959 s, 133 MB/s
>
> This is on a machine connected over 2x 8Gb/s links to a fiber switch and then to an HDX4 disk array, which exports the volume on 2 of its 4 8Gbps fiber ports.
>
> multipath.conf looks like this:
>
> defaults {
>         user_friendly_names yes
> }
>
> devices {
>         device {    # normal HDX4 disk arrays
>                 vendor               "Rorke"
>                 product              "G4S-16L-4F8"
>                 path_grouping_policy multibus
>         }
> }
>
> blacklist {
>         # blacklisting almost everything
>         devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
>         devnode "^hd[a-z][0-9]*"
> }
>
Check the request queue parameters for both the dm and sd devices. The parameters that could affect this are:

# cat /sys/block/<dm-x>/queue/max_sectors_kb
# cat /sys/block/<dm-x>/queue/max_segments
# cat /sys/block/<dm-x>/queue/max_segment_size

# cat /sys/block/<sdx>/queue/max_sectors_kb
# cat /sys/block/<sdx>/queue/max_segments
# cat /sys/block/<sdx>/queue/max_segment_size

You can monitor the average request size (avgrq-sz) on both the dm and sd devices using:

# iostat -xm -d /dev/dm-x /dev/sdx -t 2

Compare this with the request size while running I/O on the sd device directly. There was a patch upstream to fix block stacking limits on dm devices.

--
dm-devel mailing list
dm-devel@redhat.com
<a href="https://www.redhat.com/mailman/listinfo/dm-devel" target="_blank">https://www.redhat.com/mailman/listinfo/dm-devel</a></blockquote></div><br><br clear="all"><br>-- <br><div dir="ltr"><br><table><tbody><tr><td></td></tr></tbody></table><table style="font-family:Helvetica,'Trebuchet MS',sans-serif;font-size:9pt"><tbody><tr><td style="vertical-align:top"><p><a href="http://firstperson.is/" style="border:none;text-decoration:none" target="_blank"><img src="http://westernized.com/email_assets/fp_email_sig-fp_logo-2x.png" width="40px" height="44px" alt="First Person Logo" border="0" style="border:none;margin-right:20px"></a></p></td><td style="vertical-align:top"><p><img src="http://westernized.com/email_assets/signatures/fp_email_sig-ali_poursamadi-2x.png" width="320px" height="154px" alt="Signature" border="0"></p><p><a href="https://www.facebook.com/pages/First-Person/677554662257977" style="border:none;text-decoration:none" target="_blank"><img src="http://westernized.com/email_assets/fp_email_sig-facebook-2x.png" width="22px" height="22px" alt="Facebook" border="0"> </a><a href="http://www.linkedin.com/company/first-person" style="border:none;text-decoration:none" target="_blank"><img src="http://westernized.com/email_assets/fp_email_sig-linkedin-2x.png" width="22px" height="22px" alt="LinkedIn" border="0"> </a><a href="https://twitter.com/FirstPersonsf" style="border:none;text-decoration:none" target="_blank"><img src="http://westernized.com/email_assets/fp_email_sig-twitter-2x.png" width="22px" height="22px" alt="Twitter" border="0"></a></p></td></tr></tbody></table></div>
</div>