[dm-devel] rr_min_io vs rr_min_io_rq on 6.5

Vaknin, Rami Rami.Vaknin at emc.com
Mon Oct 27 09:28:33 UTC 2014


Hey again, I appreciate your help,

I ran a few tests over the weekend. With the queue-length path_selector, various rr_min_io_rq values result in the same performance, while rr_min_io does affect performance.
The opposite happens with the round-robin path_selector: different rr_min_io_rq values result in different performance, while changes to rr_min_io do not affect performance.

I still wonder why it behaves this way; I'm still unclear on the following:

What does the "rr" prefix of "rr_min_io" stand for? If it is "round robin", is that a legacy name that now applies to all path_selectors?
Is the dmsetup table format the same for all path_selectors?
Where can I find the per-path counters? I'd like to make sure the path_selector works as defined; maybe my test does not use the paths well, or there is an imbalance issue.
Does it matter what block size I use? What happens with block sizes bigger than 512k?
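One place to look for per-path counters is the kernel's per-device statistics in sysfs: each path is an sd device, and /sys/block/<dev>/stat exposes its completed-IO counts (field 1 is reads completed, field 5 is writes completed, per Documentation/block/stat.txt). A minimal sketch, assuming that layout; the device names are placeholders, and the demo line is a captured sample, not real output from my system:

```shell
# Print "<dev> reads=<n> writes=<n>" from the fields of one
# /sys/block/<dev>/stat line (field 1 = reads completed,
# field 5 = writes completed).
parse_stat() {
    dev=$1; shift
    printf '%s reads=%s writes=%s\n' "$dev" "$1" "$5"
}

# On a live system you would feed real counters, e.g.:
#   for dev in sdb sdc sdd sde; do
#       parse_stat "$dev" $(cat /sys/block/$dev/stat)
#   done
# Demo with a captured sample stat line:
parse_stat sdb 8290 1029 158479 4853 1090 936 24024 507 0 5040 5360
```

Sampling these counters before and after a test run, and diffing, would show whether the load is actually balanced across the paths.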

Thanks, Rami.



-----Original Message-----
From: Vaknin, Rami 
Sent: Wednesday, October 22, 2014 11:29 AM
To: 'device-mapper development'
Cc: 'Benjamin Marzinski'
Subject: RE: [dm-devel] rr_min_io vs rr_min_io_rq on 6.5

Thanks Ben, that was very helpful.
I can see now that only changes to the rr_min_io_rq value are reflected in the dmsetup table, no matter which path_selector I use.
I also see that rr_min_io_rq influences performance only with the round-robin path_selector, while only rr_min_io influences performance with the queue-length path_selector.


-----Original Message-----
From: dm-devel-bounces at redhat.com [mailto:dm-devel-bounces at redhat.com] On Behalf Of Benjamin Marzinski
Sent: Wednesday, October 22, 2014 12:00 AM
To: device-mapper development
Subject: Re: [dm-devel] rr_min_io vs rr_min_io_rq on 6.5

On Tue, Oct 21, 2014 at 08:03:09AM +0000, Vaknin, Rami wrote:
>    Hi,
> 
>     
> 
>    I'm working on 6.5 running with 2.6.32-431.el6.x86_64, my multipath.conf
>    if configured with both rr_min_io and rr_min_io_rq, running 64k 100% write
>    load to my storage system using 8 paths over FC ("queue-length" policy).
> 
>     
> 
>    I found [1] that rr_min_io is deprecated in favor of rr_min_io_rq, but
>    playing with these values shows that different rr_min_io_rq values have no
>    influence on performance in my env; however, different values of rr_min_io
>    have a huge influence on the average latency I get for the same iops.
> 
>     
> 
>    Am I missing something?

I'm not sure what you're seeing. As long as your dm_multipath module is at least version 1.1.0, you should be using rr_min_io_rq, and with that kernel, it definitely should be.  You can check by rmmod'ing and modprobing the dm_multipath module and checking /var/log/messages. You should see a line like

kernel: device-mapper: multipath: version 1.6.0 loaded
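If you want to pull just the version number out of the log, something like the following works; the sed pattern is an assumption based on the sample message above, and here it is demonstrated against that captured line rather than a live /var/log/messages:

```shell
# Extract the dm-multipath target version from a kernel log line.
# On a live system you would feed it from the log, e.g.:
#   grep 'device-mapper: multipath: version' /var/log/messages
# Demo against the sample line quoted above:
line='kernel: device-mapper: multipath: version 1.6.0 loaded'
version=$(printf '%s\n' "$line" |
    sed -n 's/.*multipath: version \([0-9.]*\) loaded.*/\1/p')
echo "dm-multipath target version: $version"
```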

You can also check exactly what changing rr_min_io and rr_min_io_rq is doing.  All they do is set the number of IOs to send down a path before switching.  They do this by changing the value in the table you pass to device-mapper.  You can see that table by running

# dmsetup table <devname>

You will see something like this

# dmsetup table mpathaa
0 204800 multipath 0 0 3 1 round-robin 0 1 1 8:112 5 round-robin 0 1 1 8:144 5 round-robin 0 1 1 8:128 5

Here "round-robin 0 1 1 8:112 5" is the setup of a single path group with one path.  If this were a multibus device, with all paths in one path group, it would look like

# dmsetup table mpathaa
0 204800 multipath 0 0 1 1 round-robin 0 3 1 8:112 5 8:144 5 8:128 5

After each device major:minor there is a number (5 in this case). That's the number of IOs to send to that path before switching.  If multipath is using rr_min_io, this will be whatever you set rr_min_io to; if multipath is using rr_min_io_rq, it will be whatever you set rr_min_io_rq to.  In my case, I was running these commands on a RHEL-6.3 machine with rr_min_io_rq set to 5 (just to make it stand out from all the 1s in the table for this example, not because it gives better performance).
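To make the per-path repeat count easy to eyeball, you can pick the "major:minor count" pairs out of a table line with awk. A small sketch, run here against the multibus example above (a captured line, not live output); it assumes one path argument per path, as in these tables:

```shell
# Print "path <major:minor> repeat=<n>" for each path in a
# captured `dmsetup table` line. On a live system:
#   dmsetup table mpathaa | awk '...'
table='0 204800 multipath 0 0 1 1 round-robin 0 3 1 8:112 5 8:144 5 8:128 5'
echo "$table" | awk '{
    for (i = 1; i <= NF; i++)
        if ($i ~ /^[0-9]+:[0-9]+$/)    # a device major:minor
            printf "path %s repeat=%s\n", $i, $(i + 1)
}'
```

If changing rr_min_io_rq changes the trailing numbers here and changing rr_min_io does not, you know which parameter your multipath is actually honoring.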

Try this and let me know what you see.

-Ben

> 
>     
> 
>     
> 
>    [1]
>    
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux
> /6/html/DM_Multipath/MPIO_Overview.html
> 
>     
> 
>    device-mapper-libs-1.02.79-8.el6.x86_64
> 
>    device-mapper-event-libs-1.02.79-8.el6.x86_64
> 
>    device-mapper-persistent-data-0.2.8-2.el6.x86_64
> 
>    device-mapper-1.02.79-8.el6.x86_64
> 
>    device-mapper-event-1.02.79-8.el6.x86_64
> 
>    device-mapper-multipath-libs-0.4.9-72.el6_5.3.x86_64
> 
>    device-mapper-multipath-0.4.9-72.el6_5.3.x86_64
> 
>     

> --
> dm-devel mailing list
> dm-devel at redhat.com
> https://www.redhat.com/mailman/listinfo/dm-devel




