[dm-devel] [RFC] [PATCH] add serial keyword to the weightedpath prioritizer

Christophe Varoqui christophe.varoqui at opensvc.com
Mon Aug 1 12:25:06 UTC 2016


Or we could honor arithmetic expressions like "5*alua+weightedpath", giving
users more control over preferences (and more opportunities to step on
their toes, sure).

Another idea, possibly less invasive, but less versatile:

- Merge the alua prioritizer into the weightedpath prioritizer, given that
the optimized/non-optimized and other ALUA states are already available in
the path struct and have their snprint_path_* functions and %wildcards.

- Then the weightedpath prio_args could be extended to support additive
priorities, like "alua <optimized state>:<... state> 10 serial foo 20"
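As an illustration, such a merged prioritizer might hypothetically be
configured like this; the keywords, serial and weights below are invented
for the example and do not reflect any existing multipath.conf syntax:

```
defaults {
	prio		"weightedpath"
	# hypothetical additive syntax: a matching ALUA state adds 10,
	# a matching serial adds 20, so an optimized path on the matching
	# array would end up with priority 30
	prio_args	"alua optimized 10 serial foo 20"
}
```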


On Mon, Aug 1, 2016 at 1:40 PM, Hannes Reinecke <hare at suse.de> wrote:

> On 08/01/2016 10:42 AM, Christophe Varoqui wrote:
>
>>
>>
>> On Mon, Aug 1, 2016 at 9:49 AM, Hannes Reinecke <hare at suse.de
>> <mailto:hare at suse.de>> wrote:
>>
>>     On 07/31/2016 09:26 PM, Christophe Varoqui wrote:
>>
>>         Ben, Hannes,
>>
>>         Can you review this patch, which adds a new 'serial' keyword to
>>         the weightedpath prioritizer?
>>
>>         I compile-tested it only, as I have no testing environment at
>>         hand at
>>         the moment.
>>
>>         I committed it in a separate 'weightedpath-serial' branch for now.
>>
>>
>> http://git.opensvc.com/?p=multipath-tools/.git;a=commitdiff;h=4dd16d99281104fc3504ad73626894a5c3702fb3
>>
>>         Thanks,
>>         Christophe Varoqui
>>         OpenSVC
>>
>>     Well.
>>     In general, sure, fine, I don't have any issues with that.
>>     If the customer wants to diddle with his array that way...
>>
>>     The more general problem I'm seeing is that our current two-layered
>>     priority setup (path groups with distinct priorities, and paths
>>     within them) may well lead to issues in larger and more complex
>>     scenarios.
>>
>>     ATM we already have the problem that clustered scenarios like this:
>>
>>     Storage node 1 (active):
>>       Path 1 (optimal):
>>         LUN 1, LUN 2
>>       Path 2 (non-optimal):
>>         LUN 1, LUN 2
>>
>>     Storage node 2 (passive):
>>       Path 1 (optimal):
>>         LUN 1, LUN 2
>>       Path 2 (non-optimal):
>>         LUN 1, LUN 2
>>
>>     cannot be represented properly with multipath-tools.
>>     We are forced to either
>>     a) set 'storage node 2' to 'failed', which would kill
>>        any cluster instance accessing only 'storage node 2'
>>     or
>>     b) map all priorities from 'storage node 2' to '0',
>>        thereby losing all priority information
>>
>>     Things become even more convoluted if both storage nodes are in fact
>>     accessible, or if someone would be using different transports.
>>
>> Would something like "prio alua+weightedpath" produce correct priorities
>> for the path grouping, where the priorities reported by alua are added to
>> those reported by weightedpath? That syntax extension would reduce the
>> need to develop more complex prioritizers.
>>
> Hmm.
> Allowing stacked prioritizers is a nice idea.
> But then we need to impose some ordering here; if we do not set any
> restrictions on the values of the prioritizers we end up with a jumble of
> (essentially unreadable) priorities.
> E.g. if your weightedpath returns values of '5' or '0' they'll be readily
> obscured by the alua information, which uses '5' for the non-optimized path.
>
> So if we were to go that route we would need to restrict the values of the
> prioritizers to e.g. 256, and shift the stacked prioritizer values on top
> of each other.
> E.g. with a stacked 'alua+weightedpath' we would end up with a priority of
> 0xAAWW.
> With that we can allow up to 4 levels of stacking (or 8 if we extend that
> to 64 bits), and still keep source-level compatibility with the original
> code.
> We could even restrict the permissible values for the prioritizers further;
> 16 is enough even for ALUA, and that would leave us with room for 16
> stacking levels in a 64-bit priority :-)
>
> But in general I like the idea.
>
>
> Cheers,
>
> Hannes
> --
> Dr. Hannes Reinecke                Teamlead Storage & Networking
> hare at suse.de                                   +49 911 74053 688
> SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
> GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
> HRB 21284 (AG Nürnberg)
>
