[dm-devel] Powerpath vs dm-multipath - two points of FUD?

Hannes Reinecke hare at suse.de
Sun Sep 14 08:39:34 UTC 2014


On 09/09/2014 06:50 PM, Rob wrote:
> Hi List,
>
> Firstly, apologies if this is a common topic and my intentions are not
> to start a flame war. I've googled extensively but haven't found
> specific information to address my queries, so I thought I would turn here.
>
> We have a rather large multi-tenant infrastructure using PowerPath.
> Since this inherently comes with increased maintenance costs
> (recompiling the module, requiring extra steps / care when upgrading
> etc.) we are looking at using dm-multipath as the de facto standard
> SAN-connection abstraction layer for installations of RHEL 7+.
>
> After discussions with our SAN Architect team, we were given the below
> points to chew over and we were met with stiff resistance to moving away
> from Powerpath. Since there was little right of reply, I'd like to run
> these points past the minds of this list to understand whether they are
> strong enough to justify a business case for keeping Powerpath over
> Multipath.
>
Hehe. PowerPath again.
Mind you, device-mapper multipathing is fully supported by EMC ...

>
> Here are a couple of reasons to stick with powerpath:
>
> * Load Balancing:
>
>   Whilst dm-multipath can make use of more than one of the paths to an
> array, i.e. with round-robin, this isn’t true load balancing.  Powerpath
> is able to examine the paths down to the array and balance the workload
> based on how busy the storage controller / ports are.  AFAIK RHEL 6 has
> added functionality to make path choices based on queue depth and
> service time, which does add some improvement over vanilla round-robin.
>
We do this with the switch to request-based multipathing.
Using one of the other load balancers (e.g. least-pending) and setting
rr_min_io to '1' will give you exactly that behaviour.
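For concreteness, a minimal /etc/multipath.conf sketch of that setup
(the device section is illustrative for an EMC VMAX; 'least-pending'
ships in the SLES kernels, while upstream 'queue-length 0' or
'service-time 0' behave similarly, and on request-based multipath the
matching knob is rr_min_io_rq rather than rr_min_io):

    devices {
        device {
            vendor        "EMC"
            product       "SYMMETRIX"
            # pick the path with the fewest outstanding I/Os,
            # re-evaluated on every request (rr_min_io = 1)
            path_selector "least-pending 0"
            rr_min_io     1
        }
    }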

>   For VMAX and CX/VNX, powerpath uses the following parameters to
> balance the paths out: pending I/Os on the path, size of I/Os, types of
> I/Os, and the paths most recently used.
>
Pending I/O is covered with the 'least-pending' I/O scheduler; I fail to
see the value in any of the others (what would be the point of switching
paths based on the _size_ of the I/O request?)
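To see which selector a map is actually using, and to apply a config
change on the fly, something along these lines works (multipath -ll
prints the active path_selector per map; the -k syntax feeds a command
to multipathd's interactive interface):

    # multipath -ll
    # multipathd -k"reconfigure"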

>   * Flakey Path Detection:
>
>   The latest versions of powerpath can proactively take paths out of
> service should they observe intermittent I/O failures (remember, any I/O
> failure can hold a thread for 30-60 seconds whilst the SCSI command
> further up the stack times out and a retry is sent).  dm-multipath
> doesn’t have functionality to remove a flakey path; paths can only be
> marked out of service on hard failure.
>
Wrong. I added flakey path detection a while back. I'll be looking
at the sources and checking the current status; it might be that I
haven't gotten around to sending it upstream.
So you _might_ need to switch to SLES :-)
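For reference, later multipath-tools releases grew tunables for exactly
this; a hedged sketch (the san_path_err_* options only exist in newer
releases, so treat the names and values as illustrative rather than as
something RHEL 7 ships today):

    defaults {
        # take a path out of service after 10 failures ...
        san_path_err_threshold      10
        # ... occurring within a window of 60 path checks ...
        san_path_err_forget_rate    60
        # ... and don't reinstate it for 120 seconds
        san_path_err_recovery_time  120
    }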

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      zSeries & Storage
hare at suse.de			      +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)



