[dm-devel] Improve processing efficiency for addition and deletion of multipath devices

Martin Wilck mwilck at suse.com
Tue Nov 29 08:16:53 UTC 2016


On Tue, 2016-11-29 at 09:10 +0100, Zdenek Kabelac wrote:
> On 29.11.2016 at 09:02, Martin Wilck wrote:
> > On Tue, 2016-11-29 at 07:47 +0100, Hannes Reinecke wrote:
> > > On 11/28/2016 07:46 PM, Benjamin Marzinski wrote:
> > > > On Thu, Nov 24, 2016 at 10:21:10AM +0100, Martin Wilck wrote:
> > > > > On Fri, 2016-11-18 at 16:26 -0600, Benjamin Marzinski wrote:
> > > > > 
> > > > > > At any rate, I'd rather get rid of the gazillion waiter
> > > > > > threads first.
> > > > > 
> > > > > Hm, I thought the threads are good because they keep one
> > > > > unresponsive device from stalling everything?
> > > > 
> > > > There is work making dm events pollable, so that you can wait
> > > > for any number of them with one thread. At the moment, once we
> > > > get an event, we lock the vecs lock, which pretty much keeps
> > > > everything else from running, so this doesn't really change
> > > > that.
> > > > 
> > > 
> > > Which again leads me to the question:
> > > Why are we waiting for dm events?
> > > The code handling them is pretty arcane, and from what I've seen
> > > there is nothing in there that we wouldn't be informed about via
> > > other mechanisms (path checker, uevents).
> > > So why do we still bother with them?
> > 
> > I was asking myself the same question. From my inspection of the
> > kernel code, there are two code paths that trigger a dm event but
> > no uevent (bypass_pg() and switch_pg_num(), both related to path
> > group switching). If these are covered by the path checker, I see
> > no point in waiting for DM events. But of course, I may be missing
> > something.
> > 
> 
> Processing of 'dm' events should probably be delegated to 'dmeventd',
> which is a daemon that already solves the problem of waiting for an
> event.
> 
> The plugin just takes the action.
> 
> IMHO there is nothing simpler you could have.
> 
> It's then up to dmeventd to maintain the best 'connection' with the
> kernel and its events.

But that would simply move the "gazillion waiter threads" from
multipathd to dmeventd, right? And it would introduce another boot
sequence dependency for multipathd; I'm not sure that's desirable.
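
For illustration, here is a rough sketch of how the pollable interface
Ben mentions above might be consumed from a single thread. It assumes a
DM_DEV_ARM_POLL-style ioctl on /dev/mapper/control (the name and
details below are assumptions about that not-yet-merged work), and it
leaves out error handling and the per-map event-counter bookkeeping a
real implementation would need:

/*
 * Sketch only: wait for dm events from a single thread via a pollable
 * control fd.  Assumes a kernel that provides DM_DEV_ARM_POLL in
 * <linux/dm-ioctl.h>.
 */
#include <fcntl.h>
#include <poll.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dm-ioctl.h>

/* ask the kernel to wake up poll() on the next dm event */
static int arm_poll(int ctl_fd)
{
	struct dm_ioctl dmi;

	memset(&dmi, 0, sizeof(dmi));
	dmi.version[0] = DM_VERSION_MAJOR;
	dmi.version[1] = DM_VERSION_MINOR;
	dmi.version[2] = DM_VERSION_PATCHLEVEL;
	dmi.data_size = sizeof(dmi);
	return ioctl(ctl_fd, DM_DEV_ARM_POLL, &dmi);
}

int wait_for_dm_events(void)
{
	int ctl_fd = open("/dev/mapper/control", O_RDWR);
	struct pollfd pfd;

	if (ctl_fd < 0)
		return -1;
	pfd.fd = ctl_fd;
	pfd.events = POLLIN;

	for (;;) {
		/* arm first, so no event slips in between checking and poll() */
		if (arm_poll(ctl_fd) < 0)
			break;
		/*
		 * Here: walk all known maps, compare their event counters
		 * (e.g. from DM_TABLE_STATUS) against cached values, and
		 * handle the ones that changed -- all from this one thread.
		 */
		if (poll(&pfd, 1, -1) < 0)
			break;
	}
	close(ctl_fd);
	return -1;
}

The point of arming the poll before walking the maps is that an event
arriving while we are still processing is not lost; the next poll()
simply returns immediately.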

Regards
Martin

-- 
Dr. Martin Wilck <mwilck at suse.com>, Tel. +49 (0)911 74053 2107
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)



