[dm-devel] [PATCH 2/2] multipath: attempt at common multipath.rules

Martin Wilck mwilck at suse.com
Wed Jun 28 13:00:08 UTC 2017


On Tue, 2017-06-27 at 23:41 +0200, Martin Wilck wrote:
> 
> > The other change is the redhat multipath rules remove the partition
> > device nodes for devices claimed by multipath. The udev rule will only
> > do this one time (both to keep from running partx on every event, and
> > so that if users manually reread the partition table, we don't keep
> > removing them when clearly they are wanted). Redhat does this because
> > we had multiple customer issues where they were using the scsi
> > partitions instead of the kpartx devices. Obviously, with setting the
> > partition devices to not ready and clearing their fs_type, this isn't
> > essential, but it has helped make customers do the right thing.
> 
> Isn't this racy? You call "partx -d" on the parent device (e.g. sda).
> At this point in time, the kernel will already have detected the
> partitions and "add" uevents for them are likely to follow up quickly.
> You generate "remove" events that may race with the "add" - or am I
> overlooking something?
> 
> I'd feel better if we'd call "partx -d" while processing the "add"
> uevent for the _partitions_, ideally late in that process. That would
> make (almost) sure that "add" was finished when "remove" was
> generated.
> See below.
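
To make that concrete, the rule I had in mind would have been roughly
the following (hypothetical, untested sketch; it assumes the whole-disk
path device is marked with a property like DM_MULTIPATH_DEVICE_PATH
when it is claimed by multipath):

  # delete the partition node late in its own "add" event, but only
  # if the parent whole-disk device has been claimed by multipath
  ACTION=="add", ENV{DEVTYPE}=="partition", \
      IMPORT{parent}="DM_MULTIPATH_DEVICE_PATH"
  ACTION=="add", ENV{DEVTYPE}=="partition", \
      ENV{DM_MULTIPATH_DEVICE_PATH}=="1", \
      RUN+="/usr/sbin/partx -d --nr $env{PARTN} /dev/$parent"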

It turns out that my idea doesn't work quite as smoothly as your
approach, because with it, users can't simply run "partx -a /dev/sd$X"
to get the deleted partitions back. They need an additional, manual
"echo change >/sys/class/block/sd$X/uevent" first to make "partx -a"
work. So, if you can confirm that uevent races are not a problem, your
technique seems to be the more user-friendly one.
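
For reference, the manual sequence to get the partitions back would be
something like this (using sda just as an example):

  # trigger a change event on the whole device first ...
  echo change > /sys/class/block/sda/uevent
  # ... and only then does re-adding the partition nodes work
  partx -a /dev/sda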


> > +LABEL="check_kpartx"
> > +
> > +IMPORT{db}="DM_MULTIPATH_NEED_KPARTX"
> 
> As remarked above, I'd vote for removing "MULTIPATH" from this
> property name.
> 
> > +ENV{DM_UDEV_PRIMARY_SOURCE_FLAG}!="1", IMPORT{db}="DM_SUBSYSTEM_UDEV_FLAG1"
> > +ENV{DM_SUBSYSTEM_UDEV_FLAG1}=="1", GOTO="end_mpath"
> > +ACTION!="change", GOTO="end_mpath"
> > +ENV{DM_UUID}!="mpath-?*", GOTO="end_mpath"
> > +ENV{DM_ACTIVATION}=="1", ENV{DM_MULTIPATH_NEED_KPARTX}="1"
> > +ENV{DM_SUSPENDED}=="1", GOTO="end_mpath"
> > +ENV{DM_ACTION}=="PATH_FAILED", GOTO="end_mpath"
> 
> The previous code had ENV{DM_ACTION}=="PATH_FAILED|PATH_REINSTATED";
> why did you drop the latter?
> Anyway, I think neither check is needed, because in 11-dm-mpath.rules,
> "DM_ACTIVATION" is set to "0" in the "reload if paths are
> lost/recovered" case. I think the cleaner way to express the logic
> here would be:
> 
> # don't rerun kpartx on "reload" events (see 11-dm-mpath.rules)
> ENV{DM_ACTIVATION}=="0", GOTO="end_mpath"
> # don't run kpartx when there are no paths
> ENV{MPATH_DEVICE_READY}=="0", GOTO="end_mpath"

The ENV{DM_ACTIVATION}=="0" check can be left out, too, I think. I'll
soon post a patch with what I think the setup should be.
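
In rough outline, I'm thinking of something like the following for the
check_kpartx part (untested sketch; the property name is just an
illustration, following the rename suggested above):

  LABEL="check_kpartx"

  IMPORT{db}="DM_NEED_KPARTX"
  ENV{DM_UDEV_PRIMARY_SOURCE_FLAG}!="1", IMPORT{db}="DM_SUBSYSTEM_UDEV_FLAG1"
  ENV{DM_SUBSYSTEM_UDEV_FLAG1}=="1", GOTO="end_mpath"
  ACTION!="change", GOTO="end_mpath"
  ENV{DM_UUID}!="mpath-?*", GOTO="end_mpath"
  ENV{DM_ACTIVATION}=="1", ENV{DM_NEED_KPARTX}="1"
  ENV{DM_SUSPENDED}=="1", GOTO="end_mpath"
  # don't run kpartx when there are no usable paths (see 11-dm-mpath.rules)
  ENV{MPATH_DEVICE_READY}=="0", GOTO="end_mpath"
  # ... followed by the actual kpartx invocation as in your patch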

Best
Martin
-- 
Dr. Martin Wilck <mwilck at suse.com>, Tel. +49 (0)911 74053 2107
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)



