[dm-devel] [PATCH] libmultipath: check if adopt_path() really added current path
Martin Wilck
mwilck at suse.com
Thu Feb 4 11:14:31 UTC 2021
On Thu, 2021-02-04 at 15:41 +0800, lixiaokeng wrote:
>
>
> On 2021/2/3 21:14, Martin Wilck wrote:
> > On Wed, 2021-02-03 at 17:42 +0800, lixiaokeng wrote:
> > >
> > >
> > > On 2021/2/3 16:14, Martin Wilck wrote:
> > > > Is this also a Tested-by:?
> > > > IOW, did it fix your issue?
> > >
> > > Yes, it solves the crash. But there is another issue.
> > >
> > > multipath.conf
> > > defaults {
> > >     find_multipaths no
> > > }
> > >
> > > [root at localhost coredump]# multipathd add path sdb
> > > fail
> > > [root at localhost coredump]# multipath -ll
> > > [root at localhost coredump]# multipathd add path sdb
> > > ok
> > > [root at localhost coredump]# multipath -ll
> > > 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1 dm-3 QEMU,QEMU HARDDISK
> > > size=1.0G features='0' hwhandler='0' wp=rw
> > > `-+- policy='service-time 0' prio=1 status=enabled
> > > `- 2:0:0:1 sdb 8:16 active ready running
> > >
> > > I added the local path twice. The first attempt fails while
> > > the second succeeds.
> >
> > More details please. What exactly were you doing? Was this a
> > regression caused by my patch? Please provide multipathd -v3 logs.
>
> I did nothing, just ran "multipathd add path sdb" twice.
> Here I do that again with multipath -v3. The attachment shows all
> messages.
This is a misunderstanding, sorry for being unclear. What I meant was
the logs of *multipathd* running in the background with -v3. IOW, the
journal or syslog or whatever showing what went wrong the first time
around when you tried to add the disk.
But I was able to reproduce the issue, so I can do this myself.
1st time:
994.196771 | sdb: prio args = "" (setting: multipath internal)
994.196781 | sdb: const prio = 1
994.196831 | QEMU_HARDDISK_QM00007: user_friendly_names = no (setting: multipath internal)
994.196982 | QEMU_HARDDISK_QM00007: alias = QEMU_HARDDISK_QM00007 (setting: default to WWID)
994.197053 | adopt_paths: pathinfo failed for sdb
994.197065 | sdb: orphan path, failed to add path
2nd time:
1012.157422 | sdb: path already in pathvec
Here, cli_add_path() calls ev_add_path() right away:
1012.157433 | QEMU_HARDDISK_QM00007: user_friendly_names = no (setting: multipath internal)
1012.157440 | QEMU_HARDDISK_QM00007: alias = QEMU_HARDDISK_QM00007 (setting: default to WWID)
1012.157688 | sdb: detect_checker = yes (setting: multipath internal)
...
1012.158342 | sdb: ownership set to QEMU_HARDDISK_QM00007
The problem here is, again, that we don't handle blacklisting by
property consistently.
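For context, "blacklisting by property" refers to the property keyword in multipath.conf, which filters devices by their udev properties. A setup along the lines of the compiled-in default looks roughly like this; the exact regex here is illustrative, so check multipath.conf(5) for the defaults shipped with your version:

```
blacklist_exceptions {
    property "(SCSI_IDENT_|ID_WWN)"
}
```

Devices that expose none of the matching udev properties are filtered out, and if multipath and multipathd apply this filter at different points, a path can be rejected in one code path and accepted in another.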
Please apply my recent series "consistent behavior of
filter_property()". It should fix the issue (did so for me).
>
> > Also, you're aware that "find_multipaths no" is discouraged?
> > It leads to inconsistent behavior between multipath and multipathd.
> >
> Local disks are handled somewhat differently between 0.8.5 and
> 0.7.7. I was just testing that.
Sure. I just wanted to make you aware that you are using a possibly
dangerous setting.
Thank you for your hard work and your valuable contributions!
Regards
Martin