[dm-devel] [PATCH 5/7] dm: track per-add_disk holder relations in DM

Mike Snitzer snitzer at redhat.com
Thu Nov 10 18:09:20 UTC 2022


On Wed, Nov 09 2022 at 3:26pm -0500,
Christoph Hellwig <hch at lst.de> wrote:

> On Wed, Nov 09, 2022 at 10:08:14AM +0800, Yu Kuai wrote:
> >> diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> >> index 2917700b1e15c..7b0d6dc957549 100644
> >> --- a/drivers/md/dm.c
> >> +++ b/drivers/md/dm.c
> >> @@ -751,9 +751,16 @@ static struct table_device *open_table_device(struct mapped_device *md,
> >>   		goto out_free_td;
> >>   	}
> >>   -	r = bd_link_disk_holder(bdev, dm_disk(md));
> >> -	if (r)
> >> -		goto out_blkdev_put;
> >> +	/*
> >> +	 * We can be called before the dm disk is added.  In that case we can't
> >> +	 * register the holder relation here.  It will be done once add_disk
> >> +	 * has been called.
> >> +	 */
> >> +	if (md->disk->slave_dir) {
> > If device_add_disk() or del_gendisk() can run concurrently with this,
> > it seems to me that using 'slave_dir' is not safe.
> >
> > I'm not quite familiar with dm, so can we guarantee that they can't
> > run concurrently?
> 
> I assumed dm would not get itself into territory where creating /
> deleting the device could race with adding component devices, but
> digging deeper I can't find anything.  This could be done
> by holding table_devices_lock around add_disk/del_gendisk, but
> I'm not that familiar with the dm code.
> 
> Mike, can you help out on this?

Maybe :/

Underlying component devices can certainly come and go at any time,
and there is no DM code that can, or should, prevent that. All we can
do is cope with the unavailability of devices. But I'm pretty sure
that isn't the question.

I'm unclear about the specific race in question:
if open_table_device() doesn't see slave_dir, it is the first table
load. Otherwise, the DM device (and its associated gendisk) shouldn't
have been torn down while a table is actively being loaded for it. But
_where_ the code that ensures that lives is also eluding me...

You could use a big lock (table_devices_lock) to disallow changes to
DM relations while loading the table. But I wouldn't think it
necessary as long as the gendisk's lifecycle is protected against
table loads (or other concurrent actions, like table load vs DM device
removal). Again, more code inspection is needed to page all this back
into my head.
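
If it were needed, roughly what I'd mean is the following untested
sketch (assuming add_disk() happens in dm_setup_md_queue(), and noting
that open_table_device() already runs with md->table_devices_lock held
via dm_get_table_device()):

	/*
	 * Untested sketch: serialize disk creation against table loads
	 * so the md->disk->slave_dir check in open_table_device()
	 * cannot race with add_disk().
	 */
	mutex_lock(&md->table_devices_lock);
	r = add_disk(md->disk);
	mutex_unlock(&md->table_devices_lock);
	if (r)
		return r;

del_gendisk() would need the same treatment on the teardown side.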

The race concern aside:
I am concerned that your redundant bd_link_disk_holder() (first in
open_table_device and later in dm_setup_md_queue) will result in a
dangling refcount (e.g. an increase of 2 when it should only be 1),
given bd_link_disk_holder will gladly just bump its holder->refcnt if
bd_find_holder_disk() returns an existing holder. This would occur if
a DM table is already loaded (so the DM device's gendisk exists) and a
new DM table is being loaded.
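
For reference, the relevant logic in block/holder.c looks roughly like
this (paraphrased from bd_link_disk_holder(), not verbatim):

	holder = bd_find_holder_disk(bdev, disk);
	if (holder) {
		/* relation already exists: only the refcount is bumped */
		holder->refcnt++;
		goto out_unlock;
	}

So linking the same bdev/disk pair twice, with only one matching
bd_unlink_disk_holder() on teardown, leaves refcnt at 1 and the holder
(and its sysfs symlinks) dangling.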

Mike


