[dm-devel] staged dm_internal_{suspend, resume} related changes for wider review

Mikulas Patocka mpatocka at redhat.com
Fri Nov 7 16:20:30 UTC 2014



On Wed, 5 Nov 2014, Mike Snitzer wrote:

> On Wed, Nov 05 2014 at  9:37am -0500,
> Mikulas Patocka <mpatocka at redhat.com> wrote:
> 
> > The patch series introduces two suspend mechanisms, and it is unclear how 
> > they should interact with each other.
> 
> And this point is not correct.  As you know, the dm_internal_suspend and
> dm_internal_resume interface predates any of my changes.
> 
> That existing interface was extended to be (mostly) fully formed
> equivalents of dm_suspend() and dm_resume().
> 
> I say "mostly" because dm_internal_resume() doesn't call into the targets'
> resume hooks, since no existing caller (dm-stats or dm-thinp) needs it.
> But obviously dm_resume() does need to, so it passes @resume_targets
> as true to __dm_resume().
> 
> I'm not trying to suggest there is a bug or bugs in this new code (you
> already pointed out the locking issue across ioctls that I fixed).
> 
> But a bug doesn't implicitly mean this is an imperfect way forward --
> if/when a bug is found we'll deal with it.  So feel free to pore over
> this code to see if there is a bug or bugs.  I really do welcome your
> review -- I would just like technical issues to be the focus of any
> technical review.

Problems with that patch set:

1. You walk all thin targets in pool_presuspend and call internal_suspend 
on them, and then in pool_resume you call internal_resume on them. 
Between the calls to pool_presuspend and pool_resume, dm-thin devices may 
be added or removed, resulting in unbalanced suspend/resume calls.

2. You walk all thin targets and call internal_suspend on them. If two 
thin targets are in one dm table, internal_suspend is called twice on the 
same md device. It is also called twice on the same md device if the 
device has both an active and an inactive table containing a thin target.

3. The device may be suspended internally by the pool's presuspend and by 
statistics at the same time - the code doesn't handle that:

	if (WARN_ON(dm_suspended_internally_md(md)))
		goto out; /* disallow nested internal suspends! */

4. When pool_presuspend is called, md->suspend_lock is already held. 
Taking md->suspend_lock on another device results in a lockdep warning 
(see the sketch below).
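To make these problems concrete, the pattern I'm objecting to has roughly 
this shape - the active_thins walk and the tc->thin_md field are my 
approximation of however the patches reach each thin device's md, and 
locking of the list is elided:

	static void pool_presuspend(struct dm_target *ti)
	{
		struct pool_c *pt = ti->private;
		struct pool *pool = pt->pool;
		struct thin_c *tc;

		/* (4) the pool's own md->suspend_lock is already held here */
		list_for_each_entry(tc, &pool->active_thins, list) {
			/*
			 * (4) takes tc->thin_md->suspend_lock under ours ->
			 *     lockdep warning
			 * (2) hits the same md twice if one table holds two
			 *     thin targets, or if both the active and the
			 *     inactive table contain a thin target
			 * (3) trips WARN_ON(dm_suspended_internally_md(md))
			 *     if dm-stats already suspended this md internally
			 */
			dm_internal_suspend(tc->thin_md);
		}
	}

	static void pool_resume(struct dm_target *ti)
	{
		struct pool_c *pt = ti->private;
		struct pool *pool = pt->pool;
		struct thin_c *tc;

		/*
		 * (1) thin devices created or deleted since pool_presuspend
		 *     make these resumes unbalanced with the suspends above
		 */
		list_for_each_entry(tc, &pool->active_thins, list)
			dm_internal_resume(tc->thin_md);
	}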


For reasons 1 and 2, I wouldn't deal with "thin" targets at all - they 
may be created or deleted independently of the pool's status. Instead, we 
should block all active bios inside the pool - the bios are already 
registered in a dm_deferred_set or in the prison, so all you need to do 
is set a flag in the pool's presuspend method that causes all new bios to 
be queued, and then wait until the prison is empty and the counters in 
the deferred sets reach zero.
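
What I have in mind is roughly this - a sketch only: POOL_SUSPENDING, 
pool->flags, pool->quiesce_wait and the two drain predicates don't exist 
under these names, and dm_bio_prison/dm_deferred_set would need small 
extensions to export such predicates and to wake the waiter:

	static void pool_presuspend(struct dm_target *ti)
	{
		struct pool_c *pt = ti->private;
		struct pool *pool = pt->pool;

		/*
		 * thin_bio_map() tests this flag and queues every new bio
		 * instead of letting it enter the prison or a deferred set.
		 */
		set_bit(POOL_SUSPENDING, &pool->flags);
		smp_mb__after_atomic();

		/*
		 * Every in-flight bio is already accounted for either in the
		 * bio prison or in a deferred set, so draining both quiesces
		 * the pool without touching any thin device's md or its
		 * suspend_lock.
		 */
		wait_event(pool->quiesce_wait,
			   prison_is_empty(pool->prison) &&
			   ds_is_quiesced(pool->shared_read_ds) &&
			   ds_is_quiesced(pool->all_io_ds));
	}

	static void pool_resume(struct dm_target *ti)
	{
		struct pool_c *pt = ti->private;
		struct pool *pool = pt->pool;

		clear_bit(POOL_SUSPENDING, &pool->flags);
		requeue_held_back_bios(pool);	/* hypothetical: reissue queued bios */
	}

This never takes another device's suspend_lock and doesn't care whether 
thin devices come or go while the pool is suspended, so problems 1-4 
don't arise.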


Mikulas



