[linux-lvm] lvmlockd: about the limitation on lvresizing the LV active on multiple nodes

David Teigland teigland at redhat.com
Tue Jan 2 17:10:34 UTC 2018


> * resizing an LV that is active in the shared mode on multiple hosts
> 
> It seems a big limitation to use lvmlockd in cluster:

Only in the case where the LV is active on multiple hosts at once,
i.e. a cluster fs, which is less common than a local fs.

In the general case, it's not safe to assume that an LV can be modified by
one node while it's being used by others, even when all of them hold
shared locks on the LV, so you'd want to prevent that by default.
Exceptions exist, but whether an exception is ok will likely depend on
what the specific change is, what application is using the LV, and whether
that application can tolerate such a change.
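
As a rough illustration of the conflict (host names are made up, and the
exact error output depends on the lvm2 version): the resize path wants an
exclusive LV lock from lvmlockd, which can't be granted while another
host holds the shared lock.

   # on host2: the LV is already active in shared mode,
   # so host2 holds a shared (sh) LV lock in lvmlockd
   lvchange -asy VG/LV

   # on host1: try to resize; lvextend needs an exclusive (ex)
   # LV lock, which conflicts with host2's sh lock, so the
   # command is refused
   lvextend -L+1G VG/LV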

One (perhaps the only?) valid exception I know about is extending an LV
while it's being used under a cluster fs (any cluster fs?).

(In reference to your later email, this is not related to lock queueing,
but rather to basic ex/sh lock incompatibility, and when/how to allow
exceptions to that.)

The simplest approach I can think of to allow lvextend under a cluster fs
would be a procedure like the following (a consolidated sketch appears
after the steps):

1. on one node: lvextend --lockopt skip -L+1G VG/LV

   That option doesn't exist, but it illustrates the point that some new
   option could be used to skip the incompatible LV locking in lvmlockd.

2. on each node: lvchange --refresh VG/LV

   This updates dm on each node with the new device size.

3. gfs2_grow VG/LV or equivalent

   At this point the fs on any node can begin accessing the new space.
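
Putting the steps together, a consolidated sketch might look like this
(node names and the mount point are made up, gfs2 is used as the example
cluster fs, and --lockopt skip is the hypothetical option from step 1):

   # step 1, on node1 only: extend the LV, skipping the
   # incompatible LV lock check (hypothetical option)
   lvextend --lockopt skip -L+1G VG/LV

   # step 2, on every node with the LV active (node1, node2, ...):
   # reload the dm table so each kernel sees the new size
   lvchange --refresh VG/LV

   # step 3, on any one node: grow the fs into the new space
   # (gfs2_grow takes the mount point or device; other cluster
   # fs tools have equivalents)
   gfs2_grow /mnt/gfs2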



