[linux-lvm] [PATCH 10/10] man: document --node option to lvchange

Zdenek Kabelac zkabelac at redhat.com
Wed Mar 20 08:45:12 UTC 2013

On 19.3.2013 18:36, Vladislav Bogdanov wrote:
> 19.03.2013 20:16, David Teigland wrote:
>> On Tue, Mar 19, 2013 at 07:52:14PM +0300, Vladislav Bogdanov wrote:
>>> And, do you have any estimate of how long it may take to have your ideas
>>> ready for production use?
>> It'll be quite a while (and the new locking scheme I'm working on will not
>> include remote command execution.)
>>> Also, as you're not satisfied with this implementation, what alternative
>>> way do you see? (calling ssh from libvirt or LVM API is not a good idea
>>> at all I think)
>> Apart from using ovirt/rhev, I'd try one of the following behind the
>> libvirt locking api: sanlock, dlm, file locks on nfs, file locks on gfs2.
> Unfortunately none of these solve the main thing I need: Allow LVM
> snapshots without breaking live VM migration :(
> Cluster-wide snapshots (with shared lock) would solve this, but I do not
> expect to see this implemented soon.
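(For reference, plugging one of the lock managers David mentions in behind libvirt is mostly a configuration change. A minimal illustrative sketch for the sanlock driver - assuming the virt-sanlock plugin is installed; paths and values here are examples, not a tested setup:)

    # /etc/libvirt/qemu.conf
    lock_manager = "sanlock"

    # /etc/libvirt/qemu-sanlock.conf
    auto_disk_leases = 1
    disk_lease_dir = "/var/lib/libvirt/sanlock"

(With auto_disk_leases enabled, libvirt acquires a sanlock lease per disk, which serializes concurrent writers across hosts sharing that lease directory.)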

Before I go any deeper into reviewing the patches myself - I'd like to
make sure I'm clear about this 'snapshot' issue.

(BTW there is already one thing which will surely not pass - the '--node' 
option for the lvm command - this would have to be done differently.)

But back to snapshots -

What would be the point of having (old, non-thinp) snapshots active at the 
same time on more than one node?

That simply would not work - since you would have to ensure that no one 
writes to the snapshot & origin on either of those nodes.

Is your code doing some transition which needs the device active on both nodes,
treating them in a read-only way?

Since the metadata for a snapshot are only parsed during the first activation 
of the snapshot, there is no way the second node could resync if you had 
written to the snapshot/origin on the first node.
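(This is why, with clustered LVM, an old-style snapshot is expected to be
activated exclusively on a single node. An illustrative command sequence -
volume group and LV names are made up, and this assumes clvmd is running:)

    # create an old-style (COW) snapshot of an origin LV
    lvcreate -s -L 1G -n snap0 vg0/origin

    # activate it exclusively on this node only ('e' = exclusive)
    lvchange -aey vg0/snap0

    # a shared activation attempt from another node should then be refused,
    # since the snapshot's COW metadata was parsed at first activation here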

So could you please describe in more detail how it's supposed to work?
