<div>Hi,</div>
<div> </div>
<div>From my understanding, any change request (LV, VG, PV, ...) should be blocked as long as a lock is held by another live node in the cluster.</div>
<div> </div>
<div>As I understand it, the "exclusive" flag was there from the start to address this.</div>
<div> </div>
<div>I think there is a sort of misconception in LVM2, maybe due to the fact that many people assume a cluster is necessarily a "share-everything" infrastructure (à la VMS).</div>
<div> </div>
<div>That is the right approach for clustering file servers (NFS, CIFS), web servers, etc., where one takes advantage of load-balancing user sessions across multiple nodes.</div>
<div> </div>
<div>The other use case for clustering is running "single-instance" databases that, by design (Oracle RAC excepted), are not meant to run across multiple nodes.</div>
<div> </div>
<div>In this case, only one node holds the database instance and all of its storage, and putting it on a "share-everything" cluster based on a cluster filesystem (GFS) would imply:</div>
<div> </div>
<div> - Performance penalty: every storage I/O has to go through the FS lock manager before it can be executed.</div>
<div> --> Managing the locks at a lower level (VG and LV) would not have this cost: once the VG is exclusively activated, no other node can remove the lock, and every I/O is done on a regular FS (ext3, for instance) without any lock management.</div>
<div> - Safety issue: to bypass this locking overhead, one would have to run the clustered FS without locks, which will almost certainly lead to FS corruption.</div>
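<div> </div>
<div>For instance, the exclusive-activation failover scenario above can be sketched with the stock LVM commands (the VG/LV names and mount point here are made up; this assumes clvmd is running with working DLM locking, which, as discussed below in this thread, is not always the case on RHEL 5.3):</div>
<div> </div>
<pre>
# node1: activate the VG with a cluster-wide exclusive lock
vgchange -aey VGX
mount /dev/VGX/lvoly /data        # plain ext3, no cluster FS locking needed

# node2: should be refused while node1 holds the exclusive lock
vgchange -aey VGX                 # expected to fail with a locking error

# node2, after node1 has crashed and been fenced: take over
vgchange -aey VGX
mount /dev/VGX/lvoly /data
</pre>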
<div> </div>
<div>Brem</div>
<p><br><br></p>
<div><span class="gmail_quote">2009/7/28, Xinwei Hu <<a href="mailto:hxinwei@gmail.com">hxinwei@gmail.com</a>>:</span></div>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">Hi Brem,<br><br>I guess the cause of the problem is using 'lvchange -ay'.<br><br>Clvmd actually does lock conversion underlying. So when you tried<br>
'lvchange -ay',<br>the exclusive lock will be converted to a non-exclusive lock. And that means<br>the second try will succeed anyway.<br><br>2009/7/28 brem belguebli <<a href="mailto:brem.belguebli@gmail.com">brem.belguebli@gmail.com</a>>:<br>
> Hi Rafael,<br>> On RHEL 5.3, locks thru DLM aren't reliable all the time, I've been able to<br>> activate a VG on a node of the cluster even if it was already activated<br>> exclusively on another node.<br>
> Also, I've been able to activate a LV on a node not holding the exclusive<br>> lock on the VG by typing 2 times lvchange -a y /dev/VGX/lvoly .<br>> First try lvchange tells you there is a lock, second try activates it.<br>
> Checking dlm debug (mount -t debugfs debug /debug) /debug/dlm/clvmd_locks<br>> gives a lock for nodeid 0 ....<br>> Brem<br>><br>> 2009/7/27 Rafael Micó Miranda <<a href="mailto:rmicmirregs@gmail.com">rmicmirregs@gmail.com</a>><br>
>><br>>> Hi Brem<br>>><br>>> So, does it work successfully? I made some testing before I submitted it<br>>> to the list and AFAIK i found no errors.<br>>><br>>> What do you mean exactly with "some CLVM strange behaviours"? Could you<br>
>> be more specific?<br>>><br>>> I'm not subscribed to linux-lvm, please keep us informed through this<br>>> list.<br>>><br>>> Thanks in advance. Cheers,<br>>><br>>> Rafael<br>
>><br>>> El lun, 27-07-2009 a las 21:02 +0200, brem belguebli escribió:<br>>> > Hi Rafael,<br>>> ><br>>> > It works fine, well at least when not hiting some CLVM strange<br>>> > behaviours, that I'm able to replay by hand, so your script is<br>
>> > allright.<br>>> ><br>>> > I'll post to linux-lvm what I could see.<br>>> ><br>>> > Brem<br>>> ><br>>> ><br>>> > 2009/7/21, brem belguebli <<a href="mailto:brem.belguebli@gmail.com">brem.belguebli@gmail.com</a>>:<br>
>> > Hola Rafael,<br>>> ><br>>> > Thanks a lot, that'll avoid me going from scratch.<br>>> ><br>>> > I'll have a look at them and keep you updated.<br>
>> ><br>>> > Brem<br>>> ><br>>> ><br>>> ><br>>> > 2009/7/21, Rafael Micó Miranda <<a href="mailto:rmicmirregs@gmail.com">rmicmirregs@gmail.com</a>>:<br>
>> > Hi Brem,<br>>> ><br>>> > El mar, 21-07-2009 a las 16:40 +0200, brem belguebli<br>>> > escribió:<br>>> > > Hi,<br>
>> > ><br>>> > > That's what I 'm trying to do.<br>>> > ><br>>> > > If you mean lvm.sh, well, I've been playing with it,<br>
>> > but it does some<br>>> > > "sanity" checks that are wierd<br>>> > > 1. It expects HA LVM to be setup (why such<br>>> > check if we want to<br>
>> > > use CLVM).<br>>> > > 2. it exits if it finds a CLVM VG (kind of<br>>> > funny !)<br>>> > > 3. it exits if the lvm.conf is newer<br>
>> > than /boot/*.img (about this<br>>> > > one, we tend to prevent the cluster from<br>>> > automatically<br>>> > > starting ...)<br>
>> > > I was looking to find some doc on how to write my<br>>> > own resources, ie<br>>> > > CLVM resource that checks if the vg is clustered, if<br>
>> > so by which node<br>>> > > is it exclusively held, and if the node is down to<br>>> > activate<br>>> > > exclusively the VG.<br>
>> > ><br>>> > > If you have some good links to provide me, that'll<br>>> > be great.<br>>> > ><br>>> > > Thanks<br>
>> > ><br>>> > ><br>>> > > 2009/7/21, Christine Caulfield<br>>> > <<a href="mailto:ccaulfie@redhat.com">ccaulfie@redhat.com</a>>:<br>
>> > > On 07/21/2009 01:11 PM, brem belguebli<br>>> > wrote:<br>>> > > Hi,<br>>> > > When creating the VG by default<br>
>> > clustered, you<br>>> > > implicitely assume that<br>>> > > it will be used with a clustered FS<br>>> > on top of it (gfs,<br>
>> > > ocfs, etc...)<br>>> > > that will handle the active/active<br>>> > mode.<br>>> > > As I do not intend to use GFS in<br>
>> > this particular case,<br>>> > > but ext3 and raw<br>>> > > devices, I need to make sure the vg<br>>> > is exclusively<br>
>> > > activated on one<br>>> > > node, preventing the other nodes to<br>>> > access it unless<br>>> > > it is the failover<br>
>> > > procedure (node holding the VG<br>>> > crashed) and then re<br>>> > > activate it<br>>> > > exclusively on the failover node.<br>
>> > > Thanks<br>>> > ><br>>> > ><br>>> > > In that case you probably ought to be using<br>
>> > rgmanager to do<br>>> > > the failover for you. It has a script for<br>>> > doing exactly<br>>> > > this :-)<br>
>> > ><br>>> > > Chrissie<br>>> > ><br>>> > ><br>>> > > --<br>>> > > Linux-cluster mailing list<br>
>> > > <a href="mailto:Linux-cluster@redhat.com">Linux-cluster@redhat.com</a><br>>> > ><br>>> > <a href="https://www.redhat.com/mailman/listinfo/linux-cluster">https://www.redhat.com/mailman/listinfo/linux-cluster</a><br>
>> > ><br>>> > ><br>>> ><br>>> > Please, check this link:<br>
>> ><br>>> ><br>>> > <a href="https://www.redhat.com/archives/cluster-devel/2009-June/msg00020.html">https://www.redhat.com/archives/cluster-devel/2009-June/msg00020.html</a><br>>> ><br>
>> > I found exactly the same problem as you, and i<br>>> > developed the<br>>> > "lvm-cluster.sh" script to solve the needs I had. You<br>
>> > can find the<br>>> > script on the last message of the thread.<br>>> ><br>>> > I submitted it to make it part of the main project,<br>
>> > but i have no news<br>>> > about that yet.<br>>> ><br>>> > I hope this helps.<br>>> ><br>>> > Cheers,<br>
>> ><br>>> > Rafael<br>>> ><br>>> > --<br>>> > Rafael Micó Miranda<br>>> ><br>
>> --<br>>> Rafael Micó Miranda<br>>><br>
><br>><br>
><br>
</blockquote><br>