From brem.belguebli at gmail.com Sat Aug 1 00:25:19 2009
From: brem.belguebli at gmail.com (brem belguebli)
Date: Sat, 1 Aug 2009 02:25:19 +0200
Subject: [Linux-cluster] CLVMD without GFS
In-Reply-To: <1249079337.6494.10.camel@mecatol>
References: <1249079337.6494.10.camel@mecatol>
Message-ID: <29ae894c0907311725k4759e631m332ffd38857a6f99@mail.gmail.com>

Hi Rafael,

Of course, that is what I was telling you the other day: your script works
fine, I just wanted to tweak clvm a little. The prerequisites are already
set up (locking type, dm-mp, lvm, etc.). The only thing that disturbs me is
that, under certain conditions, you can bypass the locking.

Thanks for the help

Brem

2009/8/1 Rafael Micó Miranda

> Hi Brem,
>
> On Fri, 31-07-2009 at 06:09 -0400, crosa at redhat.com wrote:
> >
> > --- original message ---
> > From: brem belguebli
> > Subject: Re: [Linux-cluster] CLVMD without GFS
> > Date: 29 July 2009
> > Time: 10:1:29
> >
> > Hi Rafael,
> >
> > Just posted the basic tests I'm doing on both linux-cluster and
> > linux-lvm.
> >
> > I can't get exclusive activation to work properly; I may be missing
> > some step in my process.
> >
> > Brem
> >
> > --
> > Linux-cluster mailing list
> > Linux-cluster at redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-cluster
>
> Sorry, I'm not sure if I missed this mail.
>
> Maybe I need to explain a couple of things first, just to make them
> clear.
>
> As part of the use of the lvm-cluster.sh resource script, you need to:
>
> 1.- Configure the locking type in lvm.conf properly, setting it to
> type 3 (example in this link:
> https://www.redhat.com/archives/linux-cluster/2009-July/msg00253.html )
> 2.- Configure your multipathing software
> 3.- Create your Physical Volumes
> 4.- Configure your Volume Groups as clustered Volume Groups
> 5.- Create your Logical Volumes in the clustered Volume Groups
> 6.- And, the not so obvious one: de-activate all the Logical Volumes
> you plan to use as exclusive Logical Volumes (using the exclusive flag).
>
> De-activation can be done with "lvchange -an volgrp01/logvol01" or a
> similar command.
>
> If you don't do step 6, you will receive an error message when executing
> "lvchange -aey volgrp01/logvol01". This command is executed (with the
> proper volume group and logical volume names) internally by the resource
> script.
>
> I designed the lvm-cluster.sh resource script to be verbose; maybe you
> can copy your logs here (by default /var/log/messages on RHEL systems).
>
> Cheers and thanks,
>
> Rafael
>
> --
> Rafael Micó Miranda
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster

From alietsantiesteban at gmail.com Sat Aug 1 18:07:20 2009
From: alietsantiesteban at gmail.com (Aliet Santiesteban Sifontes)
Date: Sat, 1 Aug 2009 14:07:20 -0400
Subject: [Linux-cluster] Lvm2-cluster git repo??
Message-ID: <365467590908011107x195ccaabiedd217907183faab@mail.gmail.com>

Hi, list,

Just wondering if anybody can point me to the right place to find
lvm2-cluster-2.02.42-5.el4.src.rpm, related to RHBA-2009-1047. I have
looked on the RH FTP site, but the file there is outdated. I would like
to see the patches included in this SRPM release, which affect the
original lvm2 2.02.42 release code.
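The six steps Rafael lists earlier in this thread can be sketched as a
shell session. This is an illustration only: /dev/mapper/mpath0 is a
placeholder device, volgrp01/logvol01 are the example names from the
thread, and the commands need root on a node of a running cluster, so
treat it as a procedure sketch rather than something to paste in.

```shell
# 1. In /etc/lvm/lvm.conf, switch to cluster-wide locking (type 3 = clvmd):
#      locking_type = 3
# 2. Multipathing (dm-multipath) is assumed to be configured already.

# 3. Create a Physical Volume on the shared multipath device (placeholder):
pvcreate /dev/mapper/mpath0

# 4. Create the Volume Group with the clustered flag set:
vgcreate -cy volgrp01 /dev/mapper/mpath0

# 5. Create a Logical Volume inside the clustered VG:
lvcreate -n logvol01 -L 10G volgrp01

# 6. De-activate the LV everywhere so the resource script can later
#    activate it exclusively on a single node:
lvchange -an volgrp01/logvol01

# What lvm-cluster.sh then runs internally on the owning node
# (exclusive activation); this fails if step 6 was skipped:
lvchange -aey volgrp01/logvol01
```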
Thanks in advance,

Aliet

From sergio_gonra at yahoo.es Mon Aug 3 07:59:43 2009
From: sergio_gonra at yahoo.es (Sergio Gonzalez Ramos)
Date: Mon, 3 Aug 2009 07:59:43 +0000 (GMT)
Subject: [Linux-cluster] System load at 1.00 +
Message-ID: <103431.77947.qm@web28609.mail.ukl.yahoo.com>

Hi there:

It seems that both nodes of the cluster have a system load of 1.00+ when
the cluster resources are running (filesystems mounted on an EVA 4400,
and an IP). When no cluster resources are running, the system load is
normal and drops to near 0.

Is there any issue with the cluster / mounted filesystems that increases
the load average by one point? The box seems to be working properly; the
extra point of load average just confuses me.

I paste here my Red Hat version and the cluster.conf:

[root at OCEANO1CLUN ~]# cat /etc/cluster/cluster.conf