From linux4dave at gmail.com Fri Dec 1 01:06:04 2006 From: linux4dave at gmail.com (dave first) Date: Thu, 30 Nov 2006 17:06:04 -0800 Subject: [Linux-cluster] qstat -f and exec_host output Message-ID: <207649d0611301706s44a7026cr9f6cdc5018a24d96@mail.gmail.com> Hey all, I hope someone can help me understand exec_host part of the qstat -f output. Got a 15 node cluster with 4 CPUs per node, nodes are called n01, n02... n15. Submitted a job with, #PBS -l nodes=3:ppn=4 Job is running fine, but I don't understand the numbers for exec_host in qstat -f output. Okay, I do understand that the nodes displayed are n01 through n03. What do the other numbers mean? For instance, n03/3+n03/2. What does that mean? exec_host = n03/3+n03/2+n03/1+n03/0+n02/3+n02/2+n02/1+n02/0+n01/3+n01/2+n01/1+n01/0 TIA, dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From msarmadi at gmail.com Fri Dec 1 06:28:06 2006 From: msarmadi at gmail.com (Mehdi Sarmadi) Date: Fri, 1 Dec 2006 09:58:06 +0330 Subject: [Linux-cluster] Clustering MySQL DB Message-ID: Dear All Where could I find out more about Clustering MySQL using redhat solutions/models and Pros/Cons? TIA Regards -- Mehdi Sarmadi From dan.hawker at astrium.eads.net Fri Dec 1 09:12:33 2006 From: dan.hawker at astrium.eads.net (HAWKER, Dan) Date: Fri, 1 Dec 2006 09:12:33 -0000 Subject: [Linux-cluster] fedora core 5 removed lvm2-cluster Message-ID: <7F6B06837A5DBD49AC6E1650EFF549060122313A@auk52177.ukr.astrium.corp> > > > Alasdair G Kergon wrote: > > On Thu, Nov 30, 2006 at 05:03:15PM -0600, Greg Swift wrote: > > > >> But I cannot find what replaced it? > >> > > > > Move up to fc6 and it is back again. > > > > Alasdair > > > I tried fc6, wouldn't work properly. thus i'm rolling > backwards... but > apparently its not there either *grumble* why do i even > bother.. 
*sigh* > I'm pretty sure I saw a 3rd party one somewhere (same problem as you so created one), although obviously you'd have to decide whether you trust the 3rd party enough to install it :) However IIRC, all he did was rehash a srpm to recreate the rpm for fc5. Am sure you could do something similar, if you can ensure everything matches up. Dan This email (including any attachments) may contain confidential and/or privileged information or information otherwise protected from disclosure. If you are not the intended recipient, please notify the sender immediately, do not copy this message or any attachments and do not use it for any purpose or disclose its content to any person, but delete this message and any attachments from your system. Astrium disclaims any and all liability if this email transmission was virus corrupted, altered or falsified. --------------------------------------------------------------------- Astrium Limited, Registered in England and Wales No. 2449259 Registered Office: Gunnels Wood Road, Stevenage, Hertfordshire, SG1 2AS, England From basv at sara.nl Fri Dec 1 11:47:40 2006 From: basv at sara.nl (Bas van der Vlies) Date: Fri, 01 Dec 2006 12:47:40 +0100 Subject: [Linux-cluster] qstat -f and exec_host output In-Reply-To: <207649d0611301706s44a7026cr9f6cdc5018a24d96@mail.gmail.com> References: <207649d0611301706s44a7026cr9f6cdc5018a24d96@mail.gmail.com> Message-ID: <4570165C.4030705@sara.nl> dave first wrote: > Hey all, I hope someone can help me understand exec_host part of the > qstat -f output. > > Got a 15 node cluster with 4 CPUs per node, nodes are called n01, n02... > n15. Submitted a job with, > > #PBS -l nodes=3:ppn=4 > > Job is running fine, but I don't understand the numbers for exec_host in > qstat -f output. Okay, I do understand that the nodes displayed are n01 > through n03. What do the other numbers mean? For instance, > n03/3+n03/2. What does that mean? 
> > exec_host = > n03/3+n03/2+n03/1+n03/0+n02/3+n02/2+n02/1+n02/0+n01/3+n01/2+n01/1+n01/0 Dave, You post to the wrong list try: torqueusers at supercluster.org n03/3+n03/2 This means that you have node/host n03 and use cpu 3 and cpu 2. See for documentation www.supercluster.org -- ******************************************************************** * * * Bas van der Vlies e-mail: basv at sara.nl * * SARA - Academic Computing Services phone: +31 20 592 8012 * * Kruislaan 415 fax: +31 20 6683167 * * 1098 SJ Amsterdam * * * ******************************************************************** From rpeterso at redhat.com Fri Dec 1 15:03:32 2006 From: rpeterso at redhat.com (Robert Peterson) Date: Fri, 01 Dec 2006 09:03:32 -0600 Subject: [Linux-cluster] Clustering MySQL DB In-Reply-To: References: Message-ID: <45704444.104@redhat.com> Mehdi Sarmadi wrote: > Dear All > > Where could I find out more about Clustering MySQL using redhat > solutions/models and Pros/Cons? > > TIA > Regards > Hi Mehdi, Well, I'd start with the cluster faq, here: http://sources.redhat.com/cluster/faq.html#gfs_mysql Regards, Bob Peterson Red Hat Cluster Suite From msarmadi at gmail.com Fri Dec 1 15:36:09 2006 From: msarmadi at gmail.com (Mehdi Sarmadi) Date: Fri, 1 Dec 2006 19:06:09 +0330 Subject: [Linux-cluster] Clustering MySQL DB In-Reply-To: <45704444.104@redhat.com> References: <45704444.104@redhat.com> Message-ID: Hi Thanks Robert, I've read that. I'm just wondering what model does RH Cluster Suite uses for clustering MySQL. I know much about MySQL Cluster & Replication. I'm looking for cluster suites e.g. Redhat or Sun. I wonder how cluster suite cope with - known replication problems and - cluster unawareness of MySQL engine and - shared-nothing policy that mysql does. I heard of something here: http://www.redhat.com/archives/linux-cluster/2006-June/msg00158.html Afterall, I look for what Redhat proposes and recommend for HA/Failover and Clustering for MySQL. 
Looking fwd to your reply Best Regards On 12/1/06, Robert Peterson wrote: > Mehdi Sarmadi wrote: > > Dear All > > > > Where could I find out more about Clustering MySQL using redhat > > solutions/models and Pros/Cons? > > > > TIA > > Regards > > > Hi Mehdi, > > Well, I'd start with the cluster faq, here: > http://sources.redhat.com/cluster/faq.html#gfs_mysql > > Regards, > > Bob Peterson > Red Hat Cluster Suite > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -- Mehdi Sarmadi From linux4dave at gmail.com Fri Dec 1 16:00:22 2006 From: linux4dave at gmail.com (dave first) Date: Fri, 1 Dec 2006 08:00:22 -0800 Subject: [Linux-cluster] qstat -f and exec_host output In-Reply-To: <4570165C.4030705@sara.nl> References: <207649d0611301706s44a7026cr9f6cdc5018a24d96@mail.gmail.com> <4570165C.4030705@sara.nl> Message-ID: <207649d0612010800u2bedc3d4mf0238035798cb7c5@mail.gmail.com> Okay! Thanks for the heads-up and re-direct to the proper venue for this question. After I sent my query, I got to thinking that this really wasn't the place for it. As it is, I figured out the answer. Each node name is listed, and the 1-digit number signifies the CPU that is running the job. dave On 12/1/06, Bas van der Vlies wrote: > > dave first wrote: > > Hey all, I hope someone can help me understand exec_host part of the > > qstat -f output. > > > > Got a 15 node cluster with 4 CPUs per node, nodes are called n01, n02... > > n15. Submitted a job with, > > > > #PBS -l nodes=3:ppn=4 > > > > Job is running fine, but I don't understand the numbers for exec_host in > > qstat -f output. Okay, I do understand that the nodes displayed are n01 > > through n03. What do the other numbers mean? For instance, > > n03/3+n03/2. What does that mean? 
> > > > exec_host = > > n03/3+n03/2+n03/1+n03/0+n02/3+n02/2+n02/1+n02/0+n01/3+n01/2+n01/1+n01/0 > > Dave, > You post to the wrong list try: torqueusers at supercluster.org > > n03/3+n03/2 This means that you have node/host n03 and use cpu 3 and cpu > 2. See for documentation www.supercluster.org > > -- > ******************************************************************** > * * > * Bas van der Vlies e-mail: basv at sara.nl * > * SARA - Academic Computing Services phone: +31 20 592 8012 * > * Kruislaan 415 fax: +31 20 6683167 * > * 1098 SJ Amsterdam * > * * > ******************************************************************** > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -------------- next part -------------- An HTML attachment was scrubbed... URL: From storm at elemental.it Fri Dec 1 16:32:23 2006 From: storm at elemental.it (St0rM) Date: Fri, 01 Dec 2006 17:32:23 +0100 Subject: [Linux-cluster] Newbie question Message-ID: <45705917.3080700@elemental.it> Greetings. I would like to build up a cluster from two identical FC5 or FC6 machines, for HA. I would like to have the filesystem duplicated (or whatever) on the two server as I can't use a SAN, and the two servers should share the load if possible. Is this clustering solution able to do that? Is there a guide on how to build up a system like that ? 
-- St0rM -----BEGIN GEEK CODE BLOCK----- Version: 3.1 GIT d-() s:+>: a- C++(++++) UL++++$ P+ L++++$ E- W+++$ N- o+ K w--() !O !M>+ !V PS+ PE Y+(++) PGP>+ t+ 5?>+ X++ R++ tv-- b+ DI+++ D+ G+ e* h--- r++ y+++ ------END GEEK CODE BLOCK------ "There are only 10 types of people in the world: Those who understand binary, and those who don't" From storm at elemental.it Fri Dec 1 16:36:05 2006 From: storm at elemental.it (St0rM) Date: Fri, 01 Dec 2006 17:36:05 +0100 Subject: [Linux-cluster] Newbie question In-Reply-To: <45705917.3080700@elemental.it> References: <45705917.3080700@elemental.it> Message-ID: <457059F5.3080901@elemental.it> Ok I do it myself. > Is this clustering solution able to do that? Is there a guide on how to > build up a system like that ? From the FAQ. # Can I use GFS to take two off-the-shelf PCs and cluster their storage? No. GFS will only allow PCs with shared storage, such as a SAN with a Fibre Channel switch, to work together cooperatively on the same storage. Off-the-shelf PCs don't have shared storage. Thanks, it was quick but intense. -- St0rM -----BEGIN GEEK CODE BLOCK----- Version: 3.1 GIT d-() s:+>: a- C++(++++) UL++++$ P+ L++++$ E- W+++$ N- o+ K w--() !O !M>+ !V PS+ PE Y+(++) PGP>+ t+ 5?>+ X++ R++ tv-- b+ DI+++ D+ G+ e* h--- r++ y+++ ------END GEEK CODE BLOCK------ "There are only 10 types of people in the world: Those who understand binary, and those who don't" From rpeterso at redhat.com Fri Dec 1 16:36:09 2006 From: rpeterso at redhat.com (Robert Peterson) Date: Fri, 01 Dec 2006 10:36:09 -0600 Subject: [Linux-cluster] Clustering MySQL DB In-Reply-To: References: <45704444.104@redhat.com> Message-ID: <457059F9.70501@redhat.com> Mehdi Sarmadi wrote: > Hi > > Thanks Robert, I've read that. I'm just wondering what model does RH > Cluster Suite uses for clustering MySQL. I know much about MySQL > Cluster & Replication. I'm looking for cluster suites e.g. Redhat or > Sun. 
I wonder how cluster suite cope with > - known replication problems and > - cluster unawareness of MySQL engine and > - shared-nothing policy that mysql does. > > I heard of something here: > http://www.redhat.com/archives/linux-cluster/2006-June/msg00158.html > > Afterall, I look for what Redhat proposes and recommend for > HA/Failover and Clustering for MySQL. > > Looking fwd to your reply > Best Regards Hi Medhi, I'm not sure what you mean by "what model" RH Cluster Suite uses. I don't know the answers to MySQL-specific questions regarding known replication problems, cluster unawareness and share-policy. I also can't speak for what "Red Hat Recommends" but I can perhaps tell you what I know about the topic: - Cluster Suite is able to do High Availability (HA) MySQL with active/passive MySQL. That is, have a single MySQL server in a cluster, and if that server goes down, another node in the cluster takes over its MySQL server duties. No problem. - To accomplish this, I recommend using GFS file system on shared storage, because then updates made to the MySQL data will be seamlessly seen by the other nodes that are standing by (passive). - If you don't use GFS and shared storage, then you might be able to have multiple MySQL servers running simultaneously on their own copies of the database (Active/Active). Then, of course, you run into problems of how to replicate the data properly, which is what you were probably talking about with replication problems and share policy. I'm sorry, but I can't help you there. If you could solve those replication issues, you could then use something like LVS / Piranha to do load balancing of the MySQL requests. - Since normal MySQL isn't cluster-aware, I think database updates from multiple servers (Active/Active) over GFS are likely to cause database corruption unless you're using the "MySQL Cluster" product which I don't know much about. 
- Other people on this list have talked about getting multiple MySQL servers (Active/Active) to work cooperatively over GFS without corruption as long as they're not updating records. In other words, just for read-only queries. I'm not sure what kinds of things they need to get this to work properly. There was a thread in October in linux-cluster under the subject "Multiple Active MySQL instances", but I don't remember what all was said. I do remember them saying that it only works with MyISAM tables. I recommend reading the archives, at this link: http://www.redhat.com/archives/linux-cluster/ If they have gotten this working, then again, you could use LVS to do load balancing if you want. I hope this helps. Regards, Bob Peterson Red Hat Cluster Suite From msarmadi at gmail.com Fri Dec 1 17:11:12 2006 From: msarmadi at gmail.com (Mehdi Sarmadi) Date: Fri, 1 Dec 2006 20:41:12 +0330 Subject: [Linux-cluster] Clustering MySQL DB In-Reply-To: <457059F9.70501@redhat.com> References: <45704444.104@redhat.com> <457059F9.70501@redhat.com> Message-ID: Hi Dear Robert, Thank you so much, very nice info I mean, master/slave or clustering active/active or active/passive oe MySQL Cluster, by model. Any other reference, document, successful experience with share-storage or using mysql with cluter suites would be appreciated. Looking forward to your replies and kind oppinions TIA Best Regards On 12/1/06, Robert Peterson wrote: > Mehdi Sarmadi wrote: > > Hi > > > > Thanks Robert, I've read that. I'm just wondering what model does RH > > Cluster Suite uses for clustering MySQL. I know much about MySQL > > Cluster & Replication. I'm looking for cluster suites e.g. Redhat or > > Sun. I wonder how cluster suite cope with > > - known replication problems and > > - cluster unawareness of MySQL engine and > > - shared-nothing policy that mysql does. 
> > > > I heard of something here: > > http://www.redhat.com/archives/linux-cluster/2006-June/msg00158.html > > > > Afterall, I look for what Redhat proposes and recommend for > > HA/Failover and Clustering for MySQL. > > > > Looking fwd to your reply > > Best Regards > Hi Medhi, > > I'm not sure what you mean by "what model" RH Cluster Suite uses. > I don't know the answers to MySQL-specific questions regarding > known replication problems, cluster unawareness and share-policy. > I also can't speak for what "Red Hat Recommends" but I can perhaps > tell you what I know about the topic: > > - Cluster Suite is able to do High Availability (HA) MySQL > with active/passive MySQL. That is, have a single MySQL server > in a cluster, and if that server goes down, another node in the cluster > takes over its MySQL server duties. No problem. > - To accomplish this, I recommend using GFS file system on shared > storage, because then updates made to the MySQL data will be > seamlessly seen by the other nodes that are standing by (passive). > - If you don't use GFS and shared storage, then you might be able to > have multiple MySQL servers running simultaneously on their own > copies of the database (Active/Active). Then, of course, you run into > problems of how to replicate the data properly, which is what you > were probably talking about with replication problems and share > policy. I'm sorry, but I can't help you there. If you could solve those > replication issues, you could then use something like LVS / Piranha > to do load balancing of the MySQL requests. > - Since normal MySQL isn't cluster-aware, I think database updates > from multiple servers (Active/Active) over GFS are likely to cause > database corruption unless you're using the "MySQL Cluster" > product which I don't know much about. 
> - Other people on this list have talked about getting multiple MySQL > servers (Active/Active) to work cooperatively over GFS without > corruption as long as they're not updating records. In other words, > just for read-only queries. I'm not sure what kinds of things they > need to get this to work properly. There was a thread in October > in linux-cluster under the subject "Multiple Active MySQL > instances", but I don't remember what all was said. I do remember > them saying that it only works with MyISAM tables. I recommend > reading the archives, at this link: > > http://www.redhat.com/archives/linux-cluster/ > > If they have gotten this working, then again, you could use LVS > to do load balancing if you want. > > I hope this helps. > > Regards, > > Bob Peterson > Red Hat Cluster Suite > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -- Mehdi Sarmadi From tmornini at engineyard.com Fri Dec 1 17:19:18 2006 From: tmornini at engineyard.com (Tom Mornini) Date: Fri, 1 Dec 2006 09:19:18 -0800 Subject: [Linux-cluster] Clustering MySQL DB In-Reply-To: References: Message-ID: <40B9260C-92F3-4376-AA59-7B1677918D02@engineyard.com> I did some research and testing of this configuration. 1) During research, I found messages from the NDB cluster group at MySQL saying that performance on GFS would be poor, no doubt in relation to NDB clustering. 2) During testing, I found to my horror that this solution *does not work* for InnoDB tables, only for MyISAM table types. This makes this configuration completely useless to me. After I found that, I confirmed it by more refined Google searching. 3) I ended up going with master/slave replication and wishing there were a way to use GFS and master/master. On Nov 30, 2006, at 10:28 PM, Mehdi Sarmadi wrote: > Where could I find out more about Clustering MySQL using redhat > solutions/models and Pros/Cons? 
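[Archive editor's note: Tom's point (2) above — that sharing a datadir over GFS only works for MyISAM, not InnoDB — can be roughly sanity-checked from the datadir itself. The sketch below is an unofficial filename heuristic, not a MySQL tool, and the function name and demo paths are made up: MyISAM tables live in per-table .MYD/.MYI files, while InnoDB normally keeps data in shared ibdata* files and/or per-table .ibd files.]

```shell
#!/bin/sh
# Rough filename-based heuristic (not an official MySQL tool): prints
# "innodb" if InnoDB files (shared ibdata* or per-table .ibd) are present
# under the given datadir, otherwise "myisam-only".
check_datadir() {
    dir="$1"
    if find "$dir" \( -name 'ibdata*' -o -name '*.ibd' \) 2>/dev/null | grep -q .; then
        echo "innodb"
    else
        echo "myisam-only"
    fi
}

# Demo on a throwaway directory standing in for /var/lib/mysql:
demo=$(mktemp -d)
touch "$demo/accounts.MYD" "$demo/accounts.MYI"
check_datadir "$demo"    # prints: myisam-only
touch "$demo/ibdata1"
check_datadir "$demo"    # prints: innodb
rm -rf "$demo"
```

Inside MySQL itself, SHOW TABLE STATUS is the authoritative way to see which engine each table uses.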
-- -- Tom Mornini, CTO -- Engine Yard, Ruby on Rails Hosting -- Reliability, Ease of Use, Scalability -- (866) 518-YARD (9273) From gsml at netops.gvtc.com Fri Dec 1 17:29:50 2006 From: gsml at netops.gvtc.com (Greg Swift) Date: Fri, 01 Dec 2006 11:29:50 -0600 Subject: [Linux-cluster] fedora core 5 removed lvm2-cluster In-Reply-To: <7F6B06837A5DBD49AC6E1650EFF549060122313A@auk52177.ukr.astrium.corp> References: <7F6B06837A5DBD49AC6E1650EFF549060122313A@auk52177.ukr.astrium.corp> Message-ID: <4570668E.30008@netops.gvtc.com> > > I'm pretty sure I saw a 3rd party one somewhere (same problem as you so > created one), although obviously you'd have to decide whether you trust the > 3rd party enough to install it :) > > However IIRC, all he did was rehash a srpm to recreate the rpm for fc5. Am > sure you could do something similar, if you can ensure everything matches > up. > yeah... i figured i could probably find it and make it work, but honestly at this point i give up on gfs in fedora... (i realize that my installation is what doesnt work, now whether that is because i suck, which given my rhce and the fact that i've got it working in rhel on the same equipment, i would hope isnt the case; or its cause isnt ready to work on my setup.. but whatever). -greg -- http://www.gvtc.com -- ?While it is possible to change without improving, it is impossible to improve without changing.? -anonymous ?only he who attempts the absurd can achieve the impossible.? -anonymous From mv at umanaged.com Fri Dec 1 13:27:52 2006 From: mv at umanaged.com (max vakulenko) Date: Fri, 01 Dec 2006 15:27:52 +0200 Subject: [Linux-cluster] fedora core 5 removed lvm2-cluster In-Reply-To: <456F6333.6070309@netops.gvtc.com> References: <456F6333.6070309@netops.gvtc.com> Message-ID: <45702DD8.6070404@umanaged.com> Greg Swift: > But I cannot find what replaced it? > > I found someone had asked the same question elsewhere (fedora-list) and > no one replied. 
I skimmed through the new/removed list from release > notes... saw nothing... > > any recommendations? In fact, lvm2-cluster is now compiled from lvm2. Also change the locking type to `3' (just get rpm's lvm.conf). Try FC7's (development) lvm2 srpm and recompile it with dependencies like selinux, devmapper, ... from FC7's src.rpm and this should work in FC4, FC5. I've recently done it myself; here's a step-by-step for i686 FC4:

# wget http://download.fedora.redhat.com/pub/fedora/linux/core/development/source/SRPMS/device-mapper-1.02.12-3.fc7.src.rpm http://download.fedora.redhat.com/pub/fedora/linux/core/development/source/SRPMS/lvm2-2.02.15-3.fc7.src.rpm
# rpmbuild --rebuild --target=i686 device-mapper-1.02.12-3.fc7.src.rpm
# rpm -Uvh /usr/src/redhat/RPMS/i686/device-mapper-1.02.12-3.i686.rpm
# wget http://download.fedora.redhat.com/pub/fedora/linux/core/development/source/SRPMS/libsepol-1.15.2-1.src.rpm http://download.fedora.redhat.com/pub/fedora/linux/core/development/source/SRPMS/libselinux-1.33.1-1.src.rpm http://download.fedora.redhat.com/pub/fedora/linux/core/6/source/SRPMS/mcstrans-0.1.8-3.src.rpm
# rpmbuild --rebuild --target=i686 libsepol-1.15.2-1.src.rpm
# rpm -Fvh /usr/src/redhat/RPMS/i686/libsepol-devel-1.15.2-1.i686.rpm /usr/src/redhat/RPMS/i686/libsepol-1.15.2-1.i686.rpm
# yum -y install swig libcap-devel
# rpmbuild --rebuild --target=i686 libselinux-1.33.1-1.src.rpm
# rpm -Fvh /usr/src/redhat/RPMS/i686/libselinux-1.33.1-1.i686.rpm /usr/src/redhat/RPMS/i686/libselinux-devel-1.33.1-1.i686.rpm --nodeps
# rpmbuild --rebuild --target=i686 mcstrans-0.1.8-3.src.rpm
# rpm -ivh /usr/src/redhat/RPMS/i686/mcstrans-0.1.8-3.i686.rpm
# rpmbuild --rebuild --target=i686 lvm2-2.02.15-3.fc7.src.rpm
# rpm -Fvh /usr/src/redhat/RPMS/i686/lvm2-cluster-2.02.15-3.i686.rpm /usr/src/redhat/RPMS/i686/lvm2-2.02.15-3.i686.rpm

From rickb at rapidvps.com Fri Dec 1 18:08:47 2006 From: rickb at rapidvps.com (Rick Blundell) Date: Fri, 01 Dec 2006 13:08:47 -0500 Subject:
[Linux-cluster] MySQL + RH Cluster + GFS In-Reply-To: References: Message-ID: <45706FAF.6060006@rapidvps.com> Mehdi Sarmadi wrote: > Where could I find out more about Clustering MySQL using redhat > solutions/models and Pros/Cons? There's not much conclusive information out there on the subject. The best place is on the mysql mailing lists, where it's discussed a few times. Seems the mysql developers aren't specifically interested in GFS and the GFS developers aren't specifically interested in mysql. More than likely you are looking to run two mysqld servers on the same filesystem. This seems to work fine for myisam table engines but innodb has file locking issues which will prevent both mysqld's from being able to manage the table concurrently. Mysql devs aren't too interested in GFS because their NDB solves this "problem" (redundant mysqld's) without a shared filesystem. But, this locks you to the ndb storage engine which may be incompatible with some applications. Hope this helps. Rick Blundell From dbrieck at gmail.com Fri Dec 1 20:28:39 2006 From: dbrieck at gmail.com (David Brieck Jr.) Date: Fri, 1 Dec 2006 15:28:39 -0500 Subject: [Linux-cluster] MySQL + RH Cluster + GFS In-Reply-To: <45706FAF.6060006@rapidvps.com> References: <45706FAF.6060006@rapidvps.com> Message-ID: <8c1094290612011228w7cab0581x45b6fd971af2452d@mail.gmail.com> On 12/1/06, Rick Blundell wrote: > Mehdi Sarmadi wrote: > > Where could I find out more about Clustering MySQL using redhat > > solutions/models and Pros/Cons? > > There's not much conclusive information out there on the subject. The > best place is on the mysql mailing lists, where it's discussed a few > times. Seems the mysql developers aren't specifically interested in GFS > and the GFS developers aren't specifically interested in mysql. > > More than likely you are looking to run two mysqld servers on the same > filesystem.
This seems to work fine for myisam table engines but innodb > has file locking issues which will prevent both mysqld's from being able > to manage the table concurrently. > > Mysql devs aren't too interested in GFS because their NDB solves this > "problem" (redundant mysqld's) without a shared filesystem. But, this > locks you to the ndb storage engine which may be incompatible with some > applications. > > Hope this helps. > > Rick Blundell I wouldn't even consider MySQL's clustering options until 5.1. Until then their clustering solution is an in RAM only solution. If you do a mailing list search on my name you should be able to find a pretty recent thread about GFS MySQL clustering. From lists at brimer.org Sun Dec 3 21:51:32 2006 From: lists at brimer.org (Barry Brimer) Date: Sun, 03 Dec 2006 15:51:32 -0600 Subject: [Linux-cluster] (no subject) Message-ID: <1165182692.457346e4e824f@mail.toucanhost.com> I have a 2 node cluster for a shared GFS filesystem. One of the nodes fenced the other, and the node that got fenced is no longer able to communicate with the cluster. While booting the problem node, I receive the following error message: Setting up Logival Volume Management: Locking inactive: ignoring clustered volume group vg00 I have compared /etc/lvm/lvm.conf files on both nodes. They are identical. The disk (/dev/sda1) is listed when typing "fdisk -l" There are no iptables firewalls active (although /etc/sysconfig/iptables exists, iptables is chkconfig'd off). I have written a simple iptables logging rule (iptables -I INPUT -s -j LOG) on the working node to verify that packets are reaching the working node, but no messages are being logged in /var/log/messages on the working node that acknowledge any cluster activity from the problem node. Both machines have the same RH packages installed and are mostly up to date, they are missing the same packages, none of which involve the kernel, RHCS or GFS. 
When I boot the problem node, it successfully starts ccsd, but it fails after a while on cman and fails after a while on fenced. I have given the clvmd process an hour, and it still will not start. vgchange -ay on the problem node returns: # vgchange -ay connect() failed on local socket: Connection refused Locking type 2 initialisation failed. I have the contents of /var/log/messages on the working machine at the time of the fence, if that would be helpful. Any help is greatly appreciated. Thanks, Barry From lists at brimer.org Sun Dec 3 21:59:46 2006 From: lists at brimer.org (Barry Brimer) Date: Sun, 03 Dec 2006 15:59:46 -0600 Subject: [Linux-cluster] CLVM/GFS will not mount or communicate with cluster Message-ID: <1165183186.457348d225bb7@mail.toucanhost.com> This is a repeat of the post I made a few minutes ago. I thought adding a subject would be helpful. I have a 2 node cluster for a shared GFS filesystem. One of the nodes fenced the other, and the node that got fenced is no longer able to communicate with the cluster. While booting the problem node, I receive the following error message: Setting up Logical Volume Management: Locking inactive: ignoring clustered volume group vg00 I have compared /etc/lvm/lvm.conf files on both nodes. They are identical. The disk (/dev/sda1) is listed when typing "fdisk -l" There are no iptables firewalls active (although /etc/sysconfig/iptables exists, iptables is chkconfig'd off). I have written a simple iptables logging rule (iptables -I INPUT -s -j LOG) on the working node to verify that packets are reaching the working node, but no messages are being logged in /var/log/messages on the working node that acknowledge any cluster activity from the problem node. Both machines have the same RH packages installed and are mostly up to date, they are missing the same packages, none of which involve the kernel, RHCS or GFS. 
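[Archive editor's note: one reading of the errors above — an inference, not something confirmed in the thread — is that "connect() failed on local socket: Connection refused" means vgchange cannot reach clvmd, and clvmd in turn cannot start until cman has joined the cluster, so the LVM messages are downstream of the cman/fenced failure rather than an LVM misconfiguration. The "Locking type 2" line shows cluster locking is being requested; to confirm both nodes agree, something like the sketch below works. The config text here is a made-up sample — on a real node, point the pipeline at /etc/lvm/lvm.conf instead.]

```shell
#!/bin/sh
# Sketch: pull locking_type out of an lvm.conf-style file. clvmd needs
# cluster locking (locking_type = 2 with the external cluster locking
# library on RHEL4-era lvm2). The fragment below is a sample, not the
# real file.
conf_fragment='
# global section excerpt (sample, not the real file)
locking_type = 2
locking_library = "liblvm2clusterlock.so"
'
echo "$conf_fragment" | awk -F= '/^[[:space:]]*locking_type/ {gsub(/ /, "", $2); print "locking_type=" $2}'
# prints: locking_type=2
```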
When I boot the problem node, it successfully starts ccsd, but it fails after a while on cman and fails after a while on fenced. I have given the clvmd process an hour, and it still will not start. vgchange -ay on the problem node returns: # vgchange -ay connect() failed on local socket: Connection refused Locking type 2 initialisation failed. I have the contents of /var/log/messages on the working node and the problem node at the time of the fence, if that would be helpful. Any help is greatly appreciated. Thanks, Barry From isplist at logicore.net Sun Dec 3 22:01:27 2006 From: isplist at logicore.net (isplist at logicore.net) Date: Sun, 3 Dec 2006 16:01:27 -0600 Subject: [Linux-cluster] Need help, will pay! Message-ID: <200612316127.011845@leena> Sorry if this is a double post, just can't tell if it made it out. --- I don't know if this is allowed but I do hope you'll allow me to post this or suggest where I could get some good help. I BADLY need to get my project working. If it works out and affordable, I'd like to be able to ask for more help when needed, ongoing. I am not a corporation and don't have deep pockets so if you only know how to bid for those types, I'll be out of your league. I need a flat rate price so that I can budget for it. While I love learning and doing all these things, I just can't do it all in a time efficient manner... I need help!!! Would someone be interested in making a few bucks to get the final bits of my project working right. Mostly, it involved getting my load balancing (LVS) going and GFS straightened out. Here's the setup; All servers have a dual NIC (LB's and servers) which should help to make things simple. WEB; 3 web servers which need a load balanced front end for users to get in, including making sure that sessions aren't lost, etc. Nothing too special that I can think of here other than session loss. The application is mostly joomla/mambo based for web services. All servers share GFS mounted storage. 
On GFS, just need to make sure entire cluster does not blow up every time something goes wrong. Using Brocade switches for fencing. Everything works, just needs fine tuning to better handle fencing and dead node problems taking the whole thing down. MAIL; 4 Qmail (Qmail-Toaster) servers which require a load balanced front end. I need users to be able to reach the usual compliment of ports for various services, webmail, pop, imap, smtp, etc. Need to know what is the best way of load balancing mail services. One server handles all outgoing, one all incoming, not sure. All servers share GFS storage. Same as above, it's all set up, it all works but since the learning curve and time spent on this has been so high, I just cannot afford to keep trying to figure out the last little bits. I need help to get this done once and for all. MySQL; 4 MySQL servers installed, ready to work together. Again, all share GFS storage, need to get fencing working so there aren't any blow ups. Need to have load balanced MySQL working in the simplest manner possible so that I can maintain them later. Getting into complicated MySQL setup's won't work for me. I've read that some folks have been able to use GFS to share common storage. Since I'm so far behind on all the other parts, I've not even started on this yet. One of the servers currently does it all so load balancing these had not even begun. GFS is in place and ready to go here also. Load Balancing (LVS); I'm using LVS with Piranha. It seems simple but I keep missing something and getting help over email has just led to frustration for those nice enough to help me. I cannot afford to be told how to do it, I need someone to just do it and I'll learn by seeing it done. I have LVS servers ready to configure for Web and Qmail services. I'll build another for the private/internal MySQL section. Once everything is in place, I will later build the matching redundant server for each section, Web/Qmail and MySQL. 
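[Archive editor's note: for anyone picking up a setup like this, the LVS side boils down to a handful of ipvsadm rules per virtual service; Piranha generates the equivalent from lvs.cf. A sketch with placeholder addresses follows — the function name, VIP, and real-server IPs are made up, and the generated commands would need root and a review before being fed to a shell.]

```shell
#!/bin/sh
# Sketch: emit the ipvsadm commands for one round-robin virtual HTTP
# service. -A adds the virtual service, -s rr selects round-robin
# scheduling, -a adds each real server, and -m selects NAT forwarding.
gen_lvs() {
    vip="$1"; shift
    echo "ipvsadm -A -t $vip:80 -s rr"
    for rip in "$@"; do
        echo "ipvsadm -a -t $vip:80 -r $rip:80 -m"
    done
}

# Placeholder VIP and three placeholder web servers; prints one -A line
# followed by one -a line per real server.
gen_lvs 192.168.0.10 192.168.0.21 192.168.0.22 192.168.0.23
```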
I prefer to keep the Web, Mail and MySQL LVS front ends as separate servers for future flexibility and performance rather than handling all services on one pair. Everything is in place... just need the configuring and fine tuning done so that I can finally feel secure enough to fire up the services for public access. Right now, not only is LVS missing but it's too easy to lose the GFS storage when something screws up. Please, contact me if you can help me. Thank you. From janne.peltonen at helsinki.fi Mon Dec 4 07:18:16 2006 From: janne.peltonen at helsinki.fi (Janne Peltonen) Date: Mon, 4 Dec 2006 09:18:16 +0200 Subject: [Linux-cluster] Newbie question In-Reply-To: <457059F5.3080901@elemental.it> References: <45705917.3080700@elemental.it> <457059F5.3080901@elemental.it> Message-ID: <20061204071815.GE4432@helsinki.fi> On Fri, Dec 01, 2006 at 05:36:05PM +0100, St0rM wrote: > Ok I do it myself. > > >Is this clustering solution able to do that? Is there a guide on how to > >build up a system like that ? > > From the FAQ. > > # Can I use GFS to take two off-the-shelf PCs and cluster their storage? > > No. GFS will only allow PCs with shared storage, such as a SAN with a > Fibre Channel switch, to work together cooperatively on the same > storage. Off-the-shelf PCs don't have shared storage. > > > Thanks, it was quick but intense. On the other hand, you /can/ have a block device exported from an off-the-shelf pc using gnbd, and use GFS on that. See the Cluster Suite Documentation, chapter about different hardware configs (should be easy enough to find). --Janne Peltonen From isplist at logicore.net Mon Dec 4 13:49:17 2006 From: isplist at logicore.net (isplist at logicore.net) Date: Mon, 4 Dec 2006 07:49:17 -0600 Subject: [Linux-cluster] Need help, will pay! In-Reply-To: Message-ID: <200612474917.912097@leena> Hi and thanks for the reply. I'm not sure what you mean by this? I'm in MN, 55101. The work is virtual, over the net. Thanks. 
Mike On Mon, 4 Dec 2006 03:53:37 -0500, Rajesh singh wrote: > Kindly put your postal address, as a wide distribution on mailing list > without address keep every body guessing. > > regards > > > On 12/3/06, isplist at logicore.net wrote:> Sorry if > this is a double post, just can't tell if it made it out. > >> --- >> >> I don't know if this is allowed but I do hope you'll allow me to post >> this or >> suggest where I could get some good help. >> >> I BADLY need to get my project working. If it works out and affordable, >> I'd >> like to be able to ask for more help when needed, ongoing. I am not a >> corporation and don't have deep pockets so if you only know how to bid for >> those types, I'll be out of your league. I need a flat rate price so that >> I >> can budget for it. >> While I love learning and doing all these things, I just can't do it all >> in a >> time efficient manner... I need help!!! >> >> Would someone be interested in making a few bucks to get the final bits >> of my >> project working right. Mostly, it involved getting my load balancing (LVS) >> going and GFS straightened out. >> >> Here's the setup; >> >> All servers have a dual NIC (LB's and servers) which should help to make >> things simple. >> >> WEB; >> 3 web servers which need a load balanced front end for users to get in, >> including making sure that sessions aren't lost, etc. Nothing too special >> that >> I can think of here other than session loss. The application is mostly >> joomla/mambo based for web services. >> All servers share GFS mounted storage. On GFS, just need to make sure >> entire >> cluster does not blow up every time something goes wrong. Using Brocade >> switches for fencing. Everything works, just needs fine tuning to better >> handle fencing and dead node problems taking the whole thing down. >> >> MAIL; >> 4 Qmail (Qmail-Toaster) servers which require a load balanced front end. 
I >> need users to be able to reach the usual compliment of ports for various >> services, webmail, pop, imap, smtp, etc. >> Need to know what is the best way of load balancing mail services. One >> server >> handles all outgoing, one all incoming, not sure. All servers share GFS >> storage. Same as above, it's all set up, it all works but since the >> learning >> curve and time spent on this has been so high, I just cannot afford to >> keep >> trying to figure out the last little bits. I need help to get this done >> once >> and for all. >> >> MySQL; >> 4 MySQL servers installed, ready to work together. Again, all share GFS >> storage, need to get fencing working so there aren't any blow ups. Need to >> have load balanced MySQL working in the simplest manner possible so that >> I can >> maintain them later. Getting into complicated MySQL setup's won't work >> for me. >> I've read that some folks have been able to use GFS to share common >> storage. >> Since I'm so far behind on all the other parts, I've not even started on >> this >> yet. One of the servers currently does it all so load balancing these had >> not >> even begun. GFS is in place and ready to go here also. >> >> Load Balancing (LVS); >> I'm using LVS with Piranha. It seems simple but I keep missing something >> and >> getting help over email has just led to frustration for those nice enough >> to >> help me. I cannot afford to be told how to do it, I need someone to just >> do it >> and I'll learn by seeing it done. >> >> I have LVS servers ready to configure for Web and Qmail services. I'll >> build >> another for the private/internal MySQL section. Once everything is in >> place, I >> will later build the matching redundant server for each section, >> Web/Qmail and >> MySQL. >> I prefer to keep the Web, Mail and MySQL LVS front ends as separate >> servers >> for future flexibility and performance rather than handling all services >> on >> one pair. >> >> Everything is in place... 
just need the configuring and fine tuning done >> so >> that I can finally feel secure enough to fire up the services for public >> access. Right now, not only is LVS missing but it's too easy to lose the >> GFS >> storage when something screws up. >> >> Please, contact me if you can help me. Thank you. >> >> >> -- >> Linux-cluster mailing list >> Linux-cluster at redhat.com >> https://www.redhat.com/mailman/listinfo/linux-cluster From rpeterso at redhat.com Mon Dec 4 15:03:42 2006 From: rpeterso at redhat.com (Robert Peterson) Date: Mon, 04 Dec 2006 09:03:42 -0600 Subject: [Linux-cluster] CLVM/GFS will not mount or communicate with cluster In-Reply-To: <1165183186.457348d225bb7@mail.toucanhost.com> References: <1165183186.457348d225bb7@mail.toucanhost.com> Message-ID: <457438CE.3070809@redhat.com> Barry Brimer wrote: > This is a repeat of the post I made a few minutes ago. I thought adding a > subject would be helpful. > > > I have a 2 node cluster for a shared GFS filesystem. One of the nodes fenced > the other, and the node that got fenced is no longer able to communicate with > the cluster. > > While booting the problem node, I receive the following error message: > Setting up Logical Volume Management: Locking inactive: ignoring clustered > volume group vg00 > > I have compared /etc/lvm/lvm.conf files on both nodes. They are identical. The > disk (/dev/sda1) is listed when typing "fdisk -l" > > There are no iptables firewalls active (although /etc/sysconfig/iptables exists, > iptables is chkconfig'd off). I have written a simple iptables logging rule > (iptables -I INPUT -s <ip of problem node> -j LOG) on the working node to verify that > packets are reaching the working node, but no messages are being logged in > /var/log/messages on the working node that acknowledge any cluster activity > from the problem node. > > Both machines have the same RH packages installed and are mostly up to date, > they are missing the same packages, none of which involve the kernel, RHCS or > GFS. 
> > When I boot the problem node, it successfully starts ccsd, but it fails after a > while on cman and fails after a while on fenced. I have given the clvmd > process an hour, and it still will not start. > > vgchange -ay on the problem node returns: > > # vgchange -ay > connect() failed on local socket: Connection refused > Locking type 2 initialisation failed. > > I have the contents of /var/log/messages on the working node and the problem > node at the time of the fence, if that would be helpful. > > Any help is greatly appreciated. > > Thanks, > Barry > Hi Barry, Well, vgchange and other lvm functions won't work on the clustered volume unless clvmd is running, and clvmd won't run properly until the node is talking happily through the cluster infrastructure. So as I see it, your problem is that cman is not starting properly. Unfortunately, you haven't told us much about the system to determine why. There can be many reasons. For now, let me assume that the two were working properly in a cluster before it was fenced, and therefore I'll assume that the software and configurations are all okay. I think one reason this might happen is if you're using manual fencing and haven't yet done your: fence_ack_manual -n <nodename> on the remaining node to acknowledge that the reboot actually happened. Also, you might want to test communications between the boxes to make sure they can communicate with each other in general. You might also get this kind of problem if you had updated the cluster software, so that the cman on one node is incompatible with the cman on the other. Ordinarily, there are no problems or incompatibilities with upgrading, but if you upgraded cman from RHEL4U1 to RHEL4U4, for example, you might get this because the cman protocol changed slightly between RHEL4U1 and U2. Next time, it would also be helpful to post what version of the cluster software you're running and possibly snippets from /var/log/messages showing why cman is not connecting. 
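[Editor's note] Bob's checklist above can be condensed into a few commands to run on the surviving node. This is only a sketch for the RHEL4-era Cluster Suite tools: "problemnode" is a placeholder for the fenced node's name as it appears in cluster.conf, and the commands need a live cluster to do anything useful.

```shell
# Acknowledge a manual fence so the cluster stops waiting
# (only needed when fence_manual is the configured fence device).
fence_ack_manual -n problemnode

# Inspect membership and quorum state as seen by the surviving node.
cat /proc/cluster/nodes
cman_tool status

# Confirm basic network connectivity to the rebooted node.
ping -c 3 problemnode

# Compare cluster package versions on both nodes; a cman protocol
# mismatch (e.g. RHEL4U1 vs. RHEL4U4) can keep a node from joining.
rpm -q ccs cman cman-kernel fence rgmanager
```

If cman_tool status shows the node stuck outside the cluster while versions match and the network is fine, the next place to look is /var/log/messages on both nodes, as Bob suggests.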
Regards, Bob Peterson Red Hat Cluster Suite From hlawatschek at atix.de Mon Dec 4 15:39:47 2006 From: hlawatschek at atix.de (Mark Hlawatschek) Date: Mon, 4 Dec 2006 16:39:47 +0100 Subject: [Linux-cluster] Need help, will pay! In-Reply-To: <200612316127.011845@leena> References: <200612316127.011845@leena> Message-ID: <200612041639.47356.hlawatschek@atix.de> Hi Mike(?), I'm sure, we could help you getting your system up and running. How much do you want to afford for your installation/support flat rate ? BTW: here's a link to our reference story for web service clustering: http://www.redhat.com/magazine/021jul06/features/gfs_update/ go to section "Reference: Munich International Trade Fairs". Thanks, Mark On Sunday 03 December 2006 23:01, isplist at logicore.net wrote: > Sorry if this is a double post, just can't tell if it made it out. > > --- > > I don't know if this is allowed but I do hope you'll allow me to post this > or suggest where I could get some good help. > > I BADLY need to get my project working. If it works out and affordable, I'd > like to be able to ask for more help when needed, ongoing. I am not a > corporation and don't have deep pockets so if you only know how to bid for > those types, I'll be out of your league. I need a flat rate price so that I > can budget for it. > While I love learning and doing all these things, I just can't do it all in > a time efficient manner... I need help!!! > > Would someone be interested in making a few bucks to get the final bits of > my project working right. Mostly, it involved getting my load balancing > (LVS) going and GFS straightened out. > > Here's the setup; > > All servers have a dual NIC (LB's and servers) which should help to make > things simple. > > WEB; > 3 web servers which need a load balanced front end for users to get in, > including making sure that sessions aren't lost, etc. Nothing too special > that I can think of here other than session loss. 
The application is mostly > joomla/mambo based for web services. > All servers share GFS mounted storage. On GFS, just need to make sure > entire cluster does not blow up every time something goes wrong. Using > Brocade switches for fencing. Everything works, just needs fine tuning to > better handle fencing and dead node problems taking the whole thing down. > > MAIL; > 4 Qmail (Qmail-Toaster) servers which require a load balanced front end. I > need users to be able to reach the usual compliment of ports for various > services, webmail, pop, imap, smtp, etc. > Need to know what is the best way of load balancing mail services. One > server handles all outgoing, one all incoming, not sure. All servers share > GFS storage. Same as above, it's all set up, it all works but since the > learning curve and time spent on this has been so high, I just cannot > afford to keep trying to figure out the last little bits. I need help to > get this done once and for all. > > MySQL; > 4 MySQL servers installed, ready to work together. Again, all share GFS > storage, need to get fencing working so there aren't any blow ups. Need to > have load balanced MySQL working in the simplest manner possible so that I > can maintain them later. Getting into complicated MySQL setup's won't work > for me. I've read that some folks have been able to use GFS to share common > storage. Since I'm so far behind on all the other parts, I've not even > started on this yet. One of the servers currently does it all so load > balancing these had not even begun. GFS is in place and ready to go here > also. > > Load Balancing (LVS); > I'm using LVS with Piranha. It seems simple but I keep missing something > and getting help over email has just led to frustration for those nice > enough to help me. I cannot afford to be told how to do it, I need someone > to just do it and I'll learn by seeing it done. > > I have LVS servers ready to configure for Web and Qmail services. 
I'll > build another for the private/internal MySQL section. Once everything is in > place, I will later build the matching redundant server for each section, > Web/Qmail and MySQL. > I prefer to keep the Web, Mail and MySQL LVS front ends as separate servers > for future flexibility and performance rather than handling all services on > one pair. > > Everything is in place... just need the configuring and fine tuning done so > that I can finally feel secure enough to fire up the services for public > access. Right now, not only is LVS missing but it's too easy to lose the > GFS storage when something screws up. > > Please, contact me if you can help me. Thank you. > > > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster -- Gruss / Regards, Dipl.-Ing. Mark Hlawatschek Phone: +49-89 452 3538 15 http://www.atix.de/ http://www.open-sharedroot.org/ ** ATIX - Ges. fuer Informationstechnologie und Consulting mbH Einsteinstr. 10 - 85716 Unterschleissheim - Germany From eftychios.eftychiou at gmail.com Mon Dec 4 18:28:36 2006 From: eftychios.eftychiou at gmail.com (Eftychios Eftychiou) Date: Mon, 4 Dec 2006 20:28:36 +0200 Subject: [Linux-cluster] Is this possible? Message-ID: I want to setup a Cluster managed by CMAN and related components having 3 nodes , 2 running the same service(application) and 1 on standby. The service to be run is not cluster aware but we do not really care since all the data is stored on an Oracle Rac that is being accessed by the 2 nodes running the application. If one of the nodes fails then cman can move the service to the 3. On top of that I want to use Piranha to load balance incoming Network requests to the cluster. 
I have set up more or less a 2 node cluster using CMAN (no fencing since this is still for research purposes) and can move services from one node to the other (process is still a bit flaky since our startup scripts are not LSB compliant) but so far did not figure out how I can force CMAN to start the same service on all the nodes. After going through the available documentation I came upon the following in the FAQ: "RHCS doesn't let you start the same service multiple times" along with the subsequent explanation. Anyway, to cut the story short: I understand the explanation and the reasoning behind this, however I do see a need for managing active-active services for systems that are not cluster aware but do not really care whether they are in a cluster as such. Now to my question. Is it possible for CMAN to make it start the same application on for example 2 nodes and have as a failover a third or more nodes? Perhaps I am complicating things a bit more than required or have a misconception somewhere. I would appreciate any sort of feedback and recommendations. Regards, Eftychios Eftychiou -------------- next part -------------- An HTML attachment was scrubbed... URL: From lhh at redhat.com Mon Dec 4 18:49:22 2006 From: lhh at redhat.com (Lon Hohberger) Date: Mon, 04 Dec 2006 13:49:22 -0500 Subject: [Linux-cluster] Is this possible? In-Reply-To: References: Message-ID: <1165258162.20281.6.camel@rei.boston.devel.redhat.com> On Mon, 2006-12-04 at 20:28 +0200, Eftychios Eftychiou wrote: > Is it possible for CMAN to make it start the same application on for > example 2 nodes and have as a failover a third or more nodes? Well, cman doesn't really do that, but yes... Something like this:
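[Editor's note] One common way to express this with rgmanager is two separately named services, each pinned to a restricted, ordered failover domain containing one primary node plus the shared standby. The cluster.conf fragment below is a sketch using hypothetical node and script names, and is not necessarily the exact configuration Lon went on to describe:

```xml
<rm>
  <failoverdomains>
    <!-- Each domain holds one primary node plus the shared standby node3. -->
    <failoverdomain name="dom1" ordered="1" restricted="1">
      <failoverdomainnode name="node1" priority="1"/>
      <failoverdomainnode name="node3" priority="2"/>
    </failoverdomain>
    <failoverdomain name="dom2" ordered="1" restricted="1">
      <failoverdomainnode name="node2" priority="1"/>
      <failoverdomainnode name="node3" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <!-- Two copies of the "same" application, defined as two services. -->
  <service name="app-instance1" domain="dom1" autostart="1">
    <script name="app1" file="/etc/init.d/myapp"/>
  </service>
  <service name="app-instance2" domain="dom2" autostart="1">
    <script name="app2" file="/etc/init.d/myapp"/>
  </service>
</rm>
```

With this layout, app-instance1 normally runs on node1 and app-instance2 on node2; if either primary fails, rgmanager relocates that instance to the standby node3.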