From brentonr at dorm.org Wed Mar 1 08:17:27 2006
From: brentonr at dorm.org (Brenton D. Rothchild)
Date: Wed, 01 Mar 2006 02:17:27 -0600
Subject: Differing ports between virtual and real servers
Message-ID: <44055897.4080604@dorm.org>

Hello all,

I know this has been asked before, and was mentioned as a "possible future
feature" (see
https://www.redhat.com/archives/piranha-list/2002-April/msg00009.html),
but I wanted to ask again: I have a need (well, really a want) to route
virtual servers to different ports on the real servers, i.e.

Service A
  virtual 192.168.1.1:443 -> real 10.10.10.8:1001
                             real 10.10.10.9:1001
  virtual 192.168.1.2:443 -> real 10.10.10.8:1002
                             real 10.10.10.9:1002

As you can probably guess, this is in reference to an HTTPS setup; I want
to avoid a ton of "real IP pools" in the 10.10.10.x network.

Looking at the source (0.8.1 src rpm), it looks like lvsd already stores a
separate port in the lvsService struct, so could it be as simple as:

1) adding a "port" option to the real server block of the lvs.cf file and
   setting that value in the lvsService structs
2) passing this real-server port (as opposed to the virtual-service port)
   to nanny to use with the "ipvsadm -a -t -r" command (as in
   "ipvsadm -a -t virt_host:virt_port -r real_host:real_port")

I'm interested in patching my own setup to try it out, and I'd like to
hear first whether I'm completely off-base, if possible.

Thanks!
-Brenton Rothchild
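For reference, a minimal sketch (not actual piranha output) of the ipvsadm
calls that step 2 above would have nanny issue for the mappings in the
example; the wlc scheduler and the weights are placeholders, not anything
piranha would necessarily choose:

/sbin/ipvsadm -A -t 192.168.1.1:443 -s wlc
/sbin/ipvsadm -a -t 192.168.1.1:443 -r 10.10.10.8:1001 -m -w 1
/sbin/ipvsadm -a -t 192.168.1.1:443 -r 10.10.10.9:1001 -m -w 1

/sbin/ipvsadm -A -t 192.168.1.2:443 -s wlc
/sbin/ipvsadm -a -t 192.168.1.2:443 -r 10.10.10.8:1002 -m -w 1
/sbin/ipvsadm -a -t 192.168.1.2:443 -r 10.10.10.9:1002 -m -w 1

One caveat: ipvsadm only honors a real-server port that differs from the
virtual port when the forwarding method is masquerading/NAT (-m); with
direct routing (-g) or tunneling (-i) the packets are not rewritten, so the
real-server port has to match the virtual port.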
From nattaponv at hotmail.com Fri Mar 3 04:20:39 2006
From: nattaponv at hotmail.com (nattapon viroonsri)
Date: Fri, 03 Mar 2006 04:20:39 +0000
Subject: Lvs dont remove route automatic (squid+direct routing)
Message-ID:

## My config is shown below

RHEL 4 update 1
ipvsadm-1.24-6
piranha-0.8.1-1

service: Squid
forward method: Direct routing
schedule: lblc
persistence: 360

director:
eth0:   172.16.100.37
eth0:1: 172.16.100.36 (virtual ip)

realserver1 (cache1):
eth0: 172.16.100.39
eth1: 172.16.100.36 (virtual ip)

## hide virtual ip on cache1
arptables -A IN -d 172.16.100.36 -j DROP
arptables -A OUT -d 0/0 -j mangle --mangle-ip-s 172.16.100.39

realserver2 (cache2):
eth0: 172.16.100.40
eth1: 172.16.100.36 (virtual ip)

## hide virtual ip on cache2
arptables -A IN -d 172.16.100.36 -j DROP
arptables -A OUT -d 0/0 -j mangle --mangle-ip-s 172.16.100.40

## check.sh
#!/bin/bash
# probe squid on the real server passed in by nanny as %h
echo -e "GET / HTTP/1.0\r\n\r\n" | nc $1 8080 > /dev/null
if [ $? -eq 0 ]; then
    echo "OK"
else
    echo "FAIL"
    # manual workaround: drop the failed real server from the LVS table
    /sbin/ipvsadm -d -t 172.16.100.36:8080 -r $1
fi

## lvs.cf
serial_no = 123
primary = 172.16.100.37
service = lvs
backup_active = 0
backup = 172.16.100.38
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = direct
nat_nmask = 255.255.255.0
debug_level = NONE
monitor_links = 1
virtual PROXY {
     active = 1
     address = 172.16.100.36 eth0:1
     vip_nmask = 255.255.255.0
     port = 8080
     persistent = 360
     expect = "OK"
     use_regex = 0
     send_program = "/etc/sysconfig/ha/check.sh %h"
     load_monitor = none
     scheduler = lblc
     protocol = tcp
     timeout = 5
     reentry = 10
     quiesce_server = 1
     server Cache1 {
         address = 172.16.100.39
         active = 1
         weight = 200
     }
     server Cache2 {
         address = 172.16.100.40
         active = 1
         weight = 200
     }
}

When nanny can't connect to a failed service (squid) on a real server, it
doesn't remove the route to that real server, so the director keeps
forwarding requests to the failed squid real server.

So I tried to remove the route manually in check.sh when nanny detects the
failed service, but that only worked for about 20 minutes, and then clients
couldn't connect to the virtual IP at all.

Is lvsd supposed to remove the route to a failed node automatically when
nanny detects a failed service? Or is there any way to make LVS remove the
route to the failed node automatically when nanny detects the failure?

Regards,
Nattapon

From tbushart at nycap.rr.com Fri Mar 3 05:19:18 2006
From: tbushart at nycap.rr.com (Timothy Bushart)
Date: Fri, 3 Mar 2006 00:19:18 -0500
Subject: Lvs dont remove route automatic (squid+direct routing)
In-Reply-To:
Message-ID:

quiesce_server = 0
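The one-line answer above makes more sense with quiesce spelled out. As far
as I can tell from nanny's behaviour, with quiesce_server = 1 a failed real
server is only quiesced (its weight is set to 0) rather than removed, and
because the PROXY service above uses persistence (360s) plus lblc, clients
that already have an assignment keep being sent to the dead squid. A rough
sketch of the effective ipvsadm operations in each mode, using the
addresses from the config above (not the literal commands nanny runs):

# quiesce_server = 1: keep the entry, zero its weight
/sbin/ipvsadm -e -t 172.16.100.36:8080 -r 172.16.100.39:8080 -g -w 0

# quiesce_server = 0: delete the entry outright
/sbin/ipvsadm -d -t 172.16.100.36:8080 -r 172.16.100.39:8080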
From nattaponv at hotmail.com Sat Mar 4 13:26:35 2006
From: nattaponv at hotmail.com (nattapon viroonsri)
Date: Sat, 04 Mar 2006 13:26:35 +0000
Subject: Lvs dont remove route automatic (squid+direct routing)
In-Reply-To:
Message-ID:

> quiesce_server = 0

That works. But isn't quiesce_server meant for when a node is added, so
that request sessions don't overwhelm the new node? Why does enabling
quiesce_server keep the routes from being handled properly?

Regards,
nattapon

From tbushart at nycap.rr.com Sat Mar 4 19:32:39 2006
From: tbushart at nycap.rr.com (Timothy Bushart)
Date: Sat, 4 Mar 2006 14:32:39 -0500
Subject: Lvs dont remove route automatic (squid+direct routing)
In-Reply-To:
Message-ID:

I'm not sure; maybe you're supposed to set quiesce_server = 1 when you
introduce a new real server, to manage the traffic, and then set it back
to 0 once things level off.

Thanks
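A quick way to confirm which behaviour is in effect while testing the two
settings is to look at the director's own view of the service (option
spellings as in ipvsadm 1.24; a sketch):

/sbin/ipvsadm -L -n       # virtual services, real servers, weights
/sbin/ipvsadm -L -n -c    # connection entries, including persistence templates

A quiesced real server stays in the first listing with weight 0, and
clients that already hold a persistence template for it keep showing up in
the second; a deleted real server disappears from the first listing
immediately and from the second as its connection entries expire.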
From alfeijoo at cesga.es Mon Mar 6 15:49:40 2006
From: alfeijoo at cesga.es (Alejandro Feijoo)
Date: Mon, 6 Mar 2006 16:49:40 +0100 (CET)
Subject: may be problem on NAT?
Message-ID: <43047.193.144.44.59.1141660180.squirrel@webmail.cesga.es>

I want to create a cluster using piranha, but I only have 2 servers: one
as LVS router, one as LVS backup, and both as real servers.

The configuration is as follows:

ProliantA:
  public ip:          193.144.34.XXX
  private ip:         10.1.1.1
  virtual public ip:  193.144.34.ZZZ
  virtual private ip: 10.1.1.3
ProliantB:
  public ip:          193.144.34.YYY
  private ip:         10.1.1.2

I ran piranha and made the following config:

serial_no = 143
primary = 193.144.34.XXX
primary_private = 10.1.1.1
service = lvs
backup_active = 1
backup = 193.144.34.YYY
backup_private = 10.1.1.2
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = nat
nat_router = 10.1.1.3 eth1:1
nat_nmask = 255.255.255.240
debug_level = NONE
monitor_links = 1
virtual WEB {
     active = 1
     address = 193.144.34.ZZZ eth0:1
     vip_nmask = 255.255.255.0
     port = 80
     send = "GET / HTTP/1.0\r\n\r\n"
     expect = "HTTP"
     use_regex = 0
     load_monitor = ruptime
     scheduler = rr
     protocol = tcp
     timeout = 6
     reentry = 15
     quiesce_server = 0
     server proliantA {
         address = 10.1.1.1
         active = 1
         weight = 2000
     }
     server proliantB {
         address = 10.1.1.2
         active = 1
         weight = 1000
     }
}

But the result is that NAT does not work: when I activate pulse on both
servers, eth0:1 does not start properly, and if a request to
193.144.34.ZZZ is passed to the other server, the service does not work.

Can anyone help me?

Thanks! And sorry for my English.

++-------------------------++
Alejandro Feijóo Fraga
Técnico de Sistemas.
Centro de supercomputación de Galicia
Avda. de Vigo s/n. Campus Sur.
15705 - Santiago de Compostela. Spain
Tlfn.: 981 56 98 10 Extension: 216
Fax: 981 59 46 16

From alfeijoo at cesga.es Mon Mar 6 16:24:37 2006
From: alfeijoo at cesga.es (Alejandro Feijoo)
Date: Mon, 6 Mar 2006 17:24:37 +0100 (CET)
Subject: AW: may be problem on NAT?
In-Reply-To: <9E502032F5FE0A49B87DF01026259EF0032D7F6E@swelo11mxs.welo.corp00.com>
References: <9E502032F5FE0A49B87DF01026259EF0032D7F6E@swelo11mxs.welo.corp00.com>
Message-ID: <34122.193.144.44.59.1141662277.squirrel@webmail.cesga.es>

Well, I don't understand this very well... I create a bond0 on the
servers, but with what IP? Can you explain a bit more, please? And if you
can, could you give an example?

Many thanks!

> Hi,
>
> I also had some trouble creating a two-node LVS cluster. Currently I'm
> using the direct routing method on two nodes. I made two bonding
> interfaces in the same IP subsegment. Why aren't you using the same
> method - direct routing - or is there a reason you have to use NAT?
>
> If you are using the direct routing method you must enter an iptables
> rule on the real-server interface; in my case that was the bond0
> interface.
>
> E.g.: iptables -t nat -A PREROUTING -i bond0 (or eth0/eth1) -p tcp
>       -d 193.144.34.ZZZ --dport 80 -j REDIRECT
>       (what application is running on the real servers?)
>
> I hope this summary is useful for you!
>
> Regards,
> Philipp

++-------------------------++
Alejandro Feijóo Fraga
Técnico de Sistemas.
Centro de supercomputación de Galicia
Avda. de Vigo s/n. Campus Sur.
15705 - Santiago de Compostela. Spain
Tlfn.: 981 56 98 10 Extension: 216
Fax: 981 59 46 16
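For reference, the real-server-side rule Philipp describes, written out in
one piece for the VIP used in this thread (a sketch; 193.144.34.ZZZ and
port 80 are the placeholders from the messages above):

# On each real server: accept packets addressed to the VIP locally without
# configuring the VIP on an interface, so no ARP suppression is needed for
# this approach.
/sbin/iptables -t nat -A PREROUTING -p tcp -d 193.144.34.ZZZ --dport 80 -j REDIRECT

# Persist the rule across reboots (Red Hat init scripts):
/sbin/service iptables save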
From alfeijoo at cesga.es Mon Mar 6 17:27:21 2006
From: alfeijoo at cesga.es (Alejandro Feijoo)
Date: Mon, 6 Mar 2006 18:27:21 +0100 (CET)
Subject: not work :(
Message-ID: <39087.193.144.44.59.1141666041.squirrel@webmail.cesga.es>

The following config:

serial_no = 13
primary = 193.144.34.188
service = lvs
backup_active = 1
backup = 193.144.34.189
heartbeat = 1
heartbeat_port = 539
keepalive = 3
deadtime = 18
network = direct
debug_level = NONE
monitor_links = 1
virtual HTTPD {
     active = 1
     address = 193.144.34.190 bond0
     vip_nmask = 255.255.255.255
     port = 80
     send = "GET / HTTP/1.0\r\n\r\n"
     expect = "HTTP"
     use_regex = 0
     load_monitor = none
     scheduler = rr
     protocol = tcp
     timeout = 4
     reentry = 10
     quiesce_server = 1
     server proliantA {
         address = 10.1.1.1
         active = 1
         weight = 1
     }
     server proliantB {
         address = 10.1.1.2
         active = 1
         weight = 1
     }
}

ProliantA ifconfig:

bond0  Link encap:Ethernet  HWaddr 00:00:00:00:00:00
       inet addr:193.144.34.190  Bcast:193.144.34.255  Mask:255.255.255.255
eth0   Link encap:Ethernet  HWaddr 00:15:60:55:7D:D1
       inet addr:193.144.34.188  Bcast:193.144.34.255  Mask:255.255.255.0
eth1   Link encap:Ethernet  HWaddr 00:15:60:55:7D:D0
       inet addr:10.1.1.1  Bcast:10.1.1.255  Mask:255.255.255.0

ProliantB ifconfig:

bond0  Link encap:Ethernet  HWaddr 00:00:00:00:00:00
       inet addr:193.144.34.190  Bcast:193.144.34.255  Mask:255.255.255.255
eth0   Link encap:Ethernet  HWaddr 00:15:60:55:4E:83
       inet addr:193.144.34.189  Bcast:193.144.34.255  Mask:255.255.255.0
eth1   Link encap:Ethernet  HWaddr 00:15:60:55:4E:82
       inet addr:10.1.1.2  Bcast:10.1.1.255  Mask:255.255.255.0

And on both, iptables:

iptables -t nat -A PREROUTING -i bond0 -p tcp -d 193.144.34.190 --dport 80 -j REDIRECT

The result:

Mar  6 18:26:34 proliantA nanny[8253]: Failed to connect 10.1.1.1:80 ping socket: Connection refused
Mar  6 18:26:34 proliantA nanny[8253]: avail: 0 active: 0: count: 0
Mar  6 18:26:34 proliantA nanny[8254]: Opening TCP socket to remote service port 80...
Mar  6 18:26:34 proliantA nanny[8254]: Connecting socket to remote address...
Mar  6 18:26:34 proliantA nanny[8254]: DEBUG -- Posting CONNECT poll()
Mar  6 18:26:34 proliantA nanny[8254]: Failed to connect 10.1.1.2:80 ping socket: No route to host
Mar  6 18:26:34 proliantA nanny[8254]: avail: 0 active: 0: count: 0

Can anyone help me?

Thanks!

From philipp.rusch at gw-world.com Tue Mar 7 08:08:56 2006
From: philipp.rusch at gw-world.com (Rusch Philipp pru09)
Date: Tue, 7 Mar 2006 09:08:56 +0100
Subject: AW: not work :(
Message-ID: <9E502032F5FE0A49B87DF01026259EF0032D7F75@swelo11mxs.welo.corp00.com>

Did you enable IP forwarding in /etc/sysctl.conf? The entry should be as
below:

net.ipv4.ip_forward = 1

You can also:

echo 1 > /proc/sys/net/ipv4/ip_forward
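One more note on the two nanny errors quoted above: they come from the
health checks the director makes directly to the real-server addresses, so
ip_forward affects the forwarding of client traffic but not these checks.
A quick way to reproduce them by hand, reusing the same kind of probe used
elsewhere in this thread (a sketch):

# From the director: "Connection refused" usually means nothing is
# listening on that address:port; "No route to host" is often an iptables
# REJECT on the real server (or a genuinely missing route).
echo -e "GET / HTTP/1.0\r\n\r\n" | nc 10.1.1.1 80
echo -e "GET / HTTP/1.0\r\n\r\n" | nc 10.1.1.2 80

# On each real server: is the web server bound to the address nanny probes,
# and is there a firewall rule rejecting port 80?
netstat -tln | grep ':80'
/sbin/iptables -L -n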
From alfeijoo at cesga.es Tue Mar 7 18:02:56 2006
From: alfeijoo at cesga.es (Alejandro Feijoo)
Date: Tue, 7 Mar 2006 19:02:56 +0100 (CET)
Subject: cman for CS
Message-ID: <39519.193.144.44.59.1141754576.squirrel@webmail.cesga.es>

Hi, I have Linux kernel version 2.6.9.22.0.2 (the latest!), but the cman
available for download is cman-kernel-2.6.9-39.8.src.rpm...

Is there any problem if I install that cman? And where is the RPM for
kernel 2.6.9-39.8?

Thanks!

++-------------------------++
Alejandro Feijóo Fraga
Técnico de Sistemas.
Centro de supercomputación de Galicia
Avda. de Vigo s/n. Campus Sur.
15705 - Santiago de Compostela. Spain
Tlfn.: 981 56 98 10 Extension: 216
Fax: 981 59 46 16

From lhh at redhat.com Tue Mar 7 18:16:33 2006
From: lhh at redhat.com (Lon Hohberger)
Date: Tue, 07 Mar 2006 13:16:33 -0500
Subject: cman for CS
In-Reply-To: <39519.193.144.44.59.1141754576.squirrel@webmail.cesga.es>
References: <39519.193.144.44.59.1141754576.squirrel@webmail.cesga.es>
Message-ID: <1141755393.25169.82.camel@ayanami.boston.redhat.com>

On Tue, 2006-03-07 at 19:02 +0100, Alejandro Feijoo wrote:
> Is there any problem if I install that cman? And where is the RPM for
> kernel 2.6.9-39.8?

You might want to ask this on linux-cluster instead of piranha-list ;)

-- Lon
From alfeijoo at cesga.es Wed Mar 8 17:30:42 2006
From: alfeijoo at cesga.es (Alejandro Feijoo)
Date: Wed, 8 Mar 2006 18:30:42 +0100 (CET)
Subject: no posible make available
Message-ID: <56338.193.144.44.59.1141839042.squirrel@webmail.cesga.es>

Hi. I have 2 nodes with direct routing and the following configuration:

MASTER
bond0   Link encap:Ethernet  HWaddr 00:15:60:55:7D:D1
        inet addr:193.144.34.188  Bcast:193.144.34.255  Mask:255.255.255.0
eth0    Link encap:Ethernet  HWaddr 00:15:60:55:7D:D1
        inet6 addr: fe80::215:60ff:fe55:7dd1/64 Scope:Link
lo      Link encap:Local Loopback
        inet addr:127.0.0.1  Mask:255.0.0.0

SLAVE
bond0   Link encap:Ethernet  HWaddr 00:15:60:55:4E:83
        inet addr:193.144.34.189  Bcast:193.144.34.255  Mask:255.255.255.0
bond0:1 Link encap:Ethernet  HWaddr 00:15:60:55:4E:83
        inet addr:193.144.34.190  Bcast:193.144.34.255  Mask:255.255.255.0
eth0    Link encap:Ethernet  HWaddr 00:15:60:55:4E:83
        inet6 addr: fe80::215:60ff:fe55:4e83/64 Scope:Link
lo      Link encap:Local Loopback
        inet addr:127.0.0.1  Mask:255.0.0.0

LVS.conf:

serial_no = 30
primary = 193.144.34.188
service = lvs
backup_active = 1
backup = 193.144.34.189
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = direct
debug_level = NONE
monitor_links = 1
virtual WEB {
     active = 1
     address = 193.144.34.190 bond0:1
     port = 80
     send = "GET / HTTP/1.0\r\n\r\n"
     expect = "HTTP"
     use_regex = 0
     load_monitor = none
     scheduler = rr
     protocol = tcp
     timeout = 6
     reentry = 15
     quiesce_server = 0
     server prolianta {
         address = 193.144.34.188
         active = 1
         weight = 3
     }
     server proliantb {
         address = 193.144.34.189
         active = 1
         weight = 1
     }
}

But when pulse starts, neither node can ever make the other one available.
For example, if the backup (193.144.34.189) starts, the log gives me this:

Mar  8 18:25:44 proliantb nanny[5134]: avail: 1 active: 0: count: 3
Mar  8 18:25:44 proliantb nanny[5134]: making 193.144.34.188:80 available
Mar  8 18:25:44 proliantb nanny[5135]: avail: 1 active: 0: count: 3
Mar  8 18:25:44 proliantb nanny[5135]: making 193.144.34.189:80 available
Mar  8 18:25:50 proliantb nanny[5134]: avail: 1 active: 1: count: 4

I think it only keeps one and the other gets lost... any idea?

Thanks!

++-------------------------++
Alejandro Feijóo Fraga
Técnico de Sistemas.
Centro de supercomputación de Galicia
Avda. de Vigo s/n. Campus Sur.
15705 - Santiago de Compostela. Spain
Tlfn.: 981 56 98 10 Extension: 216
Fax: 981 59 46 16
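A possibly relevant detail in the log excerpt above: it is taken from
proliantb, the configured backup, and nanny is running there. If I read
pulse's division of labour correctly, lvsd and the nanny processes should
only run on the currently active director, so it may be worth checking
whether both nodes believe they are active before digging into the
per-real-server messages (a sketch; treat the "nanny only on the active
node" assumption as exactly that):

# Run on each node: does this node hold the VIP, and which piranha daemons
# are running here?
/sbin/ifconfig | grep -B1 '193.144.34.190'
ps ax | egrep 'pulse|lvsd|nanny' | grep -v grep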
From eng_ak at link.net Wed Mar 15 14:18:52 2006
From: eng_ak at link.net (Ahmed Kamal)
Date: Wed, 15 Mar 2006 16:18:52 +0200
Subject: Simple architecture question
Message-ID: <4418224C.8030307@link.net>

Hi everyone,

I am new to piranha. I have read many guides. My question is about the
need for an extra machine (usually called a director):

1- As far as I understand, a director machine is needed for an LVS
   cluster. (Using a standard configuration, no need for fancy tricks.)
2- Would I need this extra machine for a 'failover' cluster as well?

Kindly let me know.
Best Regards

From alfeijoo at cesga.es Wed Mar 15 15:38:18 2006
From: alfeijoo at cesga.es (Alejandro Feijoo)
Date: Wed, 15 Mar 2006 16:38:18 +0100 (CET)
Subject: Simple architecture question
In-Reply-To: <4418224C.8030307@link.net>
References: <4418224C.8030307@link.net>
Message-ID: <33896.193.144.44.59.1142437098.squirrel@webmail.cesga.es>

Hi.

You always need a director machine, because that server is the one that
redirects traffic to the other servers/slaves. If you mean a backup
director machine, that is not necessary, but it is recommended.

++-------------------------++
Alejandro Feijóo Fraga
Técnico de Sistemas.
Centro de supercomputación de Galicia
Avda. de Vigo s/n. Campus Sur.
15705 - Santiago de Compostela. Spain
Tlfn.: 981 56 98 10 Extension: 216
Fax: 981 59 46 16

From brentonr at dorm.org Fri Mar 31 17:25:19 2006
From: brentonr at dorm.org (Brenton Rothchild)
Date: Fri, 31 Mar 2006 11:25:19 -0600
Subject: Firewall marks not working as expected?
Message-ID: <442D65FF.5090205@dorm.org>

Hi,

I'm trying to get piranha (0.8.1) in an LVS-NAT setup to use firewall
marks to bundle HTTP and HTTPS traffic together with persistence.
However, when pulse starts up, I'm getting messages like this (when using
"/usr/sbin/pulse -c /etc/sysconfig/ha/lvs.cf -n -v"):

lvs: starting virtual service server_119_129_https active: 443
Service already exists
lvs: ipvsadm failed for virtual server server_119_129_https!

The ipvsadm command seems to work for the first entry (port 80, see lvs.cf
below), and nanny processes are started for each of the 3 real servers for
the HTTP service, but the second entry (for port 443) isn't started, due
to ipvsadm failing as shown in the log messages above.
Looking at the code, it looks like when using firewall marks, the ipvsadm
command isn't getting a full "ip_address:port" definition, as in:

if (vserver->fwmark) {
    *arg++ = (char *) "-f";
    (void) sprintf (fwmNum, "%d", vserver->fwmark);
    *arg++ = fwmNum;
} else {
    switch (vserver->protocol) {
    case IPPROTO_UDP:
        *arg++ = (char *) "-u";
        break;
    case IPPROTO_TCP:
    default:
        *arg++ = (char *) "-t";
        break;
    }
    sprintf (virtAddress, "%s:%d",
             inet_ntoa (vserver->virtualAddress), vserver->port);
    *arg++ = virtAddress;
}

I'm trying to use the examples given in the RHCS manual, as per
http://www.redhat.com/docs/manuals/csgfs/browse/rh-cs-en/s1-lvs-multi.html:

/sbin/iptables -t mangle -A PREROUTING -p tcp \
    -d 10.0.119.129/32 --dport 80 -j MARK --set-mark 80
/sbin/iptables -t mangle -A PREROUTING -p tcp \
    -d 10.0.119.129/32 --dport 443 -j MARK --set-mark 80

After trying the ipvsadm commands from the command line myself, the
problem appears to be that since both services are defined only by
"-f 80", ipvsadm assumes they are the same service and issues the
"Service already exists" message, as in:

# Using fwmarks
/sbin/ipvsadm -A -f 80 -s wlc -p 60 -M 255.255.255.255
/sbin/ipvsadm -A -f 80 -s wlc -p 60 -M 255.255.255.255

# versus:

# Not using fwmarks
/sbin/ipvsadm -A -t 10.0.119.129:80 -s wlc -p 60 -M 255.255.255.255
/sbin/ipvsadm -A -t 10.0.119.129:443 -s wlc -p 60 -M 255.255.255.255

Here is my lvs.cf file:

service = lvs
primary = 10.0.0.1
backup = 10.0.0.2
backup_active = 1
heartbeat = 1
heartbeat_port = 1050
keepalive = 6
deadtime = 18
rsh_command = ssh
network = nat
nat_router = 192.168.15.254 eth0:1
virtual server_119_129_http {
     address = 10.0.119.129 eth1:129
     active = 1
     load_monitor = uptime
     timeout = 5
     reentry = 10
     port = 80
     send = "GET / HTTP/1.0\r\n\r\n"
     expect = "HTTP"
     scheduler = wlc
     persistent = 60
     pmask = 255.255.255.255
     fwmark = 80
     protocol = tcp
     server app-1 {
         address = 192.168.5.1
         active = 1
         weight = 1
     }
     server app-2 {
         address = 192.168.5.2
         active = 1
         weight = 1
     }
     server app-3 {
         address = 192.168.5.3
         active = 1
         weight = 1
     }
}
virtual server_119_129_https {
     address = 10.0.119.129 eth1:129
     active = 1
     load_monitor = uptime
     timeout = 5
     reentry = 10
     port = 443
     send = "GET / HTTP/1.0\r\n\r\n"
     expect = "HTTP"
     scheduler = wlc
     persistent = 60
     pmask = 255.255.255.255
     fwmark = 80
     protocol = tcp
     server app-1 {
         address = 192.168.5.1
         active = 1
         weight = 1
     }
     server app-2 {
         address = 192.168.5.2
         active = 1
         weight = 1
     }
     server app-3 {
         address = 192.168.5.3
         active = 1
         weight = 1
     }
}

Am I doing something completely wrong here? Anyone have any suggestions?

Thanks!
-Brenton Rothchild
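In ipvsadm terms the behaviour above is expected: a firewall mark replaces
the address:port pair as the identity of the virtual service, so two
virtual services carrying the same mark collapse into one, which is what
the "Service already exists" error reflects. Below is a sketch of how the
HTTP+HTTPS bundle looks when expressed directly with ipvsadm, with the
scheduler, persistence and weights copied from the commands above; how to
spell the equivalent single definition in piranha 0.8.1's lvs.cf is an
assumption I have not verified:

# Both ports are marked with the same value...
/sbin/iptables -t mangle -A PREROUTING -p tcp \
    -d 10.0.119.129/32 --dport 80 -j MARK --set-mark 80
/sbin/iptables -t mangle -A PREROUTING -p tcp \
    -d 10.0.119.129/32 --dport 443 -j MARK --set-mark 80

# ...and the virtual service is created once, keyed on the mark.
/sbin/ipvsadm -A -f 80 -s wlc -p 60 -M 255.255.255.255

# For fwmark services the real-server port may be omitted; the client's
# original destination port (80 or 443) is kept, which is what makes the
# bundling work under NAT.
/sbin/ipvsadm -a -f 80 -r 192.168.5.1 -m -w 1
/sbin/ipvsadm -a -f 80 -r 192.168.5.2 -m -w 1
/sbin/ipvsadm -a -f 80 -r 192.168.5.3 -m -w 1

On the piranha side this suggests keeping a single virtual block carrying
fwmark = 80 rather than two blocks sharing the mark, but again, that is a
guess about the lvs.cf syntax rather than something tested against 0.8.1.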