From linux_nic at smjkdgs.edu.my Tue May 15 15:34:19 2007
From: linux_nic at smjkdgs.edu.my (Linux Nic)
Date: Tue, 15 May 2007 23:34:19 +0800
Subject: Script to monitor https
Message-ID: <4649D2FB.5090500@smjkdgs.edu.my>

Hi all,

May I know is there any script to monitor HTTPS service ? Currently
me only able to monitor HTTP using the default monitoring script.

Thanks.

From peterbaitz at yahoo.com Tue May 15 15:54:46 2007
From: peterbaitz at yahoo.com (pb)
Date: Tue, 15 May 2007 08:54:46 -0700 (PDT)
Subject: Script to monitor https
In-Reply-To: <4649D2FB.5090500@smjkdgs.edu.my>
Message-ID: <78532.51398.qm@web62403.mail.re1.yahoo.com>

Try stunnel

#!/bin/sh
SEND=$'GET / HTTP/1.0\r\n\r\n'
EXPECT="HTTP"
if echo "$SEND" | stunnel -c -r x.y.z.a:443 2>&1 | grep -q "$EXPECT"; then
    echo OK
else
    echo FAIL
fi

--- Linux Nic wrote:
> Hi all,
>
> May I know is there any script to monitor HTTPS
> service ? Currently
> me only able to monitor HTTP using the default
> monitoring script.
>
> Thanks.
>
> _______________________________________________
> Piranha-list mailing list
> Piranha-list at redhat.com
> https://www.redhat.com/mailman/listinfo/piranha-list

From hadrouj at gmail.com Tue May 15 18:19:53 2007
From: hadrouj at gmail.com (Mohamed HADROUJ (Gmail))
Date: Tue, 15 May 2007 18:19:53 +0000
Subject: Script to monitor https
In-Reply-To: <4649D2FB.5090500@smjkdgs.edu.my>
References: <4649D2FB.5090500@smjkdgs.edu.my>
Message-ID: <3aa9a44e0705151119m57994bc6q84eac0bc47e50283@mail.gmail.com>

Hi All,

is there any script to monitor a Radius Service (port 1812) ? I have a
Radiator server.

Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fapg at eurotux.com Tue May 15 18:46:13 2007
From: fapg at eurotux.com (Fernando A. P. Gomes)
Date: Tue, 15 May 2007 19:46:13 +0100
Subject: Script to monitor https
In-Reply-To: <3aa9a44e0705151119m57994bc6q84eac0bc47e50283@mail.gmail.com>
References: <4649D2FB.5090500@smjkdgs.edu.my> <3aa9a44e0705151119m57994bc6q84eac0bc47e50283@mail.gmail.com>
Message-ID: <200705151946.18136.fapg@eurotux.com>

Hi,

I use nagios-plugins to check radius, https, and many other things.

Regards,
Fernando Gomes

URL: http://mesh.dl.sourceforge.net/sourceforge/nagiosplug/nagios-plugins-1.4.8.tar.gz

-- 
----------------------------------------------------------------------
Fernando Alexandre Peixoto Gomes
E-Mail: fapg at eurotux.com
www.EuRoTuX.com
----------------------------------------------------------------------
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 189 bytes
Desc: not available
URL: 

From herta.vandeneynde at gmail.com Tue May 15 19:23:30 2007
From: herta.vandeneynde at gmail.com (Herta Van den Eynde)
Date: Tue, 15 May 2007 21:23:30 +0200
Subject: Script to monitor https
In-Reply-To: <4649D2FB.5090500@smjkdgs.edu.my>
References: <4649D2FB.5090500@smjkdgs.edu.my>
Message-ID: 

On 15/05/07, Linux Nic wrote:
> Hi all,
>
> May I know is there any script to monitor HTTPS service ? Currently
> me only able to monitor HTTP using the default monitoring script.
>
> Thanks.
If you're referring to the piranha check script, here's one we used:

#!/bin/bash
TEST=`/usr/bin/lynx -head -dump https://$1 2>/dev/null | grep -c "HTTP/1.1 200 OK" `
if [ "$TEST" == "1" ]
then
    echo "OK"
else
    echo "FAIL"
fi

Kind regards,

Herta

From hadrouj at gmail.com Tue May 15 21:36:56 2007
From: hadrouj at gmail.com (Mohamed HADROUJ (Gmail))
Date: Tue, 15 May 2007 21:36:56 +0000
Subject: Script to monitor https
In-Reply-To: 
References: <4649D2FB.5090500@smjkdgs.edu.my>
Message-ID: <3aa9a44e0705151436m62b32203k72e1058db1441755@mail.gmail.com>

Thank you all for your help :)

could you please show me how to call this script from the lvs.conf file ?

Thanks

On 5/15/07, Herta Van den Eynde wrote:
>
> On 15/05/07, Linux Nic wrote:
> > Hi all,
> >
> > May I know is there any script to monitor HTTPS service ? Currently
> > me only able to monitor HTTP using the default monitoring script.
> >
> > Thanks.
>
> If you're referring to the piranha check script, here's one we used:
>
> #!/bin/bash
> TEST=`/usr/bin/lynx -head -dump https://$1 2>/dev/null | grep -c
> "HTTP/1.1 200 OK" `
> if [ "$TEST" == "1" ]
> then
>     echo "OK"
> else
>     echo "FAIL"
> fi
>
> Kind regards,
>
> Herta
>
> _______________________________________________
> Piranha-list mailing list
> Piranha-list at redhat.com
> https://www.redhat.com/mailman/listinfo/piranha-list
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lhh at redhat.com Wed May 16 19:01:24 2007
From: lhh at redhat.com (Lon Hohberger)
Date: Wed, 16 May 2007 15:01:24 -0400
Subject: Script to monitor https
In-Reply-To: <3aa9a44e0705151436m62b32203k72e1058db1441755@mail.gmail.com>
References: <4649D2FB.5090500@smjkdgs.edu.my> <3aa9a44e0705151436m62b32203k72e1058db1441755@mail.gmail.com>
Message-ID: <20070516190124.GH28891@redhat.com>

On Tue, May 15, 2007 at 09:36:56PM +0000, Mohamed HADROUJ (Gmail) wrote:
> >If you're referring to the piranha check script, here's one we used:
> >
> >#!/bin/bash
> >TEST=`/usr/bin/lynx -head -dump https://$1 2>/dev/null | grep -c
> >"HTTP/1.1 200 OK" `
> >if [ "$TEST" == "1" ]
> >then
> >    echo "OK"
> >else
> >    echo "FAIL"
> >fi

expect = "OK"
send_program = "my_script_here"

?

-- 
Lon Hohberger - Software Engineer - Red Hat, Inc.

From herta.vandeneynde at gmail.com Wed May 16 22:09:43 2007
From: herta.vandeneynde at gmail.com (Herta Van den Eynde)
Date: Thu, 17 May 2007 00:09:43 +0200
Subject: Script to monitor https
In-Reply-To: <3aa9a44e0705151436m62b32203k72e1058db1441755@mail.gmail.com>
References: <4649D2FB.5090500@smjkdgs.edu.my> <3aa9a44e0705151436m62b32203k72e1058db1441755@mail.gmail.com>
Message-ID: 

On 15/05/07, Mohamed HADROUJ (Gmail) wrote:
> Thank you all for your help :)
> could you please show me how to call this script from the lvs.conf file ?
> Thanks

The easiest way is to use the gui tool. Cf. the Edit Monitoring
Scripts Subsection on
http://www.redhat.com/docs/manuals/csgfs/browse/rh-cs-en/s1-piranha-virtservs.html

Kind regards,

Herta

P.S. Postings are easier to read if you use bottom-posting.

>
> On 5/15/07, Herta Van den Eynde < herta.vandeneynde at gmail.com> wrote:
> >
> > On 15/05/07, Linux Nic wrote:
> > > Hi all,
> > >
> > > May I know is there any script to monitor HTTPS service ? Currently
> > > me only able to monitor HTTP using the default monitoring script.
> > >
> > > Thanks.
> > If you're referring to the piranha check script, here's one we used:
> >
> > #!/bin/bash
> > TEST=`/usr/bin/lynx -head -dump https://$1 2>/dev/null | grep -c
> > "HTTP/1.1 200 OK" `
> > if [ "$TEST" == "1" ]
> > then
> >     echo "OK"
> > else
> >     echo "FAIL"
> > fi
> >
> > Kind regards,
> >
> > Herta

From hadrouj at gmail.com Fri May 18 12:03:21 2007
From: hadrouj at gmail.com (Mohamed HADROUJ (Gmail))
Date: Fri, 18 May 2007 12:03:21 +0000
Subject: debug_level
Message-ID: <3aa9a44e0705180503s2e7706b4o7ce37a653b47179c@mail.gmail.com>

Hi All,

what are the possible values for the debug_level parameter in the lvs.cf file ?
how can i check the debug messages ? in /var/log/messages ?

Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hadrouj at gmail.com Fri May 18 12:29:50 2007
From: hadrouj at gmail.com (Mohamed HADROUJ (Gmail))
Date: Fri, 18 May 2007 12:29:50 +0000
Subject: debug_level
In-Reply-To: <3aa9a44e0705180503s2e7706b4o7ce37a653b47179c@mail.gmail.com>
References: <3aa9a44e0705180503s2e7706b4o7ce37a653b47179c@mail.gmail.com>
Message-ID: <3aa9a44e0705180529q6de20d16s79c59f830cdd14aa@mail.gmail.com>

Another question please,

I'm getting this error in /var/log/messages :

May 18 12:23:21 lb1 nanny[11333]: Trouble. Recieved results are not what we expected from (10.10.11.20)

the question is : how can I display the results received by Nanny in the log ?

On 5/18/07, Mohamed HADROUJ (Gmail) wrote:
>
> Hi All,
> what are the possible values for the debug_level parameter in the lvs.cf file ?
> how can i check the debug messages ? in /var/log/messages ?
> Thanks
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lhh at redhat.com Fri May 18 14:39:47 2007
From: lhh at redhat.com (Lon Hohberger)
Date: Fri, 18 May 2007 10:39:47 -0400
Subject: debug_level
In-Reply-To: <3aa9a44e0705180529q6de20d16s79c59f830cdd14aa@mail.gmail.com>
References: <3aa9a44e0705180503s2e7706b4o7ce37a653b47179c@mail.gmail.com> <3aa9a44e0705180529q6de20d16s79c59f830cdd14aa@mail.gmail.com>
Message-ID: <20070518143947.GJ28891@redhat.com>

On Fri, May 18, 2007 at 12:29:50PM +0000, Mohamed HADROUJ (Gmail) wrote:
> Another question please,
> I'm getting this error in /var/log/messages :
> May 18 12:23:21 lb1 nanny[11333]: Trouble. Recieved results are not what we
> expected from (10.10.11.20)
>
> the question is : how can I display the results received by Nanny in the log
> ?

Nanny doesn't have a log option that will display that information as
far as I can tell; you'd have to grab the source and patch it.

-- 
Lon Hohberger - Software Engineer - Red Hat, Inc.

From herta.vandeneynde at gmail.com Fri May 18 21:38:17 2007
From: herta.vandeneynde at gmail.com (Herta Van den Eynde)
Date: Fri, 18 May 2007 23:38:17 +0200
Subject: debug_level
In-Reply-To: <20070518143947.GJ28891@redhat.com>
References: <3aa9a44e0705180503s2e7706b4o7ce37a653b47179c@mail.gmail.com> <3aa9a44e0705180529q6de20d16s79c59f830cdd14aa@mail.gmail.com> <20070518143947.GJ28891@redhat.com>
Message-ID: 

On 18/05/07, Lon Hohberger wrote:
> On Fri, May 18, 2007 at 12:29:50PM +0000, Mohamed HADROUJ (Gmail) wrote:
> > Another question please,
> > I'm getting this error in /var/log/messages :
> > May 18 12:23:21 lb1 nanny[11333]: Trouble. Recieved results are not what we
> > expected from (10.10.11.20)
> >
> > the question is : how can I display the results received by Nanny in the log
> > ?
>
> Nanny doesn't have a log option that will display that information as
> far as I can tell; you'd have to grab the source and patch it.
>
> --
> Lon Hohberger - Software Engineer - Red Hat, Inc.

Or simply output it to a file from your test script. E.g.
#!/bin/bash
TEST=`/usr/bin/lynx -head -dump https://$1 2>/dev/null | grep -c "HTTP/1.1 200 OK" `
if [ "$TEST" == "1" ]
then
    echo "OK" | tee output.log
else
    echo "FAIL" | tee output.log
fi

Kind regards,

Herta

From hadrouj at gmail.com Mon May 21 13:25:30 2007
From: hadrouj at gmail.com (Mohamed HADROUJ (Gmail))
Date: Mon, 21 May 2007 13:25:30 +0000
Subject: load balancing problem
Message-ID: <3aa9a44e0705210625n5f28550ao46072dba72dd64bf@mail.gmail.com>

Hi All,

I have some load balancing troubles using lvs/piranha, here is the
description :
when i use the ipvsadm command to view the statistics here what i get :

IP Virtual Server version 1.0.8 (size=65536)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port         Forward Weight ActiveConn InActConn
UDP  10.11.11.230:radius wrr
  -> 10.11.11.20:radius         Masq    1      0          0
  -> 10.11.11.19:radius         Masq    1      0          1

the load balancer keeps forwarding the request to only one server. in
addition of that the counter of the connection established is always equal
to 1.
i've checked also in /var/log/messages, both servers are seen as available.
here is my config file :

backup_active = 1
backup = 10.11.11.229
backup_private = 10.11.11.121
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = nat
nat_router = 10.11.11.122 bond0.499:1
debug_level = NONE
virtual gw_radius {
    active = 1
    address = 10.11.11.230 bond0.498:1
    port = 1812
    send_program = "radpwtst -s %h -noacct -noauth -auth_port 1812 -secret mysecret -status"
    expect = "OK"
    use_regex = "0"
    #load_monitor = uptime
    scheduler = wrr
    protocol = udp
    timeout = 6
    reentry = 15
    quiesce_server = 0
    server GW_1 {
        address = 10.11.11.19
        active = 1
        weight = 1
    }
    server GW_2 {
        address = 10.11.11.20
        active = 1
        weight = 1
    }
    server GW_3 {
        address = 10.11.11.21
        active = 0
        weight = 1
    }
}

Thank you for your help

Regards
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From herta.vandeneynde at gmail.com Mon May 21 14:57:39 2007
From: herta.vandeneynde at gmail.com (Herta Van den Eynde)
Date: Mon, 21 May 2007 16:57:39 +0200
Subject: load balancing problem
In-Reply-To: <3aa9a44e0705210625n5f28550ao46072dba72dd64bf@mail.gmail.com>
References: <3aa9a44e0705210625n5f28550ao46072dba72dd64bf@mail.gmail.com>
Message-ID: 

On 21/05/07, Mohamed HADROUJ (Gmail) wrote:
> Hi All,
> I have some load balancing troubles using lvs/piranha, here is the
> description :
> when i use the ipvsadm command to view the statistics here what i get :
> [...]
> Thank you for your help
> Regards

Might be a persistence issue. Did you try connecting from 2 different
nodes/PCs?
Kind regards,

Herta

From hadrouj at gmail.com Mon May 21 17:31:34 2007
From: hadrouj at gmail.com (Mohamed HADROUJ (Gmail))
Date: Mon, 21 May 2007 17:31:34 +0000
Subject: load balancing problem
In-Reply-To: 
References: <3aa9a44e0705210625n5f28550ao46072dba72dd64bf@mail.gmail.com>
Message-ID: <3aa9a44e0705211031u71784ec8yfd2b1d9548df4538@mail.gmail.com>

I think also it's a persistence issue, because when i tried ipvsadm after
few minutes, the InActConn is initialized to 0. then the load balancer sends
the incoming request to the other server. here is an example :

- I try to send several requests : they all get answers but from the same
server

  -> RemoteAddress:Port         Forward Weight ActiveConn InActConn
UDP  10.11.11.230:radius wrr
  -> 10.11.11.20:radius         Masq    1      0          0
  -> 10.11.11.19:radius         Masq    1      0          1

after few minutes :

  -> RemoteAddress:Port         Forward Weight ActiveConn InActConn
UDP  10.11.11.230:radius wrr
  -> 10.11.11.20:radius         Masq    1      0          0
  -> 10.11.11.19:radius         Masq    1      0          0

and then all the requests coming to the load balancer are routed to the
other server :

  -> RemoteAddress:Port         Forward Weight ActiveConn InActConn
UDP  10.11.11.230:radius wrr
  -> 10.11.11.20:radius         Masq    1      0          1
  -> 10.11.11.19:radius         Masq    1      0          0

How to fix this behaviour ?

Regards,

On 5/21/07, Herta Van den Eynde wrote:
>
> On 21/05/07, Mohamed HADROUJ (Gmail) wrote:
> > Hi All,
> > I have some load balancing troubles using lvs/piranha, here is the
> > description :
> > when i use the ipvsadm command to view the statistics here what i get :
> > IP Virtual Server version 1.0.8 (size=65536)
> > Prot LocalAddress:Port Scheduler Flags
> >   -> RemoteAddress:Port         Forward Weight ActiveConn InActConn
> > UDP  10.11.11.230:radius wrr
> >   -> 10.11.11.20:radius         Masq    1      0          0
> >   -> 10.11.11.19:radius         Masq    1      0          1
> >
> > the load balancer keeps forwarding the request to only one server. in
> > addition of that the counter of the connection established is always equal
> > to 1.
> > i've checked also in /var/log/messages, both servers are seen as
> > available.
> > here is my config file :
> > [...]
> >
> > Thank you for your help
> > Regards
>
> Might be a persistence issue. Did you try connecting from 2 different
> nodes/PCs?
>
> Kind regards,
>
> Herta
>
> _______________________________________________
> Piranha-list mailing list
> Piranha-list at redhat.com
> https://www.redhat.com/mailman/listinfo/piranha-list
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hadrouj at gmail.com Mon May 21 21:44:22 2007
From: hadrouj at gmail.com (Mohamed HADROUJ (Gmail))
Date: Mon, 21 May 2007 21:44:22 +0000
Subject: load balancing problem
In-Reply-To: <3aa9a44e0705211031u71784ec8yfd2b1d9548df4538@mail.gmail.com>
References: <3aa9a44e0705210625n5f28550ao46072dba72dd64bf@mail.gmail.com> <3aa9a44e0705211031u71784ec8yfd2b1d9548df4538@mail.gmail.com>
Message-ID: <3aa9a44e0705211444u7b49e5a9jf89830a226761d7c@mail.gmail.com>

Hi,

when entering the following command : ipvsadm -L -c -n
I get this result :

IPVS connection entries
pro expire state  source              virtual            destination
UDP 04:57  UDP    10.11.11.225:32875  10.11.11.230:1812  10.11.11.19:1812

Every request sets the counter to 5:00

Thank you for your help

On 5/21/07, Mohamed HADROUJ (Gmail) wrote:
>
> I think also it's a persistence issue, because when i tried ipvsadm after
> few minutes, the InActConn is initialized to 0. then the load balancer sends
> the incoming request to the other server. here is an example :
> [...]
> How to fix this behaviour ?
> Regards,
>
> On 5/21/07, Herta Van den Eynde wrote:
> >
> > On 21/05/07, Mohamed HADROUJ (Gmail) wrote:
> > > Hi All,
> > > I have some load balancing troubles using lvs/piranha, here is the
> > > description :
> > > [...]
> >
> > Might be a persistence issue. Did you try connecting from 2 different
> > nodes/PCs?
> > Kind regards,
> >
> > Herta
> >
> > _______________________________________________
> > Piranha-list mailing list
> > Piranha-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/piranha-list
> >
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From herta.vandeneynde at gmail.com Mon May 21 22:28:50 2007
From: herta.vandeneynde at gmail.com (Herta Van den Eynde)
Date: Tue, 22 May 2007 00:28:50 +0200
Subject: load balancing problem
In-Reply-To: <3aa9a44e0705211444u7b49e5a9jf89830a226761d7c@mail.gmail.com>
References: <3aa9a44e0705210625n5f28550ao46072dba72dd64bf@mail.gmail.com> <3aa9a44e0705211031u71784ec8yfd2b1d9548df4538@mail.gmail.com> <3aa9a44e0705211444u7b49e5a9jf89830a226761d7c@mail.gmail.com>
Message-ID: 

On 21/05/07, Mohamed HADROUJ (Gmail) wrote:
> Hi,
> when entering the following command : ipvsadm -L -c -n
> I get this result :
>
> IPVS connection entries
> pro expire state  source              virtual            destination
> UDP 04:57  UDP    10.11.11.225:32875  10.11.11.230:1812  10.11.11.19:1812
>
> Every request sets the counter to 5:00
> Thank you for your help
>
> On 5/21/07, Mohamed HADROUJ (Gmail) wrote:
> > I think also it's a persistence issue, because when i tried ipvsadm after
> > few minutes, the InActConn is initialized to 0. then the load balancer sends
> > the incoming request to the other server.
> > here is an example :
> > [...]
> > How to fix this behaviour ?
> >
> > Regards,
> >
> > On 5/21/07, Herta Van den Eynde wrote:
> > > On 21/05/07, Mohamed HADROUJ (Gmail) wrote:
> > > > Hi All,
> > > > I have some load balancing troubles using lvs/piranha, here is the
> > > > description :
> > > > [...]
> > > > i've checked also in /var/log/messages, both servers are seen as
> > > > available.
> > > > here is my config file :
> > > > [...]
> > > >
> > > > Thank you for your help
> > > > Regards
> > >
> > > Might be a persistence issue. Did you try connecting from 2 different
> > > nodes/PCs?
> > >
> > > Kind regards,
> > >
> > > Herta

Mohamed, it looks like you posted an extract from the lvs.cf instead
of the full file. Would you please look whether you have an entry
"persistent = some value" in the "virtual gw_radius" part? If not,
try adding a "persistent = 0".

Note that depending on what's behind the load balancer, persistence
may actually be what you need (in which case you could add e.g.
"persistent = 18000", where the value equals 18000 seconds, or 5
hours).
Also, in real life, when using persistence, ipvsadm may not show
perfect balancing (e.g. people behind proxy servers will all connect
from the same address, and therefore get connected to the same server,
or some people may stay connected far longer than others).
As long as the connections are reasonably spread out, you should be OK.

Kind regards,

Herta

From hadrouj at gmail.com Tue May 22 00:17:14 2007
From: hadrouj at gmail.com (Mohamed HADROUJ (Gmail))
Date: Tue, 22 May 2007 00:17:14 +0000
Subject: load balancing problem
In-Reply-To: 
References: <3aa9a44e0705210625n5f28550ao46072dba72dd64bf@mail.gmail.com> <3aa9a44e0705211031u71784ec8yfd2b1d9548df4538@mail.gmail.com> <3aa9a44e0705211444u7b49e5a9jf89830a226761d7c@mail.gmail.com>
Message-ID: <3aa9a44e0705211717m1615a37dvc8745fe9ab4492e0@mail.gmail.com>

Hi Herta and thank you for your answers,

I've posted the full lvs.cf file... I've added persistent = 0 and
restarted pulse but it still doesn't work as I wish, meaning "Round
Robin" on the real servers.

Regards,

On 5/21/07, Herta Van den Eynde wrote:
>
> On 21/05/07, Mohamed HADROUJ (Gmail) wrote:
> > Hi,
> > when entering the following command : ipvsadm -L -c -n
> > I get this result :
> > [...]
> >
> > On 5/21/07, Mohamed HADROUJ (Gmail) wrote:
> > > I think also it's a persistence issue, because when i tried ipvsadm after
> > > few minutes, the InActConn is initialized to 0. then the load balancer sends
> > > the incoming request to the other server.
> > > here is an example :
> > > [...]
> > > How to fix this behaviour ?
> > >
> > > Regards,
> > >
> > > On 5/21/07, Herta Van den Eynde wrote:
> > > > On 21/05/07, Mohamed HADROUJ (Gmail) wrote:
> > > > > Hi All,
> > > > > I have some load balancing troubles using lvs/piranha, here is the
> > > > > description :
> > > > > [...]
> > > > > i've checked also in /var/log/messages, both servers are seen as
> > > > > available.
> > > > > here is my config file :
> > > > > [...]
> > > > >
> > > > > Thank you for your help
> > > > > Regards
> > > >
> > > > Might be a persistence issue. Did you try connecting from 2 different
> > > > nodes/PCs?
> > > >
> > > > Kind regards,
> > > >
> > > > Herta
>
> Mohamed, it looks like you posted an extract from the lvs.cf instead
> of the full file. Would you please look whether you have an entry
> "persistent = some value" in the "virtual gw_radius" part? If not,
> try adding a "persistent = 0".
>
> [...]
> As long as the connections are reasonably spread out, you should be OK.
>
> Kind regards,
>
> Herta
>
> _______________________________________________
> Piranha-list mailing list
> Piranha-list at redhat.com
> https://www.redhat.com/mailman/listinfo/piranha-list
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hadrouj at gmail.com Wed May 23 12:06:18 2007
From: hadrouj at gmail.com (Mohamed HADROUJ (Gmail))
Date: Wed, 23 May 2007 12:06:18 +0000
Subject: load balancing problem
In-Reply-To: 
References: <3aa9a44e0705210625n5f28550ao46072dba72dd64bf@mail.gmail.com> <3aa9a44e0705211031u71784ec8yfd2b1d9548df4538@mail.gmail.com> <3aa9a44e0705211444u7b49e5a9jf89830a226761d7c@mail.gmail.com>
Message-ID: <3aa9a44e0705230506s9d3aa00lf81407e23239a081@mail.gmail.com>

Hi,

i've found a workaround to fix this problem :

ipvsadm --set 0 0 0

this will set the expiry time to 0s for UDP packets, but ... when a real
server is offline the load balancer keeps sending incoming requests to it.

Thanks for your help.

Regards

On 5/21/07, Herta Van den Eynde wrote:
>
> On 21/05/07, Mohamed HADROUJ (Gmail) wrote:
> > Hi,
> > when entering the following command : ipvsadm -L -c -n
> > I get this result :
> > [...]
> >
> > On 5/21/07, Mohamed HADROUJ (Gmail) wrote:
> > > I think also it's a persistence issue, because when i tried ipvsadm after
> > > few minutes, the InActConn is initialized to 0. then the load balancer sends
> > > the incoming request to the other server.
here is an example : > > > -I try to send several request : they all get answers but from the > same > > server > > > -> RemoteAddress:Port Forward Weight ActiveConn InActConn > > > UDP 10.11.11.230:radius wrr > > > -> 10.11.11.20:radius Masq 1 0 0 > > > -> 10.11.11.19:radius Masq 1 0 1 > > > > > > after few minutes : > > > -> RemoteAddress:Port Forward Weight ActiveConn InActConn > > > UDP 10.11.11.230:radius wrr > > > -> 10.11.11.20:radius Masq 1 0 0 > > > -> 10.11.11.19:radius Masq 1 0 0 > > > > > > and then all the requests coming to the load balancer are routed to > the > > other server : > > > -> RemoteAddress:Port Forward Weight ActiveConn InActConn > > > UDP 10.11.11.230:radius wrr > > > -> 10.11.11.20:radius Masq 1 0 1 > > > -> 10.11.11.19:radius Masq 1 0 0 > > > > > > How to fix this behaviour ? > > > > > > Regards, > > > > > > > > > > > > > > > On 5/21/07, Herta Van den Eynde wrote: > > > > On 21/05/07, Mohamed HADROUJ (Gmail) wrote: > > > > > Hi All, > > > > > I have some load balancing troubles using lvs/piranha, here is the > > > > > description : > > > > > when i use the ipvsadm command to view the statistics here what i > get > > : > > > > > IP Virtual Server version 1.0.8 (size=65536) > > > > > Prot LocalAddress:Port Scheduler Flags > > > > > -> RemoteAddress:Port Forward Weight ActiveConn > InActConn > > > > > UDP 10.11.11.230:radius wrr > > > > > -> 10.11.11.20:radius Masq 1 0 0 > > > > > -> 10.11.11.19:radius Masq 1 0 1 > > > > > > > > > > the load balancer keeps forwarding the request to only one server. > in > > > > > addition of that the counter of the connection established is > always > > equal > > > > > to 1. > > > > > i've checked also in /var/log/messages, both servers are seen as > > available. 
> > > > > [lvs.cf config snipped]
> > > > >
> > > > > Thank you for your help
> > > > > Regards
> > > >
> > > > Might be a persistence issue. Did you try connecting from two
> > > > different nodes/PCs?
> > > >
> > > > Kind regards,
> > > >
> > > > Herta
>
> [...]
>
> _______________________________________________
> Piranha-list mailing list
> Piranha-list at redhat.com
> https://www.redhat.com/mailman/listinfo/piranha-list

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hadrouj at gmail.com  Wed May 23 13:59:00 2007
From: hadrouj at gmail.com (Mohamed HADROUJ (Gmail))
Date: Wed, 23 May 2007 13:59:00 +0000
Subject: load balancing problem
In-Reply-To: <3aa9a44e0705230506s9d3aa00lf81407e23239a081@mail.gmail.com>
References: <3aa9a44e0705210625n5f28550ao46072dba72dd64bf@mail.gmail.com>
	<3aa9a44e0705211031u71784ec8yfd2b1d9548df4538@mail.gmail.com>
	<3aa9a44e0705211444u7b49e5a9jf89830a226761d7c@mail.gmail.com>
	<3aa9a44e0705230506s9d3aa00lf81407e23239a081@mail.gmail.com>
Message-ID: <3aa9a44e0705230659r1f221885o3a62628d7a89e3ae@mail.gmail.com>

Hi :)
When tailing /var/log/messages I can see this:

May 23 13:49:46 vasppslb1 nanny[12312]: The following exited abnormally:
May 23 13:49:46 vasppslb1 nanny[12312]: Ran the external sending program
to (10.11.11.20) but didn't get anything back

It seems that nanny does not update the ipvsadm routing table even when
it gets no answer from the real server:

IP Virtual Server version 1.0.8 (size=65536)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port      Forward Weight ActiveConn InActConn
UDP  10.76.11.230:radius wrr
  -> 10.11.11.20:radius      Masq    1      0          0
  -> 10.11.11.19:radius      Masq    1      0          0

Nanny's issue??

On 5/23/07, Mohamed HADROUJ (Gmail) wrote:
>
> Hi,
> I've found a workaround for this problem:
>
>     ipvsadm --set 0 0 0
>
> This sets the expiry time to 0s for UDP packets, but when a real server
> is offline the load balancer still keeps sending incoming requests to it.
> Thanks for your help.
>
> Regards
>
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dbrieck at gmail.com  Wed May 30 15:04:07 2007
From: dbrieck at gmail.com (David Brieck Jr.)
Date: Wed, 30 May 2007 11:04:07 -0400
Subject: load balancing problem
In-Reply-To: <3aa9a44e0705230659r1f221885o3a62628d7a89e3ae@mail.gmail.com>
References: <3aa9a44e0705210625n5f28550ao46072dba72dd64bf@mail.gmail.com>
	<3aa9a44e0705211031u71784ec8yfd2b1d9548df4538@mail.gmail.com>
	<3aa9a44e0705211444u7b49e5a9jf89830a226761d7c@mail.gmail.com>
	<3aa9a44e0705230506s9d3aa00lf81407e23239a081@mail.gmail.com>
	<3aa9a44e0705230659r1f221885o3a62628d7a89e3ae@mail.gmail.com>
Message-ID: <8c1094290705300804l63c0757cy945b446a26467630@mail.gmail.com>

On 5/23/07, Mohamed HADROUJ (Gmail) wrote:
> Hi :)
> When tailing /var/log/messages I can see this:
>
> May 23 13:49:46 vasppslb1 nanny[12312]: The following exited abnormally:
> May 23 13:49:46 vasppslb1 nanny[12312]: Ran the external sending program
> to (10.11.11.20) but didn't get anything back
>
> It seems that nanny does not update the ipvsadm routing table even when
> it gets no answer from the real server:
>
> IP Virtual Server version 1.0.8 (size=65536)
> Prot LocalAddress:Port Scheduler Flags
>   -> RemoteAddress:Port      Forward Weight ActiveConn InActConn
> UDP  10.76.11.230:radius wrr
>   -> 10.11.11.20:radius      Masq    1      0          0
>   -> 10.11.11.19:radius      Masq    1      0          0
>
> Nanny's issue??
I'm seeing the same type of problem. A real server goes offline and I get
something like "shutting down 10.1.1.183:80 due to connection failure" in
the logs, but it never gets removed. Anyone else see something like this?
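The persistence fix discussed in this thread is a one-line change to the
virtual-server block of Piranha's lvs.cf (typically
/etc/sysconfig/ha/lvs.cf). A minimal sketch based on the config quoted
above, using Herta's illustrative value; the right number (or 0 to
disable persistence) depends on what sits behind the load balancer:

```
virtual gw_radius {
    active = 1
    address = 10.11.11.230 bond0.498:1
    port = 1812
    protocol = udp
    scheduler = wrr
    # stickiness window in seconds: 18000 = 5 hours;
    # "persistent = 0" turns persistence off
    persistent = 18000
    # ... server blocks as in the original config ...
}
```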
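nanny's external health check, configured above via send_program/expect,
has the same shape as the stunnel HTTPS probe earlier in this archive:
run a command against a real server, then grep its output for an expected
string. A standalone sketch; the probe command and expected string below
are stub placeholders, and for the setup in this thread the command would
be the radpwtst invocation from lvs.cf with %h replaced by the real
server's address:

```shell
#!/bin/sh
# Sketch of a nanny-style send/expect probe.
# Placeholders only: substitute the real probe, e.g.
#   radpwtst -s 10.11.11.19 -noacct -noauth -auth_port 1812 -secret mysecret -status
probe() {
    # $1 = probe command, $2 = string expected in its output
    if sh -c "$1" 2>/dev/null | grep -q "$2"; then
        echo OK      # the line nanny's "expect" setting matches against
    else
        echo FAIL
    fi
}

# Stub demonstration with fake probe commands:
probe 'echo OK' 'OK'      # prints OK
probe 'true'    'OK'      # prints FAIL (no output to match)
```

Running this from the director by hand is a quick way to confirm whether
the external check itself answers, independently of what nanny then does
with the result.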