From kgraham at cmtek.com  Fri Nov  4 17:33:20 2005
From: kgraham at cmtek.com (Kenneth Graham)
Date: Fri, 04 Nov 2005 11:33:20 -0600
Subject: FOS and NFS
Message-ID: <436B9B60.8010601@cmtek.com>

Hi guys,

I have a big problem setting up FOS for two NFS servers. The problem is
that I don't know how to check whether the NFS service is running
properly on the master. I know that Piranha does not support NFS
checking, but I think there must be a way around that. Any ideas?

I have two NFS servers and one HTTP client that mounts files from the
NFS servers. My lvs.cf is:

    service = fos
    backup_active = 1
    heartbeat = 1
    heartbeat_port = 1050
    keepalive = 5
    deadtime = 30
    rsh_command = rsh
    network = nat
    nat_nmask = 255.255.255.0
    reservation_conflict_action = preempt
    debug_level = 3
    failover nfs {
         active = 1
         address = 192.168.1.118 eth1:1
         port = 2049                                 *** normally not supported by Piranha
         timeout = 10
         send_program = "/home/nfs/./check_nfs.sh"   *** see below
         expect = "OK"
         start_cmd = "/etc/init.d/nfs restart"
         stop_cmd = "/etc/init.d/nfs stop"
    }

For the send_program I tried:

    lynx 192.168.1.30:80

This checks a file over HTTP on a client machine that has the file
mounted from the NFS server; if I don't get the file back, it means
that NFS is not responding or is down. The problem is that the request
hangs when NFS is down, and Piranha can NOT kill the request using the
timeout; it just hangs forever without any response back (I tried
changing the mount options on the client: hard,intr, soft, timeo,
etc., but that did not work).

Any idea how to do a good check of the status of NFS?

Thanks in advance

From tbushart at nycap.rr.com  Sat Nov 12 16:22:53 2005
From: tbushart at nycap.rr.com (Timothy Bushart)
Date: Sat, 12 Nov 2005 11:22:53 -0500
Subject: Squid Monitoring Script
Message-ID: 

Hi everyone,

Does the default monitoring script in Piranha (send "GET / HTTP/1.0\r\n\r\n",
expect "HTTP") monitor Squid if Piranha is forwarding TCP 3128 to two Squid
real servers, or is there something else that has to be configured?

I have my Piranha load balancers set up correctly; they fail over between
backup and primary. But if one of my two Squid real servers dies (the squid
daemon dies), I don't think Piranha is sensing that, and it still attempts
to send connections to the dead server. I notice this because a web browser
pointed at the virtual IP just hangs and hangs until "page cannot be
displayed", and keeps doing so until I reboot the real server and bring
Squid back up. When I run ipvsadm on the primary director it shows
"Route 0" next to one of my real servers; that's how I know it's dead.

thanks
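[On the Squid question: the default send/expect pair should get an HTTP
error reply back from a live Squid listening on 3128, so it ought to notice
when the daemon is gone. If you want a check that actually exercises Squid
as a proxy, nanny can run an external script via send_program instead. This
is only a sketch; the script path, the test URL and the %h substitution are
assumptions to illustrate the approach, not a tested configuration.]

    #!/bin/sh
    # check_squid.sh -- hypothetical Squid health probe for Piranha.
    # Configured in lvs.cf roughly as:
    #   send_program = "/usr/local/bin/check_squid.sh %h"
    #   expect = "OK"
    # where %h is replaced with the real server's address.

    HOST="$1"
    PORT=3128

    # Fetch a known-good URL through the proxy; -m bounds the whole request
    # so a hung Squid cannot block nanny indefinitely.
    if curl -s -m 5 -x "http://${HOST}:${PORT}" -o /dev/null http://www.example.com/
    then
        echo "OK"
    else
        echo "FAIL"
    fi

[If the daemon is dead the script prints FAIL, the expected "OK" never
arrives, and the server's weight should drop to 0 so new connections stop
going to it.]

[Going back to the NFS failover question in the first message: rather than
fetching a file through a client that can hang on a dead mount, the check
script can probe the NFS server's RPC service directly and enforce its own
deadline, so it always returns inside Piranha's timeout window. A minimal
sketch, assuming rpcinfo is available on the director; the server address
and the five-second limit are illustrative values.]

    #!/bin/sh
    # check_nfs.sh -- hypothetical NFS health probe for the FOS service.
    # Prints "OK" only if the nfs RPC program answers a null call; a
    # watchdog kills the script if the probe stalls, so it can never
    # hang forever the way the lynx request does.

    NFS_SERVER=192.168.1.118      # illustrative address of the NFS node

    # Watchdog: if we are still running after five seconds, kill this
    # script; nanny then sees no "OK" and treats the check as failed.
    ( sleep 5; kill $$ >/dev/null 2>&1 ) &
    WATCHDOG=$!

    # Null RPC call to the nfs program over TCP.
    if rpcinfo -t "$NFS_SERVER" nfs >/dev/null 2>&1; then
        RESULT="OK"
    else
        RESULT="FAIL"
    fi

    kill "$WATCHDOG" >/dev/null 2>&1
    echo "$RESULT"

[With expect = "OK" in the failover block, a stopped or hung nfsd makes the
script print FAIL or nothing at all, and the failover can proceed.]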
From mnapolis at redhat.com  Wed Nov 30 05:35:08 2005
From: mnapolis at redhat.com (Isauro Michael Napolis)
Date: Wed, 30 Nov 2005 15:35:08 +1000
Subject: LVS - mod_ssl error
Message-ID: <1133328907.1476.5.camel@localhost.localdomain>

Hi all,

I need some assistance; any help is greatly appreciated. Below is the
current situation:

LVS uses firewall marks to route all https traffic to the "Virtual
Webservers" that sit behind the LVS computers. There are recurring events
in our apache error log that may or may not be related to this problem:

[Sun Nov  6 05:20:38 2005] [debug] Apache.c(364): (32)Broken pipe: mod_perl: rwrite returned -1 (fd=5, B_EOUT=8)\n
[Sun Nov  6 05:31:48 2005] [error] mod_ssl: SSL handshake timed out (client 210.23.119.5, server callcenter.ttgo.com:443)
[Sun Nov  6 05:36:55 2005] [error] mod_ssl: SSL handshake timed out (client 210.23.119.5, server callcenter.ttgo.com:443)
[Sun Nov  6 06:18:01 2005] [info] [client 210.23.119.5] (32)Broken pipe: client stopped connection before rwrite completed

The actual problem: when going through the website and working with the
form, occasionally the page returned after a submit is "page not
displayed" or just a blank page.

I have adjusted different values for sysctl parameters such as
tcp_tw_reuse, tcp_tw_recycle and tcp_timestamps, but unfortunately the
problem still persists.

Michael
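[One thing worth ruling out with firewall-mark HTTPS virtual servers is
missing persistence: if consecutive connections from the same client land
on different real servers in the middle of an SSL session or a form submit,
you can see exactly this mix of handshake timeouts, broken pipes and blank
pages. The usual pattern is sketched below with placeholder VIP, mark value,
interface and timeout rather than this site's real values; in Piranha it
corresponds to the firewall mark and persistence fields of the virtual
server in lvs.cf.]

    # Mark HTTPS traffic for the virtual IP; VIP 192.168.0.100, eth0 and
    # mark value 80 are placeholders.
    iptables -t mangle -A PREROUTING -i eth0 -p tcp -d 192.168.0.100 --dport 443 \
             -j MARK --set-mark 80

    # Build the virtual service on the firewall mark with client
    # persistence, so one client's connections stay on one real server
    # for 30 minutes.
    ipvsadm -A -f 80 -s wlc -p 1800
    ipvsadm -a -f 80 -r 192.168.0.11 -g -w 1
    ipvsadm -a -f 80 -r 192.168.0.12 -g -w 1

[If persistence is already in place, the broken-pipe entries may simply be
clients closing early, and the handshake timeouts are worth correlating
with which real server handled each request.]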