[Linux-cluster] Help BADLY needed: Fencing and LVS

isplist at logicore.net isplist at logicore.net
Mon Dec 11 05:38:52 UTC 2006


I have a firewall at the top of my network which also handles NAT. 
The public IP for the web sites will be 67.98.10.150. The firewall will 
forward that to 192.168.1.53, which is the LVS server.
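
For reference, the forwarding on the firewall looks roughly like this (a sketch of my setup, not the exact rules; eth0 as the public interface is an assumption):

```shell
# On the firewall (sketch): DNAT the public IP to the LVS director.
iptables -t nat -A PREROUTING -d 67.98.10.150 -j DNAT --to-destination 192.168.1.53
# Masquerade outbound traffic from the private network on the public NIC.
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE
```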

The LVS has the VIP of 192.168.1.150 and responds to pings at that address 
when pulse is running, as it should. I think the only part I'm missing, and just 
can't figure out (too confused from all the trials), is on the web server side.

Here is the last setup I had on the web server, which has two NICs. 

eth0 should have its normal IP of 192.168.1.92, but then I need to allow 
it to respond to the VIP address as well. I think part of this is left over 
from something else that didn't work, since I see eth0 being used. 

# Drop incoming ARP requests for the VIP so only the director answers them:
arptables -A IN -d 192.168.1.150 -j DROP
# Rewrite the source IP of outgoing ARP from the VIP to the real IP.
# (Note: the OUT rule matches on the source address, -s, not -d.)
arptables -A OUT -s 192.168.1.150 -j mangle --mangle-ip-s 192.168.1.92
service arptables_jf save
# Bring the VIP up as an alias on eth0:
ifconfig eth0:1 192.168.1.150 netmask 255.255.255.0 broadcast 192.168.1.255 up
/etc/init.d/iptables start
/etc/init.d/arptables_jf start
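
For what it's worth, here's roughly how I've been sanity-checking the result on the web server (just the checks I'd run, with the addresses from above):

```shell
# Confirm the VIP alias is up on the real server:
ifconfig eth0:1
# List the current arptables rules (should show the IN DROP and OUT mangle rules):
arptables -L -n
# The real server should still be able to reach the director's private IP:
ping -c 3 192.168.1.53
```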

Anyhow, I think I'm close, but I just don't understand the ARP, iptables, and 
gateway problems, etc. I need to see one in action and then I'll understand it better.
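
In case it helps anyone spot the problem, here's roughly what I'd check and set for the gateway on the web server (a sketch; using the director's backend IP as the gateway is my reading of the reply below):

```shell
# On the web server: show the current default route.
ip route show default
# Point the default gateway at the LVS director's backend IP (per the reply below):
ip route replace default via 192.168.1.53 dev eth0
```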

Mike






On Sun, 10 Dec 2006 23:30:53 -0600 (CST), Barry Brimer wrote:
> When you set up the LVS you are configuring the public interface as well
> 
> as the internal/backend IP address.  This backend IP address needs to be
> set as the default gateway of your web server.  Let's get the system
> working first.  We can configure interface bonding later, once the system
> is working correctly.
> 
> HTH,
> Barry
> 
> On Sun, 10 Dec 2006, isplist at logicore.net wrote:
> 
>>> The easiest configuration is to have your load balancers sit in front of
>>> the web servers, and also do firewalling/NAT, so basically your load
>>> balancers have a public IP on one NIC, and a private IP on the other.
>>> Your web servers only need a single IP, on the private network.
>>> 
>> Yup, that's how it's set up. The LVS has a private IP for
>> configuration/access, etc., but it responds to a public IP. I was able to
>> ping the public IP when I would start the services, so that was fine. The
>> LVS could see the web server(s) as well.
>> 
>> The problem was really on the web server side, since I don't have much
>> hands-on experience with dual NICs. I'm sure my problem was as simple as a
>> gateway or some other issue, but I just can't find it. Another weird thing
>> was that when the web server was set up to respond to LVS, I could reach
>> it, but it could no longer reach the cluster and so would fail.
>> 
>> As I say, I'm sure it's something simple that I'm missing, but it's had me
>> floored for weeks now.
>> 
>> Mike
>> 
>> 
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>> 
>> 
>> 


More information about the Linux-cluster mailing list