[Fedora-xen] Fedora Core 8 + Xenbr0 + network bridging?

Dustin Henning Dustin.Henning at prd-inc.com
Mon Dec 3 22:43:51 UTC 2007


	Unfortunately, I took an interest in this discussion and decided to
mess around with it (primarily to see whether there really are noticeable
performance gains from this manual method over xen's built-in bridge
script), even though I don't currently have a test box.  I am running F7,
and before trying this I had xenbr0 working fine (perhaps from modifying
xend-config.sxp, I don't remember exactly) alongside virbr0 (which I didn't
want but couldn't get rid of).  I figured undoing my changes would surely
get me back to where I started, so I didn't bother with backups (though,
admittedly, restoring backups equates to undoing changes, so I don't know
what good additional copies of the files would have done).  My experience
went something like this:

	1)  I created the ifcfg-peth0 file as described in "xen-like
bridging" here (see the sketch after this list):
http://watzmann.net/blog/index.php/2007/04/27/networking_with_kvm_and_libvirt
	2)  I created an ifcfg-eth0 and an ifcfg-br0 in hopes of keeping an
eth0 separate from the bridge, the way xen does.  When that didn't work
(after simply restarting the network), I got rid of the ifcfg-eth0 and used
only the ifcfg-br0 (BRIDGE=br0 was set in my ifcfg-peth0 all along).  Note
that by this point I had also changed network-script to /bin/false and
restarted xend.
	3)  I added the iptables command given on the page linked above.
	4)  I went ahead and enabled IP forwarding via sysctl.conf (see the
sketch after this list), thinking that xen was probably setting and
unsetting this manually with its script.
	5)  I rebooted and still had xenbr0 and virbr0, along with eth0 and
br0 (both bridges) and peth0 (the real NIC).
	6)  I started one of my HVM domains using bridge=br0, and it worked,
as peth0 was bridged on br0 (the other three bridges weren't being used at
all).
	7)  I ran "virsh net-autostart --disable default", thinking it might
get rid of xenbr0 and virbr0.  It didn't; they come back on subsequent
reboots.
	8)  Since xenbr0 was still there anyway, I decided to change back so
there would be fewer unnecessary bridges (shown as interfaces in ifconfig).
I thought that would be simple enough.
	9)  I got rid of ifcfg-br0 and ifcfg-peth0 and rebuilt my original
ifcfg-eth0 (though probably with the lines in a different order and without
the presumably unnecessary IPv6 lines).
	10)  I changed network-script back to network-bridge (see the sketch
after this list) and rebooted.
	11)  The system came up with all four bridges still present, and
brctl show now showed peth0 to be a member of bridge eth0.  If I stop the
network, bring down whatever interfaces are left up (ifconfig br0 down,
etc.), and delete all the bridges ([brctl delif eth0 peth0,] brctl delbr
br0, etc.), then when I start the network back up, eth0 is the real eth0
(though it takes some time to come up).  But as soon as I start xend again,
all of that mess comes back.
	12)  If I reboot into the standard kernel, eth0 is eth0 and all is
well, but upon rebooting back into the xen kernel I have the same mess (4
bridges, with peth0 on the eth0 bridge).  I did get my domUs working by
setting bridge=eth0 in their config files; when that was set to xenbr0, I
couldn't contact them from dom0 like I could before this fiasco, and I
don't think they could get out to the network either (though I didn't test
that exhaustively).
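
	For reference, here is roughly what the configuration from steps
1-4, 6, and 10 looked like.  These lines are reconstructed from memory
rather than copied from my box, so treat the exact values as illustrative:

  # /etc/sysconfig/network-scripts/ifcfg-br0 (step 2, the bridge itself)
  DEVICE=br0
  TYPE=Bridge
  BOOTPROTO=dhcp
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-peth0 (step 1, the real NIC,
  # enslaved to the bridge)
  DEVICE=peth0
  BRIDGE=br0
  ONBOOT=yes

  # step 3, the forwarding rule from the blog post, as I recall it:
  iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT

  # step 4, added to /etc/sysctl.conf:
  net.ipv4.ip_forward = 1

  # steps 2 and 10, the line I toggled in /etc/xen/xend-config.sxp,
  # first to (network-script /bin/false) and then back to:
  (network-script network-bridge)

  # step 6, the relevant line in the HVM domU config file:
  vif = [ 'type=ioemu, bridge=br0' ]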

	While it is working, and that is fine, I would like to get rid of
virbr0 (and at this point also xenbr0 and br0, I guess), but I am not sure
how to go about that.  brctl certainly isn't deleting them permanently, and
virsh doesn't seem to have anything to do with them.  I did stumble upon a
GUI bridge controller at one point while messing with all of this, but I
haven't found it again and don't know whether it could have caused this
stuff to stick.  I may have caused some of this confusion myself in
system-config-network: I thought a second device listed there was a NIC I
had removed, so I deleted it, and xen may have been using it (as opposed to
the NIC sitting in a drawer, which may or may not ever have been installed
in this particular system).  The network-scripts folder in sysconfig looks
right, though, according to my inspection and the clean boot into non-xen
F7.  That said, can anyone tell me where settings might be hiding that
would somehow be creating my br0 along with the other (in my current
situation) unnecessary bridges?  Thanks,

	Dustin
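
	P.S.  In case it helps anyone answer: as far as I can tell, virbr0
is defined by libvirt's "default" network (an XML file under /etc/libvirt/,
plus an autostart symlink, if I understand libvirt correctly), while xenbr0
and the eth0 bridge are created by xend's network-bridge script when xend
starts.  So presumably something like the following would kill virbr0 for
good, though I have not tested it on my box yet:

  # stop the running default network, then remove its definition so
  # libvirt stops recreating virbr0 at boot
  virsh net-destroy default
  virsh net-undefine default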


-----Original Message-----
From: fedora-xen-bounces at redhat.com [mailto:fedora-xen-bounces at redhat.com]
On Behalf Of John Summerfield
Sent: Saturday, December 01, 2007 19:34
To: fedora-xen at redhat.com
Subject: Re: [Fedora-xen] Fedora Core 8 + Xenbr0 + network bridging?

Christian Lahti wrote:
> Hi Mark:
>  
> Thank you very much for your response, I did indeed read the original
> poster as Dale by mistake :)  So what you are saying makes perfect sense
> to me and sounds like exactly what we are after; I will have 3 VLANs to
> bridge myself ultimately.  My next question is the relative merits of
> RHEL 5.1 as compared to Fedora 8.  Obviously I would prefer the stable
> enterprise release rather than bleeding-edge Fedora, but has fully
> virtualized Windows performance been fixed in this release?  At any rate
> I am looking forward to getting this up and running tomorrow!
>  
> /Christian
>  
> 
> ________________________________
> 
> From: Mark Nielsen [mailto:mnielsen at redhat.com]
> Sent: Sat 12/1/2007 3:19 PM
> To: Christian Lahti
> Subject: Re: [Fedora-xen] Fedora Core 8 + Xenbr0 + network bridging?
> 
> 
> 
> hmm, did you mean "Hi Mark" ??
> 
> I have 8 Dell 2950s running RHEL 5.1 (new libvirt with that funky NAT
> they added). I have 4 NICs in each; 2 copper, 2 fiber. I bond the 2
> copper (eth0 and eth1) and call it bond0. bond0 carries my "private" IP
> for cluster suite communications on the dom0 (physical) cluster.
> 
> Then I bond eth2 and eth3 (fiber) into bond1. I lay down the public
> network for the dom0 cluster on bond1.100 (for example, that would be
> VLAN 100). I also add many (up to 10 or so now) VLANs on bond1
> (bond1.20, bond1.21, bond1.22, etc). Then I create xen bridges to each
> of these bond/VLAN devices. This allows me to put any particular VM on
> any one (or a combination of up to 3) of these xen-bridged bonded VLAN
> devices.
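>
> For example, here are fragments of the sort of ifcfg layout I mean
> (names and options here are illustrative, from memory; the real details
> are in my document):
>
>   # ifcfg-eth2 (and similarly ifcfg-eth3): enslave the fiber NICs
>   DEVICE=eth2
>   MASTER=bond1
>   SLAVE=yes
>   ONBOOT=yes
>
>   # ifcfg-bond1: the bonded device itself; the bonding mode/options
>   # live in /etc/modprobe.conf (alias bond1 bonding, options bonding
>   # mode=1 miimon=100, or whatever suits your switches)
>   DEVICE=bond1
>   ONBOOT=yes
>
>   # ifcfg-bond1.100: VLAN 100 on the bond, enslaved to a xen bridge
>   DEVICE=bond1.100
>   VLAN=yes
>   BRIDGE=xenbr100
>   ONBOOT=yes
>
>   # ifcfg-xenbr100: the bridge a VM then attaches to with
>   # bridge=xenbr100 (if your initscripts don't handle TYPE=Bridge, a
>   # small custom network-script doing brctl addbr/addif does the same)
>   DEVICE=xenbr100
>   TYPE=Bridge
>   ONBOOT=yes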
> 
> My document explains, in detail, how to do all of this :) The only added
> step is that I have to "undefine" (virsh net-undefine default) the
> default network that the new libvirt creates (virbr0). Even with this
> new NAT thing they added, I've been told (by our devs) that the
> preferred way to do static network configurations is with the method I
> lay out. NAT is more for dynamic networks (cable modems, dial-up, wifi,
> etc).
> 
> I'm pretty sure there weren't any significant changes in Fedora 8 (we've
> dropped the word "core" now, btw) that don't exist in RHEL 5.1 with
> respect to the network. 5.0 -> 5.1 is when that NAT change came down
> the pipe.
> 
> Mark
> 
> p.s. I'm happy to answer any other questions you may have about my
> document. I'm quite certain that, if you follow it, you'll have what
> you're looking for.
> 
> Christian Lahti wrote:
>> Hi Dale:
>>
>> I work with David who posted the original question to the mailing
>> list.  I think we need to give a bit more background info on what we
>> are trying to do.  We are running a mixed environment of mostly CentOS
>> 3, 4, and 5; we do have a few Windows servers and XP systems as well.
>> We are looking to virtualize all these platforms.  Normally we have a
>> bonded pair of NICs for the physical hosts, we were able to get this
>> running using CentOS 5 x86_64 with no problems, the guest machines use
>> the bonded pair in bridged mode as expected after a bit of tweaking. 
>> The biggest issue we found with EL5 is that Windows guest performance
>> is dismal at best, hence our decision to have a look at Fedora Core 8
>> x86_64.  I am happy to report that performance for all of our guest
>> platforms is *very* good with FC8, but it seems that libvirt changed
>> the way networking is set up for Xen.  The default NAT configuration is
>> pretty useless for a production server environment.  Thanks to the
>> mailing list we are now able to bridge a single NIC on FC8 (like eth0
>> for example), but we cannot figure out how to get a bridge for bond0
>> (comprised of eth0 and eth1) defined and available to Xen.  All the
>> tweaks that worked fine on EL5 have not worked so far on FC8.  I am
>> going to review your document tomorrow and give it a try, but any idea
>> on whether your methodology will work on FC8 and libvirt?  I am
>> willing to blow a Sunday to get this worked out once and for all :)
>>
>> Basically we are after good performance on both para and fully
>> virtualized guests using a bonded pair of GB NICs for speed and
>> redundancy.  If this can be achieved with enterprise linux then that
>> would be preferable, but we will go FC8 if the bonding thing can be
>> sorted out.  By the way Xensource 4.x looks to be a respin of RHEL5
>> and has pretty good performance but their free version is limited to
>> 32bit (and hence 4GB ram).  Adding the clustering failover is the next
>> step of course :)
>>
>> Thanks again for the help so far.

In your position, I might consider spending another Sunday seeing whether 
the F8 tools run on C5, and if not, what's needed.

The -xen kernel is probably needed, along with the most obvious *virt* 
packages. There might not be a lot of building to do, and the odds are 
good that a Fedora kernel will "just work," depending on whether you need 
extra drivers.





-- 

Cheers
John

-- spambait
1aaaaaaa at coco.merseine.nu  Z1aaaaaaa at coco.merseine.nu
-- Advice
http://webfoot.com/advice/email.top.php
http://www.catb.org/~esr/faqs/smart-questions.html
http://support.microsoft.com/kb/555375

You cannot reply off-list:-)

--
Fedora-xen mailing list
Fedora-xen at redhat.com
https://www.redhat.com/mailman/listinfo/fedora-xen