[Ovirt-devel] NIC Bonding and Failover

Daniel P. Berrange berrange at redhat.com
Thu Sep 11 14:53:23 UTC 2008


On Thu, Sep 11, 2008 at 10:30:25AM -0400, Darryl L. Pierce wrote:
> In order to make this happen, the following flow occurs during the node's
> bootup:
> 
>  1. the node submits its hardware details, including the list of NICs 
>  2. the server updates the database, deleting any records for NICs that
>     weren't reported, and saving records for new NICs reported
>  3. the node makes a request to the new managed node controller, asking for
>     the configuration file
>     a. previously this was a hard-coded file; now it's a generated file
>     b. the node submits the list of mac addresses mapped to the interface
>        names for the system
>     c. the returned configuration will contain at most two sections:
>        1. a pre-augtool script
>        2. an augtool file
>  4. the configuration file is saved to /var/tmp/node-config 
>  5. the configuration file is then passed to bash for execution, to extract
>     the two files
>  6. if the file /var/tmp/pre-config-script exists, it is executed
>     a. this segment loads the bonding kernel module with the correct 
>        bonding mode
>  7. if the file /var/tmp/node-augtool exists, then it is passed to augtool
>  8. the network service is then restarted and the bonding is available.
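> 
> To make the flow concrete, the two extracted files might look roughly
> like the sketch below (illustrative only; the exact module options,
> device names, and Augeas paths are assumptions, not the literal
> generated output):
> 
>     # /var/tmp/pre-config-script -- loads bonding with the selected mode
>     modprobe bonding mode=active-backup miimon=100
> 
>     # /var/tmp/node-augtool -- creates the bond device config via augtool
>     set /files/etc/sysconfig/network-scripts/ifcfg-bond0/DEVICE bond0
>     set /files/etc/sysconfig/network-scripts/ifcfg-bond0/ONBOOT yes
>     set /files/etc/sysconfig/network-scripts/ifcfg-bond0/BOOTPROTO dhcp
>     save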
> 
> To configure a node for bonding/failover/load balancing on the server, the
> admin has to set a bonding type for the node. The choices are:
> 
> 1. Load Balancing 
> 2. Failover
> 3. Broadcast
> 4. Link Aggregation
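> 
> These presumably correspond to the standard Linux bonding driver
> modes, roughly as follows (assumed mapping, shown as module options):
> 
>     modprobe bonding mode=balance-rr     # 1. Load Balancing (round-robin)
>     modprobe bonding mode=active-backup  # 2. Failover
>     modprobe bonding mode=broadcast      # 3. Broadcast
>     modprobe bonding mode=802.3ad        # 4. Link Aggregation (needs an
>                                          #    LACP-capable switch)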
> 
> Only one type can be set per node.

Is that a limitation of the Linux bonding driver, or an explicit design
choice?

If I have a system with lots of NICs, I could imagine that the storage
LAN might want a different bonding config from the guest LAN, or from
the management LAN. Then again, you could argue that in that case you
can just set up a pair of NICs for each LAN, all in Link Aggregation
mode, which effectively gives you load balancing as well as failover
on link failure anyway.
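
As a sketch of what that multi-bond case could look like in
initscripts-style config (hypothetical file contents; the names and
options are illustrative, not anything the server generates today):

    # /etc/sysconfig/network-scripts/ifcfg-bond0  (storage LAN, LACP)
    DEVICE=bond0
    ONBOOT=yes
    BONDING_OPTS="mode=802.3ad miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-bond1  (management LAN, failover)
    DEVICE=bond1
    ONBOOT=yes
    BONDING_OPTS="mode=active-backup miimon=100"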

> The user will then be able to select two or more NICs on that node and
> enslave them to a bonded interface. To do that, they will:
> 
> 1. create a bonded interface and give it a name and an interface name
> 2. select two or more NICs and associate them with the bonded interface
> 
> The next time the node boots, it will load the bonding module and pass
> in the appropriate mode for the bonding type selected.
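> 
> For example, enslaving eth0 and eth1 to a bond named bond0 might come
> out as augtool commands along these lines (a sketch; the device names
> are illustrative):
> 
>     set /files/etc/sysconfig/network-scripts/ifcfg-eth0/MASTER bond0
>     set /files/etc/sysconfig/network-scripts/ifcfg-eth0/SLAVE yes
>     set /files/etc/sysconfig/network-scripts/ifcfg-eth1/MASTER bond0
>     set /files/etc/sysconfig/network-scripts/ifcfg-eth1/SLAVE yes
>     save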
> 
> Questions?

Don't forget that we need to add bridging on top of that if the bonded
pair is to be used for the guest LAN. Potentially also bridges on top
of VLANs on top of bonds.
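
For instance, the stacking could look like this in ifcfg terms
(illustrative names, not actual generated config):

    # bond enslaved to a bridge for the guest LAN  (bond -> bridge)
    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    ONBOOT=yes
    BRIDGE=br0

    # /etc/sysconfig/network-scripts/ifcfg-br0
    DEVICE=br0
    TYPE=Bridge
    ONBOOT=yes

    # or a VLAN on the bond, enslaved to its own bridge
    # (bond -> VLAN -> bridge)
    # /etc/sysconfig/network-scripts/ifcfg-bond0.42
    DEVICE=bond0.42
    VLAN=yes
    ONBOOT=yes
    BRIDGE=brvlan42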

Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|



