[libvirt] RFC: network interface "tags" vs. portgroups

Laine Stump laine at laine.org
Thu Jun 26 08:16:45 UTC 2014


On 06/19/2014 09:08 AM, Michal Privoznik wrote:
> On 10.06.2014 12:01, Laine Stump wrote:
>> A couple releases ago (commit 7d5bf484, first appeared in 1.2.2) I
>> modified the domain interface status xml to show what resources are
>> actually in use for an interface, superseding the interface config in
>> the cases where they conflict with each other.
>>
>> In particular, if there is an interface of type='network' that
>> references a portgroup of the network in the <source> element, the
>> interface status will not contain a <source> element showing the network
>> and portgroup names, but will instead show the <source> resulting from
>> applying the config. For example, given the following domain interface and
>> network definitions:
>>
>>
>>      <interface type='network'>
>>        <source network='mynet' portgroup='xyzzy'/>
>>        ...
>>      </interface>
>>
>>
>>      <network>
>>        <name>mynet</name>
>>        <forward mode='bridge'/>
>>        <bridge name='br0'/>
>>        <portgroup name='xyzzy'>
>>          <bandwidth>
>>            <inbound average='1000' peak='5000' floor='200' burst='1024'/>
>>            <outbound average='128' peak='256' burst='256'/>
>>          </bandwidth>
>>        </portgroup>
>>      </network>
>>
>> the status that was previously displayed when the domain was running
>> would be identical to the config above (except that it would also
>> contain the tap device name and alias). But now the status will be this:
>>
>>
>>      <interface type='bridge'>
>>        <source bridge='br0'/>
>>        <bandwidth>
>>          <inbound average='1000' peak='5000' floor='200' burst='1024'/>
>>          <outbound average='128' peak='256' burst='256'/>
>>        </bandwidth>
>>        ...
>>      </interface>
>>
>> The advantage here is that a hook script for the domain will be able to
>> see the bandwidth (and vlan and physical device, if any) that are
>> actually being used by the domain's interface. Because the config and
>> status both use the same elements/attributes, we can only show one or
>> the other; the thinking was that normally the status will be what is
>> desired, and anyone who really wants to look at the config should use
>> the VIR_DOMAIN_XML_INACTIVE flag when calling virDomainGetXMLDesc().
>>
>> As you would expect, a few months later (after the release of 1.2.4)
>> someone on IRC checked in with a problem caused by this change - they
>> had been using the portgroup name in the <source> element of the
>> interface to determine what action to take during migration; they didn't
>> even have any libvirt config stored in the portgroup, but were just
>> using its name as a tag. Since the portgroup name is only a part of the
>> <source> element when the interface is type='network', they now don't
>> have a tag in the xml to use for their decision (and since they aren't
>> explicitly calling virDomainGetXMLDesc() themselves, they can't simply
>> get the INACTIVE xml to solve their problem).
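Incidentally, the regression they hit is easy to reproduce offline. A minimal
Python sketch (stdlib only, using abbreviated copies of the config and status
XML from above):

```python
# Demonstrates the visibility change: the portgroup attribute survives
# only in the inactive config, not in the live status XML.
import xml.etree.ElementTree as ET

config_xml = ("<interface type='network'>"
              "<source network='mynet' portgroup='xyzzy'/>"
              "</interface>")

status_xml = ("<interface type='bridge'>"
              "<source bridge='br0'/>"
              "</interface>")

def source_portgroup(doc):
    # Rough equivalent of XPath string(./source/@portgroup):
    # returns the attribute value, or None when it is absent.
    src = ET.fromstring(doc).find('source')
    return src.get('portgroup') if src is not None else None

print(source_portgroup(config_xml))  # xyzzy
print(source_portgroup(status_xml))  # None
```

A tool that only sees the live XML (as the IRC user's did) gets None and has
nothing left to branch on.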
>>
>> This use of a portgroup name struck me as potentially useful (although
>> it is a slight hijacking of the original intent of portgroups), so I
>> would like to restore that functionality. I came up with a couple
>> different ways to solve the problem, and am looking for opinions before
>> I spend any time on either.
>>
>> Solution 1:
>>
>> My initial thought was to just restore the portgroup name in the status
>> XML; that could be done by moving the portgroup name out of the
>> network-specific part of the object and into common data for all
>> interface types (this way it could show up in the <source> element no
>> matter what the interface type is). However, once we've done that it
>> becomes enticing to allow specification of a portgroup even in cases
>> where the interface type != network; in those cases though, the
>> portgroup would be *only* a tag to be used by external entities; this
>> would lead to lax checking for existence of the specified portgroup, and
>> may end up with people misspelling a portgroup name, then mistakenly
>> believing that (e.g.) they had a bandwidth limit applied to a domain
>> interface when none was in fact in effect. (Alternatively, we could
>> allow it only if the interface *config* was of type='network', but that
>> seems somehow logically broken, and you can bet that eventually someone
>> would ask us to allow it for all types.)
>>
>> Solution 2:
>>
>> An alternate idea I had was to add a new <tag name='x'/> element to
>> interfaces, networks, and portgroups. An interface could have multiple
>> tags, and would assume the tags of its <network> when active. A <tag>
>> would be purely for use by external entities - it would mean nothing to
>> libvirt. For example, given this admittedly extreme case:
>>
>>      <interface type='network'>
>>        <source network='mynet' portgroup='xyzzy'/>
>>        <tag name='wumpus'/>
>>        ...
>>      </interface>
>>
>>      <network>
>>        <name>mynet</name>
>>        <tag name='twisty'/>
>>        <forward mode='bridge'/>
>>        <bridge name='br0'/>
>>        <portgroup name='xyzzy'>
>>          <tag name='xyzzytag'/>
>>          <bandwidth>
>>            <inbound average='1000' peak='5000' floor='200' burst='1024'/>
>>            <outbound average='128' peak='256' burst='256'/>
>>          </bandwidth>
>>        </portgroup>
>>      </network>
>>
>> when the interface was active, its status xml would look like this:
>>
>>      <interface type='bridge'>
>>        <tag name='wumpus'/>
>>        <tag name='twisty'/>
>>        <tag name='xyzzytag'/>
>>        <source bridge='br0'/>
>>        <bandwidth>
>>          <inbound average='1000' peak='5000' floor='200' burst='1024'/>
>>          <outbound average='128' peak='256' burst='256'/>
>>        </bandwidth>
>>        ...
>>      </interface>
>>
>> These tags would mean nothing to libvirt, but would be available for
>> management applications to make decisions during domain
>> start/stop/migration/etc.
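To make the intended use concrete: a hook script (which is handed the domain
status XML on stdin) could collect the tag names with something like the
sketch below. Note that <tag> is purely hypothetical at this point - nothing
in libvirt emits it today.

```python
# Sketch only: <tag> is the proposed (not yet implemented) element.
# A qemu hook would call this as interface_tags(sys.stdin.read()).
import xml.etree.ElementTree as ET

def interface_tags(domain_xml):
    """Collect <tag name='...'/> names from every <interface>, in order."""
    root = ET.fromstring(domain_xml)
    return [t.get('name')
            for iface in root.iter('interface')
            for t in iface.findall('tag')]

status = ("<domain><devices>"
          "<interface type='bridge'>"
          "<tag name='wumpus'/><tag name='twisty'/><tag name='xyzzytag'/>"
          "<source bridge='br0'/>"
          "</interface>"
          "</devices></domain>")

print(interface_tags(status))  # ['wumpus', 'twisty', 'xyzzytag']
```

A management app could then branch on simple membership tests
(`'wumpus' in interface_tags(xml)`) without libvirt attaching any meaning
to the names.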
>
> While solution 1 is ad hoc, I like it more than 2 if I had to choose.
> The problem with 2 is that it pollutes our domain XML. Next time we
> invent hooks for something else (nwfilters, storage, whatever) we're
> in the same situation again, which will result in even more pollution
> if we go with 2 now. That's why I vote for 1 even though it's kind of
> an ad-hoc approach.

I agree with the sentiment, but have just tried implementing option (1)
and found that it creates some ugly code in the parser and (especially)
formatter.

The problem is that in at least one case of parsing and formatting, the
<source> element is handled by separate, self-contained functions
(virDomainHostdevDefParseXMLSubsys() and
virDomainHostdevDefFormatSubsys()). For parsing, the higher level
function can simply "reach through" with
virXPathString("string(./source/@portgroup)", ctxt), and the lower level
function will just ignore the presence of the unrecognized attribute -
ugly factor of 3 (on 1-10). But for the formatter, since the entire
element from < to > is emitted by the lower level function, we need to
add the portgroup name to the arglist - ugly factor of at least 7.

(another option would be to duplicate portgroup in the type-specific
data for every single interface type, but that doesn't make sense at all).
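For illustration, the parse-side "reach through" is structurally like the
Python analogue below (the function names are invented stand-ins; the real
code is the C pair named above, with virXPathString() doing the
reach-through):

```python
# Python analogue of the parse-side layering described above.
# parse_source_subsys() stands in for the self-contained lower-level
# parser; parse_interface() reaches through for @portgroup itself.
import xml.etree.ElementTree as ET

def parse_source_subsys(src):
    # Lower level: reads only the attributes it knows about, and
    # silently ignores the unrecognized @portgroup.
    return {'network': src.get('network'), 'bridge': src.get('bridge')}

def parse_interface(doc):
    iface = ET.fromstring(doc)
    src = iface.find('source')
    info = parse_source_subsys(src)
    # Higher level "reaches through" for the one attribute the
    # lower level doesn't handle:
    info['portgroup'] = src.get('portgroup')
    return info

print(parse_interface("<interface type='network'>"
                      "<source network='mynet' portgroup='xyzzy'/>"
                      "</interface>"))
# {'network': 'mynet', 'bridge': None, 'portgroup': 'xyzzy'}
```

The format side has no equally tidy analogue: since the lower-level function
emits the whole element, the portgroup has to be passed down into it.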

Anyway, I'll finish up this patch and post it, but it may be ugly enough
that we need to reconsider.



