[libvirt-users] What's the best way to make use of VLAN interfaces with VMs?

Richard Achmatowicz rachmato at redhat.com
Tue Dec 3 19:06:23 UTC 2019


Even more puzzling, I reverted to the old configuration to confirm
what I had seen and found that it works in one direction but not the
other: i.e. from 192.168.0.110 to 192.168.0.120, but not the other way
around.

It must be something in my configuration... which I can follow up on.

So, thanks again for your help.

Richard

On 12/3/19 11:36 AM, Richard Achmatowicz wrote:
> Laine
>
> I made the change and I can now ping across both bridges: br1 
> (192.168.0.0/24) and br1600 (192.168.1.0/24):
>
> br1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
>         inet 192.168.0.110  netmask 255.255.255.0  broadcast 192.168.0.255
>         inet6 fe80::1e98:ecff:fe1b:276d  prefixlen 64  scopeid 0x20<link>
>         ether 1c:98:ec:1b:27:6d  txqueuelen 1000  (Ethernet)
>         RX packets 21608  bytes 10289553 (9.8 MiB)
>         RX errors 0  dropped 0  overruns 0  frame 0
>         TX packets 1372  bytes 131012 (127.9 KiB)
>         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
> br1600: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
>         inet 192.168.1.110  netmask 255.255.255.0  broadcast 192.168.1.255
>         inet6 fe80::1e98:ecff:fe1b:276d  prefixlen 64  scopeid 0x20<link>
>         ether 1c:98:ec:1b:27:6d  txqueuelen 1000  (Ethernet)
>         RX packets 3429  bytes 173404 (169.3 KiB)
>         RX errors 0  dropped 0  overruns 0  frame 0
>         TX packets 112  bytes 9490 (9.2 KiB)
>         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
> // ping to another host with the same bridge br1 defined on 192.168.0.120
> [root at clusterdev01 ~]# ping 192.168.0.120
> PING 192.168.0.120 (192.168.0.120) 56(84) bytes of data.
> 64 bytes from 192.168.0.120: icmp_seq=1 ttl=64 time=0.481 ms
> 64 bytes from 192.168.0.120: icmp_seq=2 ttl=64 time=0.244 ms
> 64 bytes from 192.168.0.120: icmp_seq=3 ttl=64 time=0.242 ms
> 64 bytes from 192.168.0.120: icmp_seq=4 ttl=64 time=0.244 ms
> 64 bytes from 192.168.0.120: icmp_seq=5 ttl=64 time=0.237 ms
> ^C
>
> // ping to another host with the same bridge br1600 defined on 192.168.1.120
> [root at clusterdev01 ~]# ping 192.168.1.120
> PING 192.168.1.120 (192.168.1.120) 56(84) bytes of data.
> 64 bytes from 192.168.1.120: icmp_seq=1 ttl=64 time=0.316 ms
> 64 bytes from 192.168.1.120: icmp_seq=2 ttl=64 time=0.245 ms
> 64 bytes from 192.168.1.120: icmp_seq=3 ttl=64 time=0.176 ms
> 64 bytes from 192.168.1.120: icmp_seq=4 ttl=64 time=0.213 ms
> ^C
>
> However, for some reason, multicast has stopped working across the
> bridge interface, although it was working in the previous configuration
> (with br1 and br1.600).
>
> I test multicast using a multicast test app, binding the receiver to
> br1 on one host (192.168.0.120) and then binding the sender to br1 on
> the other host (192.168.0.110). I also start an instance of tshark on
> each host to check which packets are being sent across. The multicast 
> packets are seen by tshark to be crossing the bridge but for some 
> reason the receiver app does not receive them. The bridges also have 
> the multicast_querier and multicast_snooping flags enabled (e.g. for 
> br1 on both hosts):
>
> [root at clusterdev01 ~]# cat /sys/class/net/br1/bridge/multicast_querier
> 1
> [root at clusterdev01 ~]# cat /sys/class/net/br1/bridge/multicast_snooping
> 1
> [root at clusterdev02 ~]# cat /sys/class/net/br1/bridge/multicast_querier
> 1
> [root at clusterdev02 ~]# cat /sys/class/net/br1/bridge/multicast_snooping
> 1
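>
> // (if these flags ever come up disabled after reconfiguring the
> // bridges, they can be set back through the same sysfs files shown
> // above, e.g.
> //   echo 1 > /sys/class/net/br1/bridge/multicast_querier
> //   echo 1 > /sys/class/net/br1/bridge/multicast_snooping )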
>
> // 1. start the multicast test app sender and send a packet across br1
> // (both apps use the fixed multicast address 239.11.12.13 and port 5555)
>
> [root at clusterdev01 bridge-multicast-test]# ./start-ipv4-mcast-sender.sh 192.168.0.110
> Socket #1=0.0.0.0/0.0.0.0:5555, ttl=32, bind interface=/192.168.0.110
> > 1234567890
> >
>
> // 2. tshark on the sending host after sending packet
>
> [root at clusterdev01 ~]# tshark -i br1 -f "igmp or host 239.11.12.13"
> Running as user "root" and group "root". This could be dangerous.
> Capturing on 'br1'
>     1 0.000000000 192.168.0.110 → 239.11.12.13 UDP 52 5555 → 5555 Len=10
>     2 55.389831698      0.0.0.0 → 224.0.0.1    IGMPv2 60 Membership Query, general
>     3 60.194658894 192.168.0.115 → 224.0.0.251  IGMPv2 46 Membership Report group 224.0.0.251
>
> // 3. tshark on the receiving host across bridge br1
>
> [root at clusterdev02 ~]# tshark -i br1 -f "igmp or host 239.11.12.13"
> Running as user "root" and group "root". This could be dangerous.
> Capturing on 'br1'
>     1 0.000000000 192.168.0.120 → 224.0.0.22   IGMPv3 54 Membership Report / Join group 239.11.12.13 for any sources
>     2 0.902032534 192.168.0.120 → 224.0.0.22   IGMPv3 54 Membership Report / Join group 239.11.12.13 for any sources
>     3 41.448921858 192.168.0.110 → 239.11.12.13 UDP 60 5555 → 5555 Len=10
>
> // 4. the previously started multicast test app receiver; no packet received
>
> [root at clusterdev02 bridge-multicast-test]# ./start-ipv4-mcast-receiver.sh 192.168.0.120
> Socket=0.0.0.0/0.0.0.0:5555, bind interface=/192.168.0.120
> // NO PACKET APPEARS HERE :-(
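>
> // (one check worth doing at this point, as a guess: confirm the join
> // actually landed on br1 on the receiving host, e.g. with
> //   ip maddr show dev br1
> // which should list 239.11.12.13, and make sure no firewalld/iptables
> // rule on clusterdev02 is dropping incoming UDP to port 5555)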
>
>
> It's puzzling that the packet is seen to arrive on clusterdev02 but is 
> not received by the test app. It's also odd to see IGMPv3 messages on 
> the receiver and IGMPv2 messages on the sender.
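>
> // (the IGMPv2/IGMPv3 mismatch can be pinned down per interface if
> // needed; the stock kernel sysctl for that is e.g.
> //   sysctl -w net.ipv4.conf.br1.force_igmp_version=2
> // which would make the host answer the v2 querier with v2 reports)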
>
> Investigating ....
>
> Richard
>
> On 11/29/19 12:15 PM, Richard Achmatowicz wrote:
>> Hi Laine
>>
>> What you have suggested sounds eminently reasonable. Thanks for your 
>> advice. I'm going to give it a shot and report back.
>>
>> Richard
>>
>> On 11/27/19 1:38 PM, Laine Stump wrote:
>>> On 11/26/19 11:07 PM, Richard Achmatowicz wrote:
>>>> Hello
>>>>
>>>> I have a problem with attaching VMs to a VLAN interface.
>>>>
>>>> Here is my setup: I have several physical hosts connected by a 
>>>> physical switch.  Each host has two NICs leading to the switch, 
>>>> which have been combined into a team, team0. Each host has a
>>>> bridge br1, which has team0 as a slave. So communication between 
>>>> hosts is based on the IP address of bridge br1 on each host.
>>>>
>>>> Up until recently, using libvirt and KVM, I was creating VMs which 
>>>> had one interface attached to the default virtual network and one
>>>> interface attached to the bridge:
>>>>
>>>> virt-install ... --network network=default --network bridge=br1 ...
>>>>
>>>> I would then statically assign an IP address to the bridge 
>>>> interface on the guest when installing the OS.
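>>>>
>>>> (For reference, the bridge-attached NIC shows up in the guest's
>>>> libvirt XML as roughly the following; the model element is typical
>>>> virt-install output and may differ:
>>>>
>>>>    <interface type='bridge'>
>>>>      <source bridge='br1'/>
>>>>      <model type='virtio'/>
>>>>    </interface>
>>>> )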
>>>>
>>>> A few days ago, a VLAN was introduced to split up the network. I 
>>>> created a new VLAN interface br1.600 on each of the hosts. My 
>>>> initial attempt was to try this:
>>>>
>>>> virt-install ... --network network=default --network bridge=br1.600 ...
>>>>
>>>> which did not work. It then dawned on me that a VLAN interface and 
>>>> a bridge aren't treated the same. So I started to look for ways to 
>>>> allow my VMs to bind to this new interface.
>>>>
>>>> This would seem to be a common situation. What is the best way to 
>>>> work around this?
>>>>
>>>> Both the host bridge and the host VLAN interface already have their 
>>>> assigned IP addresses and appear like this in libvirt:
>>>>
>>>> [root at clusterdev01 ]# ifconfig
>>>> br1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
>>>>          inet 192.168.0.110  netmask 255.255.255.0  broadcast 192.168.0.255
>>>>          inet6 fe80::1e98:ecff:fe1b:276d  prefixlen 64  scopeid 0x20<link>
>>>>          ether 1c:98:ec:1b:27:6d  txqueuelen 1000 (Ethernet)
>>>>          RX packets 833772  bytes 2976958254 (2.7 GiB)
>>>>          RX errors 0  dropped 0  overruns 0  frame 0
>>>>          TX packets 331237  bytes 23335124 (22.2 MiB)
>>>>          TX errors 0  dropped 0 overruns 0  carrier 0 collisions 0
>>>>
>>>> br1.600: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
>>>>          inet 192.168.1.110  netmask 255.255.255.0  broadcast 192.168.1.255
>>>>          inet6 fe80::1e98:ecff:fe1b:276d  prefixlen 64  scopeid 0x20<link>
>>>>          ether 1c:98:ec:1b:27:6d  txqueuelen 1000 (Ethernet)
>>>>          RX packets 189315  bytes 9465744 (9.0 MiB)
>>>>          RX errors 0  dropped 0  overruns 0  frame 0
>>>>          TX packets 302  bytes 30522 (29.8 KiB)
>>>>          TX errors 0  dropped 0 overruns 0  carrier 0 collisions 0
>>>>
>>>> [root at clusterdev01]# virsh iface-list --all
>>>>   Name                 State      MAC Address
>>>> ---------------------------------------------------
>>>>   br1                  active     1c:98:ec:1b:27:6d
>>>>   br1.600              active     1c:98:ec:1b:27:6d
>>>>
>>>> [root at clusterdev01 sysadmin]# virsh iface-dumpxml br1.600
>>>> <interface type='vlan' name='br1.600'>
>>>>    <protocol family='ipv4'>
>>>>      <ip address='192.168.1.110' prefix='24'/>
>>>>    </protocol>
>>>>    <protocol family='ipv6'>
>>>>      <ip address='fe80::1e98:ecff:fe1b:276d' prefix='64'/>
>>>>    </protocol>
>>>>    <link state='up'/>
>>>>    <vlan tag='600'>
>>>>      <interface name='br1'/>
>>>>    </vlan>
>>>> </interface>
>>>>
>>>> I tried following some suggestions which wrapped the vlan interface 
>>>> in a bridge interface, but it ended up trashing the br1.600
>>>> interface which was originally defined on the host.
>>>>
>>>> Is there a failsafe way to deal with such a situation? Am I doing 
>>>> something completely wrong here? I would like br1.600 to behave
>>>> like br1...
>>>>
>>>> Any suggestions or advice greatly appreciated.
>>>
>>>
>>> I guess what you need is for all the traffic from your guests to go 
>>> out on the physical network tagged with vlan id 600, and you want 
>>> that to be transparent to the guests, right?
>>>
>>> The simplest way to handle this is to create a vlan interface off
>>> the ethernet that you have attached to br1 (not br1 itself), so it
>>> would be named something like "eth0.600", and then create a new
>>> bridge (call it, say, "br600") and attach eth0.600 to br600. Then
>>> your guests would be created with "--network bridge=br600".
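>>>
>>> A minimal sketch of that layout with iproute2, assuming the uplink
>>> currently enslaved to br1 is the team device team0 (adjust the names
>>> to match what the hosts actually use):
>>>
>>>     # team0/br600 names follow this thread; substitute as appropriate
>>>     ip link add link team0 name team0.600 type vlan id 600
>>>     ip link add br600 type bridge
>>>     ip link set team0.600 master br600
>>>     ip link set team0.600 up
>>>     ip link set br600 up
>>>
>>> and then "virt-install ... --network bridge=br600". A persistent
>>> version of this would normally go into the host's network
>>> configuration rather than being run by hand.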
>>>
>>> (Note that Linux host bridges do now support vlan tagging (and maybe
>>> even trunking) at the port level, but libvirt hasn't added support
>>> for it; in other words, "Patches Welcome!" :-))
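>>>
>>> (For the curious, doing that port-level tagging by hand outside
>>> libvirt would look roughly like this, where vnet0 stands in for the
>>> guest's tap device:
>>>
>>>     # enable per-port vlan filtering on the existing bridge
>>>     ip link set br1 type bridge vlan_filtering 1
>>>     # vnet0 is a placeholder: untagged guest traffic becomes vlan 600
>>>     bridge vlan add dev vnet0 vid 600 pvid untagged
>>>     # the uplink carries vlan 600 tagged
>>>     bridge vlan add dev team0 vid 600
>>>
>>> so libvirt itself never needs to know about the vlan, though the
>>> entry for vnet0 would have to be reapplied whenever the tap device
>>> is recreated.)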
>>>




