[libvirt-users] Poor network performance

Sławomir Kapłoński slawek at kaplonski.pl
Tue May 16 09:23:26 UTC 2017


Hello,

We found what the issue was. Our OVS bridge had datapath_type=netdev set. Changing it to datapath_type=system fixed our issue :)
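
For anyone hitting the same problem, this is roughly how to check and change
it with ovs-vsctl (the bridge name br0 is just an example, substitute your own):

  ovs-vsctl get Bridge br0 datapath_type
  ovs-vsctl set Bridge br0 datapath_type=system

You may need to restart ovs-vswitchd (or recreate the bridge) for the change
to take effect.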

Pozdrawiam / Best regards
Sławek Kapłoński
slawek at kaplonski.pl

> On 15.05.2017, at 11:20, Sławomir Kapłoński <slawek at kaplonski.pl> wrote:
> 
> Hello,
> 
> I have 'noqueue' configured on the tap and veth devices, the guest type is of course KVM, and I'm using the virtio model for the VM's NIC.
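> 
> For reference, this is roughly how I checked it (the tap name is the one
> from the log below; the domain name is just an example):
> 
>   tc qdisc show dev tap27903b5e-06
>   virsh dumpxml mydomain | grep -A 2 "model type"
> 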
> What we found is that on Xenial (where performance is poor) the ovs-vswitchd process uses 100% of a CPU during the test, and there are messages like this in the OVS logs:
> 
> 2017-05-12T14:22:04.351Z|00125|poll_loop|INFO|wakeup due to [POLLIN] on fd 149 (AF_PACKET(tap27903b5e-06)(protocol=0x3)<->) at ../lib/netdev-linux.c:1139 (86% CPU usage)
> 
> An identical setup (with the same versions of OVS, libvirt, qemu and kernel) works properly on Trusty.
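> 
> A quick way to compare the bridge setup between the two hosts is something
> like this (it dumps the name and datapath type of every OVS bridge):
> 
>   ovs-vsctl --columns=name,datapath_type list Bridge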
> 
> Pozdrawiam / Best regards
> Sławek Kapłoński
> slawek at kaplonski.pl
> 
>> On 15.05.2017, at 08:27, Michal Privoznik <mprivozn at redhat.com> wrote:
>> 
>> On 05/12/2017 11:02 AM, Sławomir Kapłoński wrote:
>>> Hello,
>>> 
>>> I have a problem with poor network performance on libvirt with qemu and openvswitch.
>>> I'm currently using libvirt 1.3.1, qemu 2.5 and openvswitch 2.6.0 on Ubuntu 16.04.
>>> My connection diagram looks like this:
>>> 
>>> +------------------+      +---------------------------+      +---------------------------+
>>> |                  |      |        OVS bridge         |      |       Net namespace       |
>>> |        VM        |      |                           |      |                           |
>>> |                  |  +---+----+         +--------+   |      | +--------+                |
>>> |                  +--+tap dev |         | veth A +------------+ veth B |                |
>>> |                  |  +---+----+         +--------+   |      | +--------+                |
>>> |    iperf -s<------------------------------------------------+ iperf -c                 |
>>> |                  |      |                           |      |                           |
>>> +------------------+      +---------------------------+      +---------------------------+
>>> 
>>> 
>>> 
>>> I haven't got any QoS configured in tc on any interface. When I run this iperf test I get only about 150 Mbps. IMHO it should be somewhere around 20-30 Gbps.
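>>> 
>>> (For completeness, the test itself is roughly the following; the namespace
>>> name and the VM address are just examples:)
>>> 
>>>   iperf -s                                  # inside the VM
>>>   ip netns exec testns iperf -c 10.0.0.10   # on the host, from the namespace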
>> 
>> There could be a lot of stuff that is suboptimal here.
>> Firstly, you should pin your vCPUs and guest memory. Then, you might
>> want to enable multiqueue for the tap device (that way packet processing
>> can be split across multiple vCPUs). You also want to make sure you're
>> not overcommitting the host.
>> BTW: you may also try setting the 'noqueue' qdisc for the tap device (if
>> supported by your kernel). Also, the guest is of type KVM, not qemu,
>> right? And you're using the virtio model for the VM's NIC?
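>> 
>> For illustration only, rough sketches of those knobs (domain name, CPU,
>> node and queue numbers are all just examples):
>> 
>>   virsh vcpupin mydomain 0 2                         # pin vCPU 0 to host CPU 2
>>   virsh numatune mydomain --mode strict --nodeset 0  # keep guest memory on node 0
>>   tc qdisc replace dev tap0 root noqueue             # needs a reasonably new kernel
>> 
>> Multiqueue for a virtio NIC goes into the domain XML (virsh edit), e.g.:
>> 
>>   <interface type='bridge'>
>>     <model type='virtio'/>
>>     <driver name='vhost' queues='4'/>
>>   </interface>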
>> 
>> Michal
> 
> 
> _______________________________________________
> libvirt-users mailing list
> libvirt-users at redhat.com
> https://www.redhat.com/mailman/listinfo/libvirt-users
