[libvirt] [RFC 0/7] Live Migration with Pass-through Devices proposal

Chen Fan chen.fan.fnst at cn.fujitsu.com
Fri Apr 17 08:53:02 UTC 2015


Background:
Live migration is one of the most important features of virtualization technology.
In modern virtualization, network I/O performance is critical. Current network I/O
virtualization techniques (e.g. para-virtualized I/O, VMDq) still show a significant
performance gap compared to native network I/O. Pass-through network devices achieve
near-native performance, but so far they have prevented live migration, and no
existing method solves live migration with pass-through devices perfectly.

An earlier idea for solving this problem is described in this paper:
https://www.kernel.org/doc/ols/2008/ols2008v2-pages-261-267.pdf
Please refer to it for the details.

I think this problem could be solved by combining existing technologies, and the
following are the steps we are considering for the implementation:

-  before booting the VM, specify in the XML two NICs from which a bonding device
   will be created in the guest (one passthrough NIC and one virtual NIC). The NICs'
   MAC addresses are also given in the XML, which makes it easy for qemu-guest-agent
   to find the corresponding network interfaces in the guest (see sketch (1) below
   the list).

-  when qemu-guest-agent starts up in the guest, it sends a notification to libvirt,
   which then invokes the previously registered initialization callbacks. Through
   these callback functions we can create the bonding device according to the XML
   configuration; here we use the netcf tool, which makes creating the bonding
   device easy (see sketch (2) below the list).

-  during migration, unplug the passthrough NIC, then perform a normal live migration.

-  on the destination side, check whether a new NIC needs to be hotplugged according
   to the specified XML. Usually we use the migrate "--xml" command option to specify
   the destination host's NIC MAC address, because the source side's passthrough NIC
   has a different MAC address; libvirt then hotplugs the device according to the
   destination XML configuration (see sketch (3) below the list).
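
(1) A rough sketch of what such an XML configuration could look like. The element
names and attributes below (in particular the <bond/> and <mac/> elements inside
<hostdev>) are only illustrative of the idea; the actual schema is whatever patch 3
defines:

   <interface type='network'>
     <mac address='52:54:00:aa:bb:01'/>
     <source network='default'/>
   </interface>
   <hostdev mode='subsystem' type='pci' managed='yes'>
     <mac address='52:54:00:aa:bb:02'/>
     <source>
       <address domain='0x0000' bus='0x06' slot='0x10' function='0x0'/>
     </source>
     <bond/>  <!-- marks this NIC as part of the guest bonding device -->
   </hostdev>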
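
(2) Conceptually, the qemuAgentCreateBond interface (patch 4) would send the guest
agent a command along these lines; the command name and argument layout here are
made up for illustration and are not the actual wire format:

   { "execute": "guest-network-create-bond",
     "arguments": { "name": "bond0",
                    "slaves": [ "52:54:00:aa:bb:01",
                                "52:54:00:aa:bb:02" ] } }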
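
(3) A migration could then look roughly like this, where dest.xml is a hypothetical
copy of the domain XML in which the source host's passthrough NIC has been replaced
with one available on the destination host:

   # virsh dumpxml guest > dest.xml
     ... edit dest.xml to reference the destination host's passthrough NIC ...
   # virsh migrate --live --xml dest.xml guest qemu+ssh://dest/system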

TODO:
  1.  when a new NIC is hot-added on the destination side after migration has
      finished, the NIC needs to be re-enslaved to the bonding device in the guest;
      otherwise it stays offline. Maybe we should consider extending the bonding
      driver to support adding interfaces dynamically (see the note below).
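
      For now the re-enslaving would have to be done manually inside the guest,
      e.g. through the bonding driver's sysfs interface (the interface names here
      are examples):

        # ip link set eth1 down   (a slave must be down before enslaving)
        # echo +eth1 > /sys/class/net/bond0/bonding/slaves
        # ip link set eth1 up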

This is an example of how this might work, and I would like to hear some opinions on this scenario.

Thanks,
Chen

Chen Fan (7):
  qemu-agent: add agent init callback when detecting guest setup
  qemu: add guest init event callback to do the initialize work for
    guest
  hostdev: add a 'bond' type element in <hostdev> element
  qemu-agent: add qemuAgentCreateBond interface
  hostdev: add parse ip and route for bond configure
  migrate: hot remove hostdev at perform phase for bond device
  migrate: add hostdev migrate status to support hostdev migration

 docs/schemas/basictypes.rng   |   6 ++
 docs/schemas/domaincommon.rng |  37 ++++++++
 src/conf/domain_conf.c        | 195 ++++++++++++++++++++++++++++++++++++++---
 src/conf/domain_conf.h        |  40 +++++++--
 src/conf/networkcommon_conf.c |  17 ----
 src/conf/networkcommon_conf.h |  17 ++++
 src/libvirt_private.syms      |   1 +
 src/qemu/qemu_agent.c         | 196 +++++++++++++++++++++++++++++++++++++++++-
 src/qemu/qemu_agent.h         |  12 +++
 src/qemu/qemu_command.c       |   3 +
 src/qemu/qemu_domain.c        |  70 +++++++++++++++
 src/qemu/qemu_domain.h        |  14 +++
 src/qemu/qemu_driver.c        |  38 ++++++++
 src/qemu/qemu_hotplug.c       |   8 +-
 src/qemu/qemu_migration.c     |  91 ++++++++++++++++++++
 src/qemu/qemu_migration.h     |   4 +
 src/qemu/qemu_process.c       |  32 +++++++
 src/util/virhostdev.c         |   3 +
 18 files changed, 745 insertions(+), 39 deletions(-)

-- 
1.9.3



