[libvirt] [RFC] request for suggestions on supporting the ivshmem device

Wangrui (K) moon.wangrui at huawei.com
Wed May 14 08:23:21 UTC 2014


Hi,

Libvirt does not currently support the ivshmem (Inter-VM Shared Memory) device,
so I would like to know whether there is any plan to support it in the future.
If not, I would like to contribute a series of patches to do so.

On Jan 28, Wangyufei (James) raised this question, and Daniel replied with two suggestions:
1. Libvirt should be capable of configuring ivshmem in the guest's XML.
2. An ivshmem daemon is needed on the host to support it, and libvirt is the recommended place to provide such a daemon.
Please refer to https://www.redhat.com/archives/libvir-list/2014-January/msg01335.html for details.

I will work on the first suggestion; the second is left for someone else to take up.

Here is the detailed work I plan to do to support ivshmem configuration in libvirt
(a rough sketch of the device definition these functions would operate on follows the list):

virDomainDefParseXML:               parse ivshmem device XML when defining dom.xml
virDomainDeviceInfoIterateInternal: iterate over ivshmem devices
qemuAssignDevicePCISlots:           assign PCI slots to ivshmem devices
virDomainDefFormatInternal:         format ivshmem device XML (e.g. virsh edit dom)
virDomainDefFree:                   free the ivshmem device definition

qemuBuildCommandLine:               build the ivshmem device command line when the VM starts
qemuAssignDeviceAliases:            assign ivshmem device aliases when the VM starts

virDomainDeviceDefParse:            parse ivshmem device XML on attach
qemuDomainAttachDeviceConfig:       attach an ivshmem device persistently
qemuDomainAttachDeviceLive:         attach an ivshmem device to a running VM

qemuDomainDetachDeviceConfig:       detach an ivshmem device persistently
qemuDomainDetachDeviceLive:         detach an ivshmem device from a running VM
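
As a starting point for discussion, the device definition these functions would
operate on might look roughly like this (a sketch only; all names are hypothetical,
merely modeled on the existing virDomain*Def types in src/conf/domain_conf.h):

  /* Hypothetical sketch, not the actual patch. */
  typedef enum {
      VIR_DOMAIN_IVSHMEM_ROLE_DEFAULT = 0,  /* treated as "master" */
      VIR_DOMAIN_IVSHMEM_ROLE_MASTER,
      VIR_DOMAIN_IVSHMEM_ROLE_PEER,
  } virDomainIvshmemRole;

  typedef struct _virDomainIvshmemDef virDomainIvshmemDef;
  struct _virDomainIvshmemDef {
      int role;                      /* enum virDomainIvshmemRole */
      char *name;                    /* shm name from <memory name=.../> */
      unsigned long long size;       /* region size in MB */
      virDomainChrSourceDef source;  /* socket <source path=.../> for the server case */
      virDomainDeviceInfo info;      /* PCI address and alias */
  };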


There are two ways to use ivshmem with QEMU
(please refer to http://qemu.weilnetz.de/qemu-doc.html#pcsys_005fother_005fdevs ):

1. The guest maps a POSIX shared memory region as a PCI device, which enables
zero-copy communication at the application level between guests. The basic syntax is:

  qemu-system-i386 -device ivshmem,size=<size in format accepted by -m>[,shm=<shm name>]
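
For instance, a concrete invocation (with hypothetical values: a 2 MB region
named ivshmem-demo) would look like:

  qemu-system-i386 -device ivshmem,size=2,shm=ivshmem-demo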

2. If desired, interrupts can be sent between guest VMs accessing the same shared memory region.
Interrupt support requires a shared memory server and a chardev socket to connect to it.
An example syntax when using the shared memory server is:

  qemu-system-i386 -device ivshmem,size=<size in format accepted by -m>[,chardev=<id>][,msi=on]
                           [,ioeventfd=on][,vectors=n][,role=peer|master]
  qemu-system-i386 -chardev socket,path=<path>,id=<id>
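
For instance, with hypothetical values (a 2 MB region, a socket at
/tmp/ivshmem_socket, and a shared memory server already listening there),
the invocation might be:

  qemu-system-i386 -chardev socket,path=/tmp/ivshmem_socket,id=ivshmem0 \
                   -device ivshmem,size=2,chardev=ivshmem0,msi=on,vectors=4,role=peer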

The corresponding XML configurations for the two QEMU command lines above are shown below
(Examples 1 and 2 correspond to the first form, Example 3 to the second):

Example 1: automatically attach device with KVM (shm-backed; PCI address assigned by libvirt)

  <devices>
    <ivshmem role='master'>
      <memory name='dom-ivshmem' size='2'/>
    </ivshmem>
  </devices>
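
Presumably libvirt would translate this XML into a command line along these
lines (a sketch only; the exact property mapping is open for discussion):

  -device ivshmem,size=2,shm=dom-ivshmem,role=master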

NOTE: "size" is the ivshmem size in MB; "name" is the shm name.
      "role" is optional; it may be set to "master" or "peer", and defaults to "master".

Example 2: manually attach device with static PCI slot 4 requested

  <devices>
    <ivshmem role='master'>
      <memory name='dom-ivshmem' size='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </ivshmem>
  </devices>
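
Here the static <address> element would pin the device to PCI slot 4,
presumably via the standard qdev addressing properties, e.g. (sketch only):

  -device ivshmem,size=2,shm=dom-ivshmem,role=master,bus=pci.0,addr=0x4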

Example 3: automatically attach device with KVM, connected to the shared memory server

  <devices>
    <ivshmem role='master' type='unix'>
      <source mode='connect' path='/tmp/ivshmem'/>
      <memory name='dom-ivshmem' size='2'/>
    </ivshmem>
  </devices>
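
This variant would presumably map to the chardev form shown earlier, e.g.
(a sketch, with a libvirt-chosen chardev id):

  -chardev socket,path=/tmp/ivshmem,id=charivshmem0
  -device ivshmem,size=2,chardev=charivshmem0,role=master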

NOTE: "path" is the path of the shared memory socket, which is set up by the daemon.
      "source mode" and "type" are similar to those of the vmchannel device.


I'm looking forward to your suggestions. Thank you very much.



