[virt-tools-list] [PATCH V2] addhardware: Deal with the conflict host device

Cole Robinson crobinso at redhat.com
Fri Aug 15 17:25:30 UTC 2014


On 08/13/2014 06:15 AM, Lin Ma wrote:

Thanks for the patch. One general style note: the code base mostly wraps
long expressions in parentheses rather than using escaped line
continuations, so please follow that format.
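For illustration, here is the same kind of check written both ways, using toy objects rather than the real virtinst types:

```python
def matches_with_backslashes(hostdev, dev):
    # Escaped line continuations (the style the patch uses):
    return hostdev.vendor == dev.vendor_id and \
           hostdev.product == dev.product_id


def matches_with_parens(hostdev, dev):
    # Parentheses wrapping the whole expression (preferred here):
    return (hostdev.vendor == dev.vendor_id and
            hostdev.product == dev.product_id)
```

Both are equivalent; the parenthesized form survives reindentation and matches the rest of the code base.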

> diff --git a/virtManager/addhardware.py b/virtManager/addhardware.py
> index 02cff57..eedb351 100644
> --- a/virtManager/addhardware.py
> +++ b/virtManager/addhardware.py
> @@ -835,6 +835,11 @@ class vmmAddHardware(vmmGObjectUI):
>  
>          devs = self.conn.get_nodedevs(devtype, devcap)
>          for dev in devs:
> +            if virtinst.VirtualHostDevice.is_conflict_hostdev \
> +                    (self.conn.get_backend(), devtype, dev, "inactive"):
> +                # Doesn't append device which is in use by an active VM to list.
> +                continue
> +

As Giuseppe said, silently removing a device from the list isn't very
transparent and could confuse users. I'd rather just warn in all cases. In
fact, it's perfectly valid to assign the same device to two mutually
exclusive VMs: libvirt will raise an error if you try to start one while the
other is running, so it already prevents malfunction in that case.
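A minimal sketch of the warn-in-all-cases approach. The `err.yes_no()` usage is taken from the patch hunk below; the helper name `confirm_hostdev_use` is made up for illustration:

```python
def confirm_hostdev_use(err, dev_name, vm_names):
    """Ask the user to confirm instead of silently hiding the device.

    err: an object with a yes_no(text, secondary_text) method, like the
    error dialog helper the patch calls (name assumed from the patch).
    """
    if not vm_names:
        # No other guest has the device assigned: nothing to confirm.
        return True
    return err.yes_no(
        "The device %s is already in use by guests: %s" %
        (dev_name, ", ".join(vm_names)),
        "Do you really want to use the device anyway?")
```

The point is that every device stays visible in the list, and the user only gets a prompt when a real overlap exists.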

>              prettyname = dev.pretty_name()
>  
>              for subdev in subdevs:
> @@ -1739,6 +1744,18 @@ class vmmAddHardware(vmmGObjectUI):
>  
>          try:
>              dev = virtinst.VirtualHostDevice(self.conn.get_backend())
> +            # Hostdev collision
> +            names = virtinst.VirtualHostDevice.is_conflict_hostdev \
> +                    (self.conn.get_backend(), devtype, nodedev, "active", \
> +                    is_dup)
> +            if names:
> +                res = self.err.yes_no(
> +                        _('The device is already in use by other guests %s') %
> +                         (names),
> +                        _("Do you really want to use the device?"))
> +                if not res:
> +                    return False
> +
>              dev.set_from_nodedev(nodedev, use_full_usb=is_dup)
>              self._dev = dev
>          except Exception, e:
> diff --git a/virtinst/devicehostdev.py b/virtinst/devicehostdev.py
> index a5d1a2a..39a6213 100644
> --- a/virtinst/devicehostdev.py
> +++ b/virtinst/devicehostdev.py
> @@ -89,5 +89,52 @@ class VirtualHostDevice(VirtualDevice):
>      driver_name = XMLProperty("./driver/@name")
>      rom_bar = XMLProperty("./rom/@bar", is_onoff=True)
>  
> +    @staticmethod
> +    def is_conflict_hostdev(conn, devtype, dev, vm_filter, is_dup=True):
> +        if vm_filter == "active":
> +            ret = []
> +        vms = conn.fetch_all_guests()
> +        for vm in vms:
> +            _backend = vm.conn.lookupByName(vm.name)
> +            if vm_filter == "inactive":
> +                if not _backend.isActive():
> +                    continue
> +            elif vm_filter == "active":
> +                if _backend.isActive():
> +                    continue
> +            for hostdev in vm.get_devices("hostdev"):
> +                if devtype == NodeDevice.CAPABILITY_TYPE_USBDEV and \
> +                        hostdev.type == "usb":
> +                    if is_dup:
> +                        if hostdev.bus == dev.bus and \
> +                                hostdev.device == dev.device and \
> +                                hostdev.vendor == dev.vendor_id and \
> +                                hostdev.product == dev.product_id:
> +                            if vm_filter == "inactive":
> +                                return True
> +                            elif vm_filter == "active":
> +                                ret.append(vm.name)
> +                    else:
> +                        if hostdev.vendor == dev.vendor_id and \
> +                                hostdev.product == dev.product_id:
> +                            if vm_filter == "inactive":
> +                                return True
> +                            elif vm_filter == "active":
> +                                ret.append(vm.name)
> +                elif devtype == NodeDevice.CAPABILITY_TYPE_PCI and \
> +                        hostdev.type == "pci":
> +                    if str(int(hostdev.domain, 16)) == dev.domain and \
> +                            str(int(hostdev.bus, 16)) == dev.bus and \
> +                            str(int(hostdev.slot, 16)) == dev.slot and \
> +                            str(int(hostdev.function, 16)) == dev.function:
> +                        if vm_filter == "inactive":
> +                            return True
> +                        elif vm_filter == "active":
> +                            ret.append(vm.name)
> +        if vm_filter == "inactive":
> +            return False
> +        elif vm_filter == "active":
> +            return ret
> +

I'd like to see this arranged a bit differently. We already have similar
code that matches a hostdev to its nodedev in details.py:lookup_nodedev.
Please move that code into a function like nodedev.py
NodeDevice.compare_to_nodedev; then you can even separate the logic and
implement the function in PCIDevice, USBDevice, etc. Bonus points for unit
tests in tests/nodedev.py. That'll be one patch.

You'll still need details.py:lookup_nodedev to iterate over all the
nodedevices and compare against the passed hostdev, but the comparison logic
will be removed.
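A rough sketch of what that split could look like. The class layout, constructor signatures, and attribute names here are illustrative only, not the actual virtinst code:

```python
class NodeDevice(object):
    def compare_to_nodedev(self, hostdev):
        """Return True if the parsed <hostdev> device refers to this
        node device. Each capability subclass supplies the matching."""
        raise NotImplementedError


class USBDevice(NodeDevice):
    def __init__(self, vendor_id, product_id):
        self.vendor_id = vendor_id
        self.product_id = product_id

    def compare_to_nodedev(self, hostdev):
        return (hostdev.type == "usb" and
                hostdev.vendor == self.vendor_id and
                hostdev.product == self.product_id)


class PCIDevice(NodeDevice):
    def __init__(self, domain, bus, slot, function):
        self.domain = domain
        self.bus = bus
        self.slot = slot
        self.function = function

    def compare_to_nodedev(self, hostdev):
        if hostdev.type != "pci":
            return False
        # In the patch above, nodedev addresses are decimal strings
        # while hostdev XML values are hex, so normalize both sides
        # to int before comparing.
        return (int(hostdev.domain, 16) == int(self.domain) and
                int(hostdev.bus, 16) == int(self.bus) and
                int(hostdev.slot, 16) == int(self.slot) and
                int(hostdev.function, 16) == int(self.function))
```

With the matching isolated per device type, both lookup_nodedev and the new addhardware check become simple loops, and each subclass's logic is unit-testable on its own.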

The next patch will add this logic to addhardware.py, using the new compare
function. I don't think we even need to differentiate between active and
inactive VMs: just warn that the device is attached to other guests
foo,bar,baz, and let libvirt raise the failure if a hotplug operation
conflicts. We don't want to get into the business of predicting when libvirt
will throw an error unless it's absolutely necessary.
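Dropping the active/inactive split, the addhardware side could reduce to one loop that just collects names. This is a sketch: `matches_hostdev` stands in for the proposed compare function, and the VM interface is stubbed to the `name`/`get_devices` usage shown in the patch:

```python
def guests_using_device(vms, matches_hostdev):
    """Collect names of all guests, running or not, whose <hostdev>
    entries match the node device being added.

    matches_hostdev: callable taking a parsed hostdev and returning
    True on a match (e.g. the proposed bound compare method).
    """
    names = []
    for vm in vms:
        if any(matches_hostdev(hd) for hd in vm.get_devices("hostdev")):
            names.append(vm.name)
    return names
```

The caller then shows one warning listing the names and lets libvirt reject the operation if an actual hotplug conflict occurs.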

Thanks,
Cole

