[virt-tools-list] [PATCH] virtinst: Pass Xen machine type to libvirt getDomainCapabilities

Charles Arnold carnold at suse.com
Wed Jun 15 18:56:34 UTC 2016


>>> On 6/15/2016 at 12:13 PM, Cole Robinson <crobinso at redhat.com> wrote: 
> On 06/15/2016 11:40 AM, Charles Arnold wrote:
>>>>> On 6/15/2016 at 09:21 AM, Pavel Hrdina <phrdina at redhat.com> wrote: 
>>> On Wed, Jun 15, 2016 at 08:31:42AM -0600, Charles Arnold wrote:
>>>> Tell libvirt what machine type the user chose for Xen (PV or HVM).
>>>> Without a type specified, the default is to return the capabilities of a PV
>>>> machine. Passing "xenfv" will allow the "Firmware" option to show up
>>>> under "Hypervisor Details" when a Xen HVM guest install is being customized.
>>>> Also specify the name of the SUSE aavmf firmware for aarch64.
>>>>
>>>> diff --git a/virtinst/domcapabilities.py b/virtinst/domcapabilities.py
>>>> index 874fa1e..605d77a 100644
>>>> --- a/virtinst/domcapabilities.py
>>>> +++ b/virtinst/domcapabilities.py
>>>> @@ -78,13 +78,20 @@ class _Features(_CapsBlock):
>>>>  
>>>>  class DomainCapabilities(XMLBuilder):
>>>>      @staticmethod
>>>> -    def build_from_params(conn, emulator, arch, machine, hvtype):
>>>> +    def build_from_params(conn, emulator, arch, machine, hvtype, os_type):
>>>>          xml = None
>>>>          if conn.check_support(
>>>>              conn.SUPPORT_CONN_DOMAIN_CAPABILITIES):
>>>> +            machine_type = machine
>>>> +            # For Xen capabilities pass either xenpv or xenfv
>>>> +            if hvtype == "xen":
>>>> +                if os_type == "hvm":
>>>> +                    machine_type = "xenfv"
>>>> +                else:
>>>> +                    machine_type = "xenpv"
>>>
>>> Hi Charles
>>>
>>> I'm confused about this change, there is no need to do something like this.
>>>
>>> virt-install creates the correct XML if you ask for it.  Please check the
>>> man page for virt-install; there are two options, --hvm and --paravirt.
>>> If you don't specify either of them, virt-install creates a PV guest by
>>> default.
>> 
>> This is tested via the installation wizard GUI. If you select (fullvirt) on
>> the "Xen Type:" drop-down, the machine type is not passed along to this
>> libvirt call to get the capabilities. Without the machine type you can't
>> customize the install and choose UEFI (along with OVMF) to boot the VM.
>> See also the upstream fixes to libvirt to support this at
>> https://www.redhat.com/archives/libvir-list/2016-June/msg00748.html
> 
> Wrong link? That's a discussion about tar formats

Sorry about that. Here is the link I meant to paste.

https://www.redhat.com/archives/libvir-list/2016-June/msg00694.html
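
For reference, the difference is visible with a direct libvirt call. A rough
sketch (assuming a local xen:///system connection; the emulator path is the
one from the capabilities output below):

  import libvirt

  conn = libvirt.open("xen:///system")
  emu = "/usr/lib/xen/bin/qemu-system-i386"

  # With no machine type the Xen driver reports the PV capabilities,
  # so no firmware/loader information comes back.
  pv_caps = conn.getDomainCapabilities(emu, "x86_64", None, "xen")

  # Asking explicitly for "xenfv" returns the HVM capabilities, which is
  # what the GUI needs in order to offer the UEFI/ovmf choice.
  hvm_caps = conn.getDomainCapabilities(emu, "x86_64", "xenfv", "xen")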

> 
> We shouldn't need to use a heuristic here; the machine type should be coming
> from the domain XML that virt-manager builds... though I don't think we
> encode one by default for xen, but instead let the xen driver fill it in for
> us. Can you link to the correct thread, and give the 'virsh capabilities'
> output for your xen connection?

Here is the 'virsh capabilities' output (note the <machine> values are 'xenpv'
for the xen/PV guests and 'xenfv' for the HVM guests):

<capabilities>

  <host>
    <cpu>
      <arch>x86_64</arch>
      <features>
        <pae/>
      </features>
    </cpu>
    <power_management/>
    <migration_features>
      <live/>
    </migration_features>
    <netprefix>vif</netprefix>
    <topology>
      <cells num='2'>
        <cell id='0'>
          <memory unit='KiB'>51380224</memory>
          <cpus num='24'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0-1'/>
            <cpu id='1' socket_id='0' core_id='0' siblings='0-1'/>
            <cpu id='2' socket_id='0' core_id='1' siblings='2-3'/>
            <cpu id='3' socket_id='0' core_id='1' siblings='2-3'/>
            <cpu id='4' socket_id='0' core_id='2' siblings='4-5'/>
            <cpu id='5' socket_id='0' core_id='2' siblings='4-5'/>
            <cpu id='6' socket_id='0' core_id='3' siblings='6-7'/>
            <cpu id='7' socket_id='0' core_id='3' siblings='6-7'/>
            <cpu id='8' socket_id='0' core_id='4' siblings='8-9'/>
            <cpu id='9' socket_id='0' core_id='4' siblings='8-9'/>
            <cpu id='10' socket_id='0' core_id='5' siblings='10-11'/>
            <cpu id='11' socket_id='0' core_id='5' siblings='10-11'/>
            <cpu id='12' socket_id='0' core_id='8' siblings='12-13'/>
            <cpu id='13' socket_id='0' core_id='8' siblings='12-13'/>
            <cpu id='14' socket_id='0' core_id='9' siblings='14-15'/>
            <cpu id='15' socket_id='0' core_id='9' siblings='14-15'/>
            <cpu id='16' socket_id='0' core_id='10' siblings='16-17'/>
            <cpu id='17' socket_id='0' core_id='10' siblings='16-17'/>
            <cpu id='18' socket_id='0' core_id='11' siblings='18-19'/>
            <cpu id='19' socket_id='0' core_id='11' siblings='18-19'/>
            <cpu id='20' socket_id='0' core_id='12' siblings='20-21'/>
            <cpu id='21' socket_id='0' core_id='12' siblings='20-21'/>
            <cpu id='22' socket_id='0' core_id='13' siblings='22-23'/>
            <cpu id='23' socket_id='0' core_id='13' siblings='22-23'/>
          </cpus>
        </cell>
        <cell id='1'>
          <memory unit='KiB'>50331648</memory>
          <cpus num='24'>
            <cpu id='24' socket_id='1' core_id='0' siblings='24-25'/>
            <cpu id='25' socket_id='1' core_id='0' siblings='24-25'/>
            <cpu id='26' socket_id='1' core_id='1' siblings='26-27'/>
            <cpu id='27' socket_id='1' core_id='1' siblings='26-27'/>
            <cpu id='28' socket_id='1' core_id='2' siblings='28-29'/>
            <cpu id='29' socket_id='1' core_id='2' siblings='28-29'/>
            <cpu id='30' socket_id='1' core_id='3' siblings='30-31'/>
            <cpu id='31' socket_id='1' core_id='3' siblings='30-31'/>
            <cpu id='32' socket_id='1' core_id='4' siblings='32-33'/>
            <cpu id='33' socket_id='1' core_id='4' siblings='32-33'/>
            <cpu id='34' socket_id='1' core_id='5' siblings='34-35'/>
            <cpu id='35' socket_id='1' core_id='5' siblings='34-35'/>
            <cpu id='36' socket_id='1' core_id='8' siblings='36-37'/>
            <cpu id='37' socket_id='1' core_id='8' siblings='36-37'/>
            <cpu id='38' socket_id='1' core_id='9' siblings='38-39'/>
            <cpu id='39' socket_id='1' core_id='9' siblings='38-39'/>
            <cpu id='40' socket_id='1' core_id='10' siblings='40-41'/>
            <cpu id='41' socket_id='1' core_id='10' siblings='40-41'/>
            <cpu id='42' socket_id='1' core_id='11' siblings='42-43'/>
            <cpu id='43' socket_id='1' core_id='11' siblings='42-43'/>
            <cpu id='44' socket_id='1' core_id='12' siblings='44-45'/>
            <cpu id='45' socket_id='1' core_id='12' siblings='44-45'/>
            <cpu id='46' socket_id='1' core_id='13' siblings='46-47'/>
            <cpu id='47' socket_id='1' core_id='13' siblings='46-47'/>
          </cpus>
        </cell>
      </cells>
    </topology>
  </host>

  <guest>
    <os_type>xen</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/lib/xen/bin/qemu-system-i386</emulator>
      <machine>xenpv</machine>
      <domain type='xen'/>
    </arch>
  </guest>

  <guest>
    <os_type>xen</os_type>
    <arch name='i686'>
      <wordsize>32</wordsize>
      <emulator>/usr/lib/xen/bin/qemu-system-i386</emulator>
      <machine>xenpv</machine>
      <domain type='xen'/>
    </arch>
    <features>
      <pae/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name='i686'>
      <wordsize>32</wordsize>
      <emulator>/usr/lib/xen/bin/qemu-system-i386</emulator>
      <loader>/usr/libexec/xen/boot/hvmloader</loader>
      <machine>xenfv</machine>
      <domain type='xen'/>
    </arch>
    <features>
      <pae/>
      <nonpae/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
      <hap default='on' toggle='yes'/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/lib/xen/bin/qemu-system-i386</emulator>
      <loader>/usr/libexec/xen/boot/hvmloader</loader>
      <machine>xenfv</machine>
      <domain type='xen'/>
    </arch>
    <features>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
      <hap default='on' toggle='yes'/>
    </features>
  </guest>

</capabilities>


> 
>> 
>> Here is an example of the resulting XML that is desired,
>>   <os>
>>     <type arch='x86_64' machine='xenfv'>hvm</type>
>>     <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x86_64-ms-code.bin</loader>
>>     <boot dev='hd'/>
>>   </os>
>> 
>> I don't know if this is even configurable using virt-install from the
>> command line.
>> 
> 
> All of it is available, see 'virt-install --boot help'
> 
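
Good to know - for the archives, something along these lines looks like it
should produce the <os> block above (untested sketch, the guest name and ISO
path are made up; see 'virt-install --boot help' for the option details):

  virt-install --connect xen:///system --hvm \
      --name xen-uefi-test --memory 2048 --disk size=10 \
      --boot loader=/usr/share/qemu/ovmf-x86_64-ms-code.bin,loader_ro=yes,loader_type=pflash \
      --cdrom /path/to/install.iso
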
>>>
>>>>              try:
>>>>                  xml = conn.getDomainCapabilities(emulator, arch,
>>>> -                    machine, hvtype)
>>>> +                    machine_type, hvtype)
>>>>              except:
>>>>                  logging.debug("Error fetching domcapabilities XML",
>>>>                      exc_info=True)
>>>> @@ -97,7 +104,7 @@ class DomainCapabilities(XMLBuilder):
>>>>      @staticmethod
>>>>      def build_from_guest(guest):
>>>>          return DomainCapabilities.build_from_params(guest.conn,
>>>> -            guest.emulator, guest.os.arch, guest.os.machine, guest.type)
>>>> +            guest.emulator, guest.os.arch, guest.os.machine, guest.type, guest.os.os_type)
>>>>  
>>>>      # Mapping of UEFI binary names to their associated architectures. We
>>>>      # only use this info to do things automagically for the user, it shouldn't
>>>> @@ -112,6 +119,7 @@ class DomainCapabilities(XMLBuilder):
>>>>          "aarch64": [
>>>>              ".*AAVMF_CODE\.fd",  # RHEL
>>>>              ".*aarch64/QEMU_EFI.*",  # gerd's firmware repo
>>>> +            ".*aavmf-aarch64-.*"  # SUSE
>>>>              ".*aarch64.*",  # generic attempt at a catchall
>>>>          ],
>>>>      }
>>>
>>> This hunk should be a separate patch because it's unrelated to the rest of
>>> the patch.  Please send this as a separate patch and also, if possible,
>>> provide some source where we can validate the naming.
>> 
>> I'll send this as a separate patch.
> 
> Actually this bit isn't strictly required; notice the regex below your added
> line will also match that path. But I guess it doesn't hurt to explicitly
> document the various distro paths, so it's up to you.
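
Fair point - a quick check shows the generic catchall already matches a
SUSE-style filename (the path here is just an example, analogous to the ovmf
one above):

  import re

  path = "/usr/share/qemu/aavmf-aarch64-code.bin"   # hypothetical SUSE path

  re.match(r".*aarch64.*", path)          # existing generic catchall -> matches
  re.match(r".*aavmf-aarch64-.*", path)   # the added SUSE pattern   -> matches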

Thanks,

- Charles




