[virt-tools-list] [PATCH] virtinst: Pass Xen machine type to libvirt getDomainCapabilities

Pavel Hrdina phrdina at redhat.com
Thu Jun 16 07:50:20 UTC 2016


On Wed, Jun 15, 2016 at 04:48:24PM -0600, Jim Fehlig wrote:
> On 06/15/2016 12:56 PM, Charles Arnold wrote:
> >>>> On 6/15/2016 at 12:13 PM, Cole Robinson <crobinso at redhat.com> wrote: 
> >> On 06/15/2016 11:40 AM, Charles Arnold wrote:
> >>>>>> On 6/15/2016 at 09:21 AM, Pavel Hrdina <phrdina at redhat.com> wrote: 
> >>>> On Wed, Jun 15, 2016 at 08:31:42AM -0600, Charles Arnold wrote:
> >>>>> Tell libvirt what machine type the user chose for Xen (PV or HVM).
> >>>>> Without a type specified, the default is to return the capabilities of a pv
> >>>>> machine. Passing "xenfv" will allow the "Firmware" option to show up
> >>>>> under "Hypervisor Details" when a Xen HVM guest install is being customized.
> >>>>> Also specify the name of the SUSE aavmf firmware for aarch64.
> >>>>>
> >>>>> diff --git a/virtinst/domcapabilities.py b/virtinst/domcapabilities.py
> >>>>> index 874fa1e..605d77a 100644
> >>>>> --- a/virtinst/domcapabilities.py
> >>>>> +++ b/virtinst/domcapabilities.py
> >>>>> @@ -78,13 +78,20 @@ class _Features(_CapsBlock):
> >>>>>  
> >>>>>  class DomainCapabilities(XMLBuilder):
> >>>>>      @staticmethod
> >>>>> -    def build_from_params(conn, emulator, arch, machine, hvtype):
> >>>>> +    def build_from_params(conn, emulator, arch, machine, hvtype, os_type):
> >>>>>          xml = None
> >>>>>          if conn.check_support(
> >>>>>              conn.SUPPORT_CONN_DOMAIN_CAPABILITIES):
> >>>>> +            machine_type = machine
> >>>>> +            # For Xen capabilities pass either xenpv or xenfv
> >>>>> +            if hvtype == "xen":
> >>>>> +                if os_type == "hvm":
> >>>>> +                    machine_type = "xenfv"
> >>>>> +                else:
> >>>>> +                    machine_type = "xenpv"
> >>>> Hi Charles
> >>>>
> >>>> I'm confused about this change, there is no need to do something like this.
> >>>>
> >>>> virt-install creates a correct XML if you ask for it.  Please check the
> >>>> man page for virt-install; there are two options, --hvm and --paravirt.
> >>>> If you don't specify either of them, virt-install creates a PV guest by
> >>>> default.
> >>> This is tested via the installation wizard GUI. If you select (fullvirt)
> >>> on the "Xen Type:" pop-down, the machine type is not passed along to this
> >>> libvirt call to get the capabilities. Without the machine type you
> >>> can't customize the install and choose UEFI (along with ovmf) to boot the
> >>> VM. See also upstream fixes to libvirt to support this at
> >>> https://www.redhat.com/archives/libvir-list/2016-June/msg00748.html
> >> Wrong link? That's a discussion about tar formats
> > Sorry about that. Here is the link I meant to paste.
> >
> > https://www.redhat.com/archives/libvir-list/2016-June/msg00694.html

So I've tested it, and we get the wrong capabilities for the guest if --hvm is
used.  We always get only xenpv capabilities, and therefore don't allow setting
UEFI for Xen guests.
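The heuristic in the patch quoted above boils down to a small pure function. A
minimal sketch (the helper name is mine, not virtinst's): for a Xen connection,
libvirt keys full capabilities off the machine type ("xenfv" for HVM, "xenpv"
for PV), so the guest os_type is translated accordingly, while other
hypervisors keep whatever machine the caller supplied.

```python
def xen_machine_type(hvtype, os_type, machine):
    """Pick the machine type to pass to getDomainCapabilities.

    Mirrors the heuristic in the proposed virtinst patch: for a Xen
    connection, map the guest os_type to the machine name libvirt
    expects ("xenfv" for HVM, "xenpv" otherwise); for any other
    hypervisor, pass the caller's machine through unchanged.
    """
    if hvtype == "xen":
        return "xenfv" if os_type == "hvm" else "xenpv"
    return machine
```

With this in place, a Xen HVM query asks for "xenfv" capabilities instead of
falling back to the xenpv default.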

> >> We shouldn't need to use a heuristic here, the machine type should be coming
> >> from the domain XML that virt-manager builds... though I don't think we 
> >> encode
> >> one by default for xen but instead let the xen driver fill it in for us. Can
> >> you link to the correct thread, and give the 'virsh capabilities' output for
> >> your xen connection?
> > 'virsh capabilities' output,
> >
> > <capabilities>
> >
> >   <host>
> >     <cpu>
> >       <arch>x86_64</arch>
> >       <features>
> >         <pae/>
> >       </features>
> >     </cpu>
> >     <power_management/>
> >     <migration_features>
> >       <live/>
> >     </migration_features>
> >     <netprefix>vif</netprefix>
> >     <topology>
> >       <cells num='2'>
> >         <cell id='0'>
> >           <memory unit='KiB'>51380224</memory>
> >           <cpus num='24'>
> >             <cpu id='0' socket_id='0' core_id='0' siblings='0-1'/>
> >             <cpu id='1' socket_id='0' core_id='0' siblings='0-1'/>
> >             <cpu id='2' socket_id='0' core_id='1' siblings='2-3'/>
> >             <cpu id='3' socket_id='0' core_id='1' siblings='2-3'/>
> >             <cpu id='4' socket_id='0' core_id='2' siblings='4-5'/>
> >             <cpu id='5' socket_id='0' core_id='2' siblings='4-5'/>
> >             <cpu id='6' socket_id='0' core_id='3' siblings='6-7'/>
> >             <cpu id='7' socket_id='0' core_id='3' siblings='6-7'/>
> >             <cpu id='8' socket_id='0' core_id='4' siblings='8-9'/>
> >             <cpu id='9' socket_id='0' core_id='4' siblings='8-9'/>
> >             <cpu id='10' socket_id='0' core_id='5' siblings='10-11'/>
> >             <cpu id='11' socket_id='0' core_id='5' siblings='10-11'/>
> >             <cpu id='12' socket_id='0' core_id='8' siblings='12-13'/>
> >             <cpu id='13' socket_id='0' core_id='8' siblings='12-13'/>
> >             <cpu id='14' socket_id='0' core_id='9' siblings='14-15'/>
> >             <cpu id='15' socket_id='0' core_id='9' siblings='14-15'/>
> >             <cpu id='16' socket_id='0' core_id='10' siblings='16-17'/>
> >             <cpu id='17' socket_id='0' core_id='10' siblings='16-17'/>
> >             <cpu id='18' socket_id='0' core_id='11' siblings='18-19'/>
> >             <cpu id='19' socket_id='0' core_id='11' siblings='18-19'/>
> >             <cpu id='20' socket_id='0' core_id='12' siblings='20-21'/>
> >             <cpu id='21' socket_id='0' core_id='12' siblings='20-21'/>
> >             <cpu id='22' socket_id='0' core_id='13' siblings='22-23'/>
> >             <cpu id='23' socket_id='0' core_id='13' siblings='22-23'/>
> >           </cpus>
> >         </cell>
> >         <cell id='1'>
> >           <memory unit='KiB'>50331648</memory>
> >           <cpus num='24'>
> >             <cpu id='24' socket_id='1' core_id='0' siblings='24-25'/>
> >             <cpu id='25' socket_id='1' core_id='0' siblings='24-25'/>
> >             <cpu id='26' socket_id='1' core_id='1' siblings='26-27'/>
> >             <cpu id='27' socket_id='1' core_id='1' siblings='26-27'/>
> >             <cpu id='28' socket_id='1' core_id='2' siblings='28-29'/>
> >             <cpu id='29' socket_id='1' core_id='2' siblings='28-29'/>
> >             <cpu id='30' socket_id='1' core_id='3' siblings='30-31'/>
> >             <cpu id='31' socket_id='1' core_id='3' siblings='30-31'/>
> >             <cpu id='32' socket_id='1' core_id='4' siblings='32-33'/>
> >             <cpu id='33' socket_id='1' core_id='4' siblings='32-33'/>
> >             <cpu id='34' socket_id='1' core_id='5' siblings='34-35'/>
> >             <cpu id='35' socket_id='1' core_id='5' siblings='34-35'/>
> >             <cpu id='36' socket_id='1' core_id='8' siblings='36-37'/>
> >             <cpu id='37' socket_id='1' core_id='8' siblings='36-37'/>
> >             <cpu id='38' socket_id='1' core_id='9' siblings='38-39'/>
> >             <cpu id='39' socket_id='1' core_id='9' siblings='38-39'/>
> >             <cpu id='40' socket_id='1' core_id='10' siblings='40-41'/>
> >             <cpu id='41' socket_id='1' core_id='10' siblings='40-41'/>
> >             <cpu id='42' socket_id='1' core_id='11' siblings='42-43'/>
> >             <cpu id='43' socket_id='1' core_id='11' siblings='42-43'/>
> >             <cpu id='44' socket_id='1' core_id='12' siblings='44-45'/>
> >             <cpu id='45' socket_id='1' core_id='12' siblings='44-45'/>
> >             <cpu id='46' socket_id='1' core_id='13' siblings='46-47'/>
> >             <cpu id='47' socket_id='1' core_id='13' siblings='46-47'/>
> >           </cpus>
> >         </cell>
> >       </cells>
> >     </topology>
> >   </host>
> >
> >   <guest>
> >     <os_type>xen</os_type>
> >     <arch name='x86_64'>
> >       <wordsize>64</wordsize>
> >       <emulator>/usr/lib/xen/bin/qemu-system-i386</emulator>
> >       <machine>xenpv</machine>
> >       <domain type='xen'/>
> >     </arch>
> >   </guest>
> >
> >   <guest>
> >     <os_type>xen</os_type>
> >     <arch name='i686'>
> >       <wordsize>32</wordsize>
> >       <emulator>/usr/lib/xen/bin/qemu-system-i386</emulator>
> >       <machine>xenpv</machine>
> >       <domain type='xen'/>
> >     </arch>
> >     <features>
> >       <pae/>
> >     </features>
> >   </guest>
> >
> >   <guest>
> >     <os_type>hvm</os_type>
> >     <arch name='i686'>
> >       <wordsize>32</wordsize>
> >       <emulator>/usr/lib/xen/bin/qemu-system-i386</emulator>
> >       <loader>/usr/libexec/xen/boot/hvmloader</loader>
> >       <machine>xenfv</machine>
> >       <domain type='xen'/>
> >     </arch>
> >     <features>
> >       <pae/>
> >       <nonpae/>
> >       <acpi default='on' toggle='yes'/>
> >       <apic default='on' toggle='no'/>
> >       <hap default='on' toggle='yes'/>
> >     </features>
> >   </guest>
> >
> >   <guest>
> >     <os_type>hvm</os_type>
> >     <arch name='x86_64'>
> >       <wordsize>64</wordsize>
> >       <emulator>/usr/lib/xen/bin/qemu-system-i386</emulator>
> >       <loader>/usr/libexec/xen/boot/hvmloader</loader>
> >       <machine>xenfv</machine>
> >       <domain type='xen'/>
> >     </arch>
> >     <features>
> >       <acpi default='on' toggle='yes'/>
> >       <apic default='on' toggle='no'/>
> >       <hap default='on' toggle='yes'/>
> >     </features>
> >   </guest>
> 
> I've stared at these guest capabilities for a while but don't see a problem.
> Cole, do you see anything wrong here? Something I'm not doing quite right in the
> libxl driver?

Those capabilities are OK; the UEFI firmware is listed in domcapabilities, and
only for the hvm os type.  As I wrote above, we need to fix this for Xen.
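For context, the firmware entries virt-manager surfaces come from the
<os><loader><value> elements of the domcapabilities XML. A sketch of extracting
them with the stdlib ElementTree; the sample fragment and its firmware path are
illustrative, not copied from a real host:

```python
import xml.etree.ElementTree as ET

# Illustrative domcapabilities fragment, of the shape libvirt returns
# for a Xen HVM (machine="xenfv") query once the fix is in place.
DOMCAPS = """
<domainCapabilities>
  <os supported='yes'>
    <loader supported='yes'>
      <value>/usr/share/qemu/ovmf-x86_64-ms-code.bin</value>
    </loader>
  </os>
</domainCapabilities>
"""

def firmware_paths(domcaps_xml):
    """Return the firmware image paths advertised in domcapabilities XML."""
    root = ET.fromstring(domcaps_xml)
    return [v.text for v in root.findall("./os/loader/value")]
```

A PV query ("xenpv") returns no such loader values, which is why the Firmware
option never appeared for HVM guests when the machine type defaulted to PV.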

I'll send a patch shortly.

Pavel



