[libvirt] [PATCH] qemu: add spice opengl support

Andrea Bolognani abologna at redhat.com
Fri Feb 19 13:09:40 UTC 2016


On Fri, 2016-02-19 at 13:49 +0100, Marc-André Lureau wrote:
> > I don't think we want to merge this before a QEMU version that
> > supports this attribute has been released. Still, it's good to get
> > ready ahead of time :)
> 
> That would delay the support for other projects too
> (virt-manager/boxes etc.). I don't know what rules apply, but
> similarly we added virtio-gpu support in Nov, a month before the
> qemu 2.5 release.
> 
> This gl=yes argument has been in the works for over a year (in
> development branches), and it is simple; I don't think it will change.

Seems to me like any work intended to use features that only exist
in branches should be done, well, in branches :)

> > > @@ -4991,6 +4991,12 @@ qemu-kvm -net nic,model=? /dev/null
> > >                0.8.8</span>); and <code>usbredir</code>
> > >                (<span class="since">since 0.9.12</span>).
> > >              </p>
> > > +            <p>
> > > +              Spice may provide accelerated server-side rendering with
> > > +              OpenGL. You can enable or disable OpenGL support explicitly with
> > > +              the <code>gl</code> attribute.
> > > +              (<span class="since">since 1.3.2</span>).
> > > +            </p>
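
(In domain XML terms, the proposed attribute form would look something
like

    <graphics type='spice' autoport='yes' gl='yes'/>

where autoport is just a stand-in for whatever other attributes happen
to be in use.)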
> > I think this should be a sub-element rather than an attribute,
> > something like
> 
> Any guideline on choosing attribute or sub elements here?

There's no formal guideline I'm aware of, but it looks like all the
SPICE-specific settings, the ones that are not shared with other
<graphics> types, are represented as sub-elements, so I believe we
should follow that lead.
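
To make it concrete, I'm thinking of something along these lines
(element and attribute names are of course just a strawman):

    <graphics type='spice' autoport='yes'>
      <gl enable='yes'/>
    </graphics>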

> > > @@ -6047,6 +6047,17 @@ qemuBuildGraphicsSPICECommandLine(virQEMUDriverConfigPtr cfg,
> > >          }
> > >      }
> > > 
> > > +    if (graphics->data.spice.gl) {
> > > +        if (!virQEMUCapsGet(qemuCaps, QEMU_CAPS_SPICE_GL)) {
> > > +            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
> > > +                           _("This QEMU doesn't support spice OpenGL"));
> > > +            goto error;
> > > +        }
> > > +
> > > +        virBufferAsprintf(&opt, ",gl=%s",
> > > +                          virTristateSwitchTypeToString(graphics->data.spice.gl));
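
(For reference, with gl turned on this would result in a -spice option
along the lines of

    -spice port=5900,addr=127.0.0.1,gl=on

where port and addr are just placeholder values.)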
> > graphics->data.spice.gl is a virTristateBool, yet you're using
> > virTristateSwitchTypeToString() on it.
> > 
> > I know you have "yes" / "no" in the XML but need "on" / "off" for the
> > QEMU attribute, and I know that virTristateBool and virTristateSwitch
> > are basically the same thing, but this still looks weird.
> > 
> > I'd rather use something like
> > 
> >   switch (graphics->data.spice.gl) {
> >       case VIR_TRISTATE_BOOL_YES:
> >           virBufferAddLit(&opt, ",gl=on");
> >           break;
> >       case VIR_TRISTATE_BOOL_NO:
> >           virBufferAddLit(&opt, ",gl=off");
> >           break;
> >       default:
> >           break;
> >   }
> > 
> > but other people probably feel differently :)
> 
> I understand your concern, but having that switch seems worse to me.
> Maybe we should make it statically explicit that VIR_TRISTATE_BOOL_YES
> == VIR_TRISTATE_SWITCH_ON...

Well, it's certainly more verbose but also more explicit. I, for one,
had to do a double take because I didn't notice right away that you
were using virTristateSwitch* in that case, and was wondering how it
could work at all. At the very least, I believe it deserves an
explanatory comment.
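
Something along these lines would do (just a sketch, the wording is up
for grabs):

    /* virTristateBool and virTristateSwitch happen to use the same
     * values (YES==ON, NO==OFF), so formatting a virTristateBool with
     * virTristateSwitchTypeToString() conveniently produces the
     * "on" / "off" spelling QEMU expects instead of "yes" / "no" */
    virBufferAsprintf(&opt, ",gl=%s",
                      virTristateSwitchTypeToString(graphics->data.spice.gl));

and if we wanted to make the equivalence statically explicit, as you
suggest, a verify() check next to the enum definitions should be
enough:

    verify(VIR_TRISTATE_BOOL_YES == VIR_TRISTATE_SWITCH_ON &&
           VIR_TRISTATE_BOOL_NO == VIR_TRISTATE_SWITCH_OFF);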

Let's wait for other people to weigh in on these three remaining
issues...

Cheers.

-- 
Andrea Bolognani
Software Engineer - Virtualization Team



