[PATCH v1] Add basic driver for the Cloud-Hypervisor

Daniel P. Berrangé berrange at redhat.com
Tue May 11 09:05:31 UTC 2021


On Tue, Apr 27, 2021 at 02:39:46PM -0700, William Douglas wrote:
> Cloud-Hypervisor is a hypervisor that uses KVM for virtualization. It
> functions similarly to qemu, and the libvirt Cloud-Hypervisor driver
> uses a structure very similar to the libvirt qemu driver.
> 
> The biggest difference from the libvirt perspective is that the
> "monitor" socket is separated into two sockets: one that commands are
> issued to and one that events are notified from. The current
> implementation only uses the command socket (a REST API with
> JSON-encoded data), with future changes to add support for the event
> socket (to better handle shutdowns initiated from inside the VM).
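
Just to check my understanding of the command socket: is driving it
essentially equivalent to the standalone sketch below? The socket path
and the /api/v1/vm.boot endpoint here are my guesses for illustration,
not taken from your patch:

  /* Sketch of issuing a command to the ch REST socket with libcurl.
   * The socket path and endpoint are assumptions for illustration. */
  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
      CURL *curl = curl_easy_init();
      CURLcode rc;

      if (!curl)
          return 1;

      /* commands travel over a local UNIX socket, not TCP */
      curl_easy_setopt(curl, CURLOPT_UNIX_SOCKET_PATH,
                       "/var/run/libvirt/ch/guest-api.socket");
      /* the hostname is ignored once a UNIX socket path is set */
      curl_easy_setopt(curl, CURLOPT_URL,
                       "http://localhost/api/v1/vm.boot");
      curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");

      rc = curl_easy_perform(curl);
      if (rc != CURLE_OK)
          fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

      curl_easy_cleanup(curl);
      return rc == CURLE_OK ? 0 : 1;
  }

And presumably the JSON body passed to vm.create is where the bulk of
the domain XML conversion ends up?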
> 
> This patch adds support for the following initial VM actions using the
> Cloud-Hypervisor API:
>  * vm.create
>  * vm.delete
>  * vm.boot
>  * vm.shutdown
>  * vm.reboot
>  * vm.pause
>  * vm.resume
> 
> To use the Cloud-Hypervisor driver, the v0.14.0 release of
> Cloud-Hypervisor needs to be installed.
> 
> Some additional notes:
>  * The curl handle is persistent but not useful to detect ch process
>  shutdown/crash (a future patch will address this shortcoming)
>  * On a 64-bit host Cloud-Hypervisor needs to support PVH and so can
>  emulate 32-bit mode but it isn't fully tested (a 64-bit kernel and
>  32-bit userspace is fine, a 32-bit kernel is unsupported)

When you say "unsupported", do you merely mean lack of testing, or
are they known functional issues ?  With any full virt hypervisors
I've seen 32-on-64 "just works" in general.  Is  PVH the /only/
boot mode supported, or does it support the traditional/legacy x86
boot sequence starting from real mode.


>  * The pause that ch implements pauses execution of the guest CPUs
>  (and disk/network I/O threads), which seems to align with
>  https://wiki.libvirt.org/page/VM_lifecycle#Pausing_a_guest_domain

Yes, "pause" in libvirt terminology merely involves stopping guest
CPUs. Ideally anything else related to the VM that consumes CPU would
also cease, but stopping CPUs is the only mandatory part.

Think of pause as kind of like "suspend to RAM".
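
From an application's point of view the whole cycle is then no more
than the following standalone sketch against the public API (the
ch:///system URI and the "demo" guest name are placeholders I'm
assuming purely for illustration):

  /* Illustration only: pause/resume via the public libvirt API.
   * The "ch:///system" URI and the guest name "demo" are assumptions. */
  #include <stdio.h>
  #include <libvirt/libvirt.h>

  int main(void)
  {
      virConnectPtr conn = virConnectOpen("ch:///system");
      virDomainPtr dom = NULL;
      int state = 0, reason = 0;
      int ret = 1;

      if (conn && (dom = virDomainLookupByName(conn, "demo"))) {
          if (virDomainSuspend(dom) == 0 &&   /* stops the guest CPUs */
              virDomainGetState(dom, &state, &reason, 0) == 0) {
              printf("state=%d (VIR_DOMAIN_PAUSED=%d)\n",
                     state, VIR_DOMAIN_PAUSED);
              virDomainResume(dom);           /* CPUs run again */
              ret = 0;
          }
          virDomainFree(dom);
      }
      if (conn)
          virConnectClose(conn);
      return ret;
  }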

>  * Updated spec file to disable ch driver by default

Is cloud-hypervisor present in any Linux OS distributions yet, or are
there plans for that? It makes it much more likely that someone will
test the libvirt side if it is actually packaged in common distros.


> diff --git a/include/libvirt/virterror.h b/include/libvirt/virterror.h
> index 524a7bf9e8..57986931fd 100644
> --- a/include/libvirt/virterror.h
> +++ b/include/libvirt/virterror.h
> @@ -136,6 +136,7 @@ typedef enum {
>  
>      VIR_FROM_TPM = 70,          /* Error from TPM */
>      VIR_FROM_BPF = 71,          /* Error from BPF code */
> +    VIR_FROM_CH = 72,           /* Error from Cloud-Hypervisor driver */

Is "CH" the most common abbreviation / shorthand for refering
to Cloud-Hypervisor ? 

> diff --git a/libvirt.spec.in b/libvirt.spec.in
> index da7af2824e..4eeb4dc1b4 100644
> --- a/libvirt.spec.in
> +++ b/libvirt.spec.in
> @@ -244,6 +244,9 @@ Requires: libvirt-daemon-driver-lxc = %{version}-%{release}
>  %if %{with_qemu}
>  Requires: libvirt-daemon-driver-qemu = %{version}-%{release}
>  %endif
> +%if %{with_ch}
> +Requires: libvirt-daemon-driver-ch = %{version}-%{release}
> +%endif
>  # We had UML driver, but we've removed it.
>  Obsoletes: libvirt-daemon-driver-uml <= 5.0.0
>  Obsoletes: libvirt-daemon-uml <= 5.0.0
> @@ -761,6 +764,20 @@ QEMU
>  %endif
>  
>  
> +%if %{with_ch}
> +%package daemon-driver-ch
> +Summary: Cloud-Hypervisor driver plugin for the libvirtd daemon
> +Requires: libvirt-daemon = %{version}-%{release}
> +Requires: libvirt-libs = %{version}-%{release}
> +Requires: /usr/bin/cloud-hypervisor
> +
> +%description daemon-driver-ch
> +The Cloud-Hypervisor driver plugin for the libvirtd daemon,
> +providing an implementation of the hypervisor driver APIs
> +using Cloud-Hypervisor
> +%endif
> +
> +
>  %if %{with_lxc}
>  %package daemon-driver-lxc
>  Summary: LXC driver plugin for the libvirtd daemon
> @@ -1003,6 +1020,12 @@ exit 1
>      %define arg_qemu -Ddriver_qemu=disabled
>  %endif
>  
> +%if %{with_ch}
> +    %define arg_ch -Ddriver_ch=enabled
> +%else
> +    %define arg_ch -Ddriver_ch=disabled
> +%endif
> +
>  %if %{with_openvz}
>      %define arg_openvz -Ddriver_openvz=enabled
>  %else
> @@ -1146,6 +1169,7 @@ export SOURCE_DATE_EPOCH=$(stat --printf='%Y' %{_specdir}/%{name}.spec)
>  %meson \
>             -Drunstatedir=%{_rundir} \
>             %{?arg_qemu} \
> +           %{?arg_ch} \
>             %{?arg_openvz} \
>             %{?arg_lxc} \
>             %{?arg_vbox} \
> @@ -1771,6 +1795,13 @@ exit 0
>  %{_mandir}/man8/virtqemud.8*
>  %endif
>  
> +%if %{with_ch}
> +%files daemon-driver-ch
> +%dir %attr(0700, root, root) %{_localstatedir}/log/libvirt/ch/
> +%ghost %dir %{_rundir}/libvirt/ch/
> +%{_libdir}/%{name}/connection-driver/libvirt_driver_ch.so
> +%endif
> +
>  %if %{with_lxc}
>  %files daemon-driver-lxc
>  %config(noreplace) %{_sysconfdir}/sysconfig/virtlxcd

Nothing enables "with_ch" and AFAIK cloud-hypervisor isn't in
either RHEL or Fedora, so we don't need anything in the RPM
until it needs to actually be enabled.

> diff --git a/meson.build b/meson.build
> index 951da67896..8c33efb25c 100644
> --- a/meson.build
> +++ b/meson.build
> @@ -1517,6 +1517,48 @@ elif get_option('driver_lxc').enabled()
>    error('linux and remote_driver are required for LXC')
>  endif
>  
> +if not get_option('driver_ch').disabled() and host_machine.system() == 'linux'
> +  use_ch = true
> +
> +  if not conf.has('WITH_LIBVIRTD')
> +    use_ch = false
> +    if get_option('driver_ch').enabled()
> +      error('libvirtd is required to build Cloud-Hypervisor driver')
> +    endif
> +  endif
> +
> +  if not yajl_dep.found()
> +    use_ch = false
> +    if get_option('driver_ch').enabled()
> +      error('YAJL 2 is required to build Cloud-Hypervisor driver')
> +    endif
> +  endif
> +
> +  if not curl_dep.found()
> +    use_ch = false
> +    if get_option('driver_ch').enabled()
> +      error('curl is required to build Cloud-Hypervisor driver')
> +    endif
> +  endif
> +
> +  if use_ch
> +    conf.set('WITH_CH', 1)
> +
> +    default_ch_user = 'root'
> +    default_ch_group = 'root'
> +    ch_user = get_option('ch_user')
> +    if ch_user == ''
> +      ch_user = default_ch_user
> +    endif
> +    ch_group = get_option('ch_group')
> +    if ch_group == ''
> +      ch_group = default_ch_group
> +    endif
> +    conf.set_quoted('CH_USER', ch_user)
> +    conf.set_quoted('CH_GROUP', ch_group)
> +  endif
> +endif

This logic will end up enabling the cloud-hypervisor driver by default
on all architectures. You mention earlier that it only targets 64-bit.
What about the more obscure 64-bit archs like s390x/ppc64? This will
build the driver on those archs right now. Does this need an arch
restriction?
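
e.g. something roughly like the following, where the arch list is just
my guess at what you actually intend to cover:

  ch_supported_arches = ['x86_64', 'aarch64']

  if not get_option('driver_ch').disabled() and host_machine.system() == 'linux' and ch_supported_arches.contains(host_machine.cpu_family())

with the rest of your existing block unchanged underneath.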



> +/* Functions */
> +virCaps *virCHDriverCapsInit(void)
> +{
> +    virCaps *caps;
> +    virCapsGuest *guest;
> +    g_autofree char *ch_cmd = NULL;
> +
> +    if ((caps = virCapabilitiesNew(virArchFromHost(),
> +                                   false, false)) == NULL)
> +        goto cleanup;
> +
> +    if (!(caps->host.numa = virCapabilitiesHostNUMANewHost()))
> +        goto cleanup;
> +
> +    if (virCapabilitiesInitCaches(caps) < 0)
> +        goto cleanup;
> +
> +    if ((guest = virCapabilitiesAddGuest(caps,
> +                                         VIR_DOMAIN_OSTYPE_HVM,
> +                                         caps->host.arch,
> +                                         NULL,
> +                                         NULL,
> +                                         0,
> +                                         NULL)) == NULL)
> +        goto cleanup;

So on 64-bit archs, you're only advertising support for 64-bit
guests, not advertising 32-bit. I guess this matches what you
say earlier about not supporting 32-on-64.

> +
> +    if (virCapabilitiesAddGuestDomain(guest,
> +                                      VIR_DOMAIN_VIRT_KVM,
> +                                      NULL,
> +                                      NULL,
> +                                      0,
> +                                      NULL) == NULL)
> +        goto cleanup;
> +
> +    ch_cmd = g_find_program_in_path(CH_CMD);
> +    if (!ch_cmd)
> +        goto cleanup;

You're not using this anywhere. Did you mean to pass it as
the emulator arg to virCapabilitiesAddGuest a few lines
earlier?
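
i.e. I'd expect the lookup to move up and its result to be passed in,
something along these lines (just re-using your existing calls):

    ch_cmd = g_find_program_in_path(CH_CMD);
    if (!ch_cmd)
        goto cleanup;

    if ((guest = virCapabilitiesAddGuest(caps,
                                         VIR_DOMAIN_OSTYPE_HVM,
                                         caps->host.arch,
                                         ch_cmd,   /* emulator binary */
                                         NULL,
                                         0,
                                         NULL)) == NULL)
        goto cleanup;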



> +/* Function Tables */
> +static virHypervisorDriver chHypervisorDriver = {
> +    .name = "CH",
> +    .connectURIProbe = chConnectURIProbe,
> +    .connectOpen = chConnectOpen,                           /* 6.7.0 */
> +    .connectClose = chConnectClose,                         /* 6.7.0 */
> +    .connectGetType = chConnectGetType,                     /* 6.7.0 */
> +    .connectGetVersion = chConnectGetVersion,               /* 6.7.0 */
> +    .connectGetHostname = chConnectGetHostname,             /* 6.7.0 */
> +    .connectNumOfDomains = chConnectNumOfDomains,           /* 6.7.0 */
> +    .connectListAllDomains = chConnectListAllDomains,       /* 6.7.0 */
> +    .connectListDomains = chConnectListDomains,             /* 6.7.0 */
> +    .connectGetCapabilities = chConnectGetCapabilities,     /* 6.7.0 */
> +    .domainCreateXML = chDomainCreateXML,                   /* 6.7.0 */
> +    .domainCreate = chDomainCreate,                         /* 6.7.0 */
> +    .domainCreateWithFlags = chDomainCreateWithFlags,       /* 6.7.0 */
> +    .domainShutdown = chDomainShutdown,                     /* 6.7.0 */
> +    .domainShutdownFlags = chDomainShutdownFlags,           /* 6.7.0 */
> +    .domainReboot = chDomainReboot,                         /* 6.7.0 */
> +    .domainSuspend = chDomainSuspend,                       /* 6.7.0 */
> +    .domainResume = chDomainResume,                         /* 6.7.0 */
> +    .domainDestroy = chDomainDestroy,                       /* 6.7.0 */
> +    .domainDestroyFlags = chDomainDestroyFlags,             /* 6.7.0 */
> +    .domainDefineXML = chDomainDefineXML,                   /* 6.7.0 */
> +    .domainDefineXMLFlags = chDomainDefineXMLFlags,         /* 6.7.0 */
> +    .domainUndefine = chDomainUndefine,                     /* 6.7.0 */
> +    .domainUndefineFlags = chDomainUndefineFlags,           /* 6.7.0 */
> +    .domainLookupByID = chDomainLookupByID,                 /* 6.7.0 */
> +    .domainLookupByUUID = chDomainLookupByUUID,             /* 6.7.0 */
> +    .domainLookupByName = chDomainLookupByName,             /* 6.7.0 */
> +    .domainGetState = chDomainGetState,                     /* 6.7.0 */
> +    .domainGetXMLDesc = chDomainGetXMLDesc,                 /* 6.7.0 */
> +    .domainGetInfo = chDomainGetInfo,                       /* 6.7.0 */
> +    .domainIsActive = chDomainIsActive,                     /* 6.7.0 */
> +    .nodeGetInfo = chNodeGetInfo,                           /* 6.7.0 */
> +};

FWIW, the version comments can be bumped just before sending each new
version of the patch, to match whichever release is in development.
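
e.g. if this were to land during the 7.4.0 development cycle, each
annotation would simply become:

    .connectOpen = chConnectOpen,                           /* 7.4.0 */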

> +static virConnectDriver chConnectDriver = {
> +    .localOnly = true,
> +    .uriSchemes = (const char *[]){"ch", "Cloud-Hypervisor", NULL},

AFAIK, you only handle the ch:// scheme, not Cloud-Hypervisor://, so
only the former should be listed.
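
i.e. just:

    .uriSchemes = (const char *[]){"ch", NULL},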



In general my impression is that this looks like a decent starting
point for a libvirt driver. It shows enough to demonstrate the
overall architecture, and you've followed normal libvirt coding
style and architectural designs pretty well.

My preference is to get something merged sooner rather than later,
to make it easier to then get on with ongoing incremental development,
without having to carry around a huge out-of-tree patch.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



