[Ovirt-devel] Questions about Xen support

Perry N. Myers pmyers at redhat.com
Wed May 28 01:18:37 UTC 2008


Pierre Inglebert wrote:
> Hi ovirt,
> 
> I'm working on Xen support for oVirt, so first I want to know whether
> anyone else is working on it or is interested in doing so.

Pierre,

Thanks for writing.  We haven't started implementing oVirt for Xen yet, 
but it is on our roadmap.  We definitely want to be able to support a Xen 
host as a managed node in an oVirt network.

> Thanks to libvirt, it is quite easy to make oVirt work with either Xen
> or KVM, but not with both, because there are some parts hardcoded for
> KVM.
> 
> These hardcoded parts are in task-omatic (especially in task_vm.rb);
> libvirt connection URIs such as "qemu+tcp://" don't work with Xen, so
> I'm trying to find a way to dynamically modify this URI (e.g. change it
> to "xen+tcp://" for Xen).
> 
> To do this, we have to know the hypervisor type before connecting.  We
> also have to know what type of VM we are creating when several
> hypervisor types are available, and for migration the hosts need to
> have the same hypervisor.

I agree with the first part, but not necessarily the second.

For the first part...  What we should do is provide a way for a managed 
node to identify itself to the oVirt Server.  Right now, when a Managed 
Node using KVM is booted, it contacts the oVirt Server (located using DNS 
SRV records) to tell it that it is alive.  This mechanism could be 
extended to also provide other information to the oVirt Server, like what 
type of hypervisor the node is running.  (So instead of just a HELLO 
message it could be something like "HELLO:KVM" or "HELLO:XEN".)
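
A minimal sketch of what an extended node-side HELLO could look like, 
assuming the server is still located via a DNS SRV record.  The SRV name, 
port handling and plain-TCP framing below are assumptions, not the 
existing init script behavior:

  require 'socket'
  require 'resolv'

  # Hypothetical SRV record name; the real record may differ.
  SRV_NAME = '_ovirt-identify._tcp.example.com'

  # hv_type would come from boot-time detection (see the detection sketch
  # further down in this mail).
  def send_hello(hv_type)
    srv = Resolv::DNS.open do |dns|
      dns.getresource(SRV_NAME, Resolv::DNS::Resource::IN::SRV)
    end
    TCPSocket.open(srv.target.to_s, srv.port) do |sock|
      sock.puts "HELLO:#{hv_type}"    # e.g. "HELLO:KVM" or "HELLO:XEN"
    end
  end

  send_hello('KVM')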

The daemon on the oVirt Server that is listening for the HELLO is called 
the ovirt-host-keyadd daemon.  Darryl Pierce is going to be working with 
this daemon and the ovirt-host-browser daemon to merge their functionality 
(and get rid of the Avahi notification process) so it would be worthwhile 
to discuss this in depth with him.

This identification is done presently in an init script on the managed 
node that is placed there by the node kickstart.  This should really be 
moved to an RPM instead of being placed in a kickstart file, and it should 
be made a little more generic so that it can be used on either Xen or KVM 
hosts.

Once we have the ability for a node to identify to the oVirt Server what 
kind of hypervisor it is running, then we need to be able to store that 
information in the database and display it in the WUI.  Then, when 
taskomatic actions are requested, it can use that information to 
determine the right libvirt connection URI.
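
As a rough illustration, taskomatic could map a stored hypervisor type to 
the right libvirt URI instead of hardcoding the qemu one.  The host 
attributes used here (hypervisor_type, hostname) are assumptions about 
the schema, not existing code:

  require 'libvirt'

  # Hypothetical helper for task_vm.rb; attribute names are assumed.
  def connection_uri(host)
    case host.hypervisor_type
    when 'KVM' then "qemu+tcp://#{host.hostname}/system"
    when 'XEN' then "xen+tcp://#{host.hostname}/"
    else raise "unsupported hypervisor type: #{host.hypervisor_type}"
    end
  end

  # Illustrative use with a stand-in host object:
  ExampleHost = Struct.new(:hostname, :hypervisor_type)
  host = ExampleHost.new('node3.example.com', 'XEN')
  conn = Libvirt::open(connection_uri(host))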

The second part about not being able to move guests from one managed node 
to another is not so clear...  Initially I agree that this should be the 
case, since disk image formats and configuration files will differ 
somewhat.  We may want to simplify the initial support of multiple HV 
types by allowing hardware pools to contain HVs of only one type; i.e., if 
you have both Managed Nodes (KVM) and RHEL 5.1 hosts (Xen), you could not 
have both of these host types in the same hardware pool.  They would need 
to go in separate HW pools with like HV hosts.
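
If we do go with homogeneous pools at first, the WUI could enforce that 
with something like the following Rails-style validation.  This is purely 
a hypothetical sketch; the model and column names (hypervisor_type, 
hardware_pool) are assumptions, not the current schema:

  # Hypothetical sketch only; names are assumptions, not current schema.
  class Host < ActiveRecord::Base
    belongs_to :hardware_pool

    validate :hypervisor_matches_pool

    def hypervisor_matches_pool
      return if hardware_pool.nil?
      peers = hardware_pool.hosts.reject { |h| h.id == id }
      unless peers.all? { |h| h.hypervisor_type == hypervisor_type }
        errors.add(:hardware_pool,
                   "may only contain hosts with the same hypervisor type")
      end
    end
  end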

However, there is work going on in another project to provide tools to 
migrate guests from one HV to another.  Once we have these tools linked in 
to oVirt, we should be able to run a guest on any HV that we support and 
remove the restriction that hardware pools need to contain nodes with the 
same HV type.

> For me, there are 2 solutions.
> The first solution is to allow a host to have more than one
> hypervisor.  This solution requires some changes:
> - the VM will have a hypervisor type

Agree with this, but eventually we'll allow this to be changed manually or 
even dynamically as the tools for guest HV migrations evolve.

> - On the host, a list of available hypervisors (possibly dynamic?)

What do you mean by host here?  If host == Node (as in Managed Node), then 
there is not a choice.  RHEL will only support Xen and the oVirt Managed 
Node will only support KVM.  I suppose Fedora could support both, but this 
could be detected automatically on node boot (i.e. if you're running a Xen 
kernel, say HELLO:XEN; if you detect hardware virt support and are not in 
a Xen kernel, say HELLO:KVM to the oVirt Server).
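
The boot-time check could be as simple as the sketch below.  The /proc 
paths and CPU flags are standard Linux/Xen conventions, but this is not 
existing oVirt code:

  # Sketch of boot-time hypervisor detection; not existing oVirt code.
  def detect_hypervisor
    # A Xen kernel exposes /proc/xen (e.g. /proc/xen/capabilities in dom0).
    return 'XEN' if File.directory?('/proc/xen')
    # Otherwise hardware virt support (Intel VT "vmx" / AMD-V "svm") means
    # KVM is usable.
    return 'KVM' if File.read('/proc/cpuinfo') =~ /\b(vmx|svm)\b/
    nil
  end

  hv = detect_hypervisor
  abort 'no supported hypervisor detected' unless hv
  puts "would send HELLO:#{hv}"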

> - For the user, we have to ask about hypervisor/arch on the VM creation
> form (from the list of available hypervisors (dynamic listing will be
> hard))

Agree.  We should be able to list only the hypervisors that are present in 
a given hardware pool (which for the initial implementation will be 
restricted by the fact that hardware pools have to be homogeneous).

> - for the VM creation/migration task, we have to check if the Host is
> compatible with the VM, 

Eventually, but for initial implementation this can be omitted.

> - during the Host addition, we have to get all the hypervisor
> capabilities.

Agreed

> 
> The second solution is to keep the restriction of one hypervisor per
> Host.
> We can separate 2 cases:
> - One Hypervisor per Host collection:
>    - Check the availability of the hypervisor during the Host addition.
>    - Move the Hypervisor Type to the Host collection (currently on Host)

Don't know if this should be automatic.  When a new node appears in the 
oVirt WUI it should be identified by what HV is running on it, and it 
should only be permitted to move to a HW pool containing like HVs.
	
> 
> - One Hypervisor per Host (Multiple per Collection):
>    - Ask the user which hypervisor to use during the Host addition
> (possibly choosing it from a list)

This isn't necessary since the host can tell you what HV it's using.  One 
thing this brings up, though... For Fedora, since you can use either Xen 
or KVM, you could conceivably toggle the Node back and forth.  If we 
decide to initially implement using homogeneous HW pools, we will have to 
automatically remove the Node from its assigned HW pool if it switches HV 
types...
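
One way the server-side handler might react when a node reports a 
different HV type than the one on record, assuming homogeneous pools.  
The finder, column and default_pool names below are all assumptions:

  # Hypothetical sketch; finder, column and default_pool names are assumed.
  def handle_hello(hostname, reported_hv)
    host = Host.find_by_hostname(hostname)
    return if host.nil?   # unknown node: normal registration path applies

    if host.hypervisor_type != reported_hv
      # With homogeneous pools the node can no longer stay where it was
      # assigned, so drop it back into the default pool and record the
      # new hypervisor type.
      host.hardware_pool = HardwarePool.default_pool
      host.hypervisor_type = reported_hv
      host.save!
    end
  end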

>    - For the VM creation, ask the user about the hypervisor type from
> the hypervisor list of the Host Collection.
>    - For migration, check available hosts in the Collection.
> 
> So I would like to know what you think about it.

Good thoughts.  If you want to start looking at creating patches to 
support Xen, it would probably make sense to start with the code on the 
Managed Node for identifying the node to the oVirt Server.  This needs to 
be extracted from the kickstart, made into an RPM and then modified so 
that it identifies the HV type when it communicates to the server.

We also want to eventually send hardware capability information (dmidecode 
info, storage/network devices, processor flags, etc.).  So we need to give 
some thought to how we should package up that 
hardware information for transmission to the server.  The HV type can be 
part of that information that is sent.  There is other work being done 
right now on the hardware enumeration task, so I'd just focus on detecting 
the HV type and sending that data.  We can merge in the additional 
hardware info later.
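
Just to give a sense of where the HV type could live once more hardware 
data is added, here is one possible shape for the identification payload.  
The field names and the YAML framing are purely illustrative; nothing 
about the wire format has been decided:

  require 'socket'
  require 'yaml'

  # Purely illustrative payload shape; field names are assumptions.
  def identification_message(hv_type)
    {
      'hostname'   => Socket.gethostname,
      'hypervisor' => hv_type,
      # later: 'cpu_flags', 'dmidecode', 'storage_devices', 'network_devices'
    }.to_yaml
  end

  puts identification_message('KVM')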

Perry

-- 
|=-        Red Hat, Engineering, Emerging Technologies, Boston        -=|
|=-                     Email: pmyers at redhat.com                      -=|
|=-         Office: +1 412 474 3552   Mobile: +1 703 362 9622         -=|
|=- GnuPG: E65E4F3D 88F9 F1C9 C2F3 1303 01FE 817C C5D2 8B91 E65E 4F3D -=|



