[Ovirt-devel] I need oVirt!

Mark Nielsen mnielsen at redhat.com
Tue Mar 11 14:34:35 UTC 2008


On Tue, 2008-03-11 at 13:59 +0000, Daniel P. Berrange wrote:

> 
> So would this be a fair summary of your networking setup:
> 
>  eth0 + eth1 -> bond0    (cluster suite / management)
>  eth2 + eth3 -> bond1    (guest connectivity)
> 
>  bond1.1 + br01  => Bridged VLAN 1  (guest lan)
>  bond1.2 + br02  => Bridged VLAN 2  (guest lan)
>  bond1.3 + br03  => Bridged VLAN 3  (guest lan)
>  bond1.4 + br04  => Bridged VLAN 4  (guest lan)
>  bond1.5 + br05  => Bridged VLAN 5  (guest lan)
> 
Yes, nearly exactly (names have been changed to protect the innocent).
One note of possible interest: only bond1.1 has an IP address, which is
for access to dom0. No guests use this bridge. None of the additional
bridges have an IP address on dom0. The guests use these various VLANs
for all their networking (either "internal" for the VM cluster comms or
external for direct network access to the guest). In most cases, dom0
won't be able to connect to the domU.
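
For reference, the network-scripts side of one of those bridged VLANs
looks roughly like this (device names and the address are illustrative,
not our real ones):

  # /etc/sysconfig/network-scripts/ifcfg-bond1.1  -- VLAN 1 tagged on the guest bond
  DEVICE=bond1.1
  VLAN=yes
  BRIDGE=br01
  ONBOOT=yes
  BOOTPROTO=none

  # /etc/sysconfig/network-scripts/ifcfg-br01  -- the only bridge with an IP, for dom0 access
  DEVICE=br01
  TYPE=Bridge
  ONBOOT=yes
  BOOTPROTO=static
  IPADDR=192.0.2.10
  NETMASK=255.255.255.0

br02 through br05 look the same except they use BOOTPROTO=none and have
no IPADDR/NETMASK lines.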

> We are currently only supporting iSCSI, but direct attached storage is the
> next thing on the feature list for storage support. This is pending HBA
> support in libvirt - reasonably straightforward to do. Once this is done, then
> LVM is the logical next step. 
> 
We have a "DR" cluster that won't go to the DR site for nearly a year.
This is my 'playground' for now, I can test anything you want to try
without worry about killing the whole cluster and having to re-install.

> Can you describe in more detail the way you carve up your storage ?  The
> relationship between LUNS, volume groups and guests ?  1 LUN + VG per
> guest, or many LUNs in 1 VG, split across many guests, or something else ?
> Also, do you use multipathing or RAID at all ?

VolGrp00 is all local for the host OS. VolGrp01 is 10+TB for *guest*
shared storage (virtual cluster GFS partitions for data), presented as
a second disk to the VMs where needed (shared writable, 'w!'). VolGrp02
is 1+TB for VM root partitions. We lvcreate separate 15G partitions for
all guests, then present each as
disk = [ "phy:/dev/VolGrp02/LVTestVM,xvda,w" ] and kickstart a VM onto
it using virt-install. Multipathing is done on dom0, RAID is done on
the SAN. The LUNs presented from the SAN are numerous and vary somewhat
in size. They do try to present a consistent size, but that doesn't
always happen. I pvcreate them (/dev/mapper/mpath0 and so on), then
vgcreate multiple SAN LUNs (each a PV) into one VG, then lvcreate to
slice them up as needed for guests. I also use LVM inside the guests; I
decided it was easier to use vg|lvextend inside the guest than on dom0,
and I haven't seen any data indicating LVM over LVM has a significant
performance impact. I do run into issues because I use the same naming
convention (VolGrp00), which prevents me from mounting a guest disk on
dom0. That's fine; I don't want anyone on dom0 anyway, and I don't want
dom0 being used to mount guest images when there are security concerns
about some of our data. I have a "rescue" VM that uses VG0/LV0 as its
naming convention. If I have a problematic guest disk/image/LV, I
present it as a second disk= to my rescue guest and fsck it there.
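
To make that concrete, the dom0 side of the workflow is roughly the
following (volume and guest names here are placeholders, not our real
ones):

  # one PV per multipathed SAN LUN, pooled into the VM root VG
  pvcreate /dev/mapper/mpath0 /dev/mapper/mpath1
  vgcreate VolGrp02 /dev/mapper/mpath0 /dev/mapper/mpath1

  # carve out a 15G root disk for a new guest, then kickstart onto it with virt-install
  lvcreate -L 15G -n LVTestVM VolGrp02

  # guest config: LV root disk, plus a shared GFS data disk from VolGrp01 where needed
  disk = [ "phy:/dev/VolGrp02/LVTestVM,xvda,w",
           "phy:/dev/VolGrp01/LVGFSData,xvdb,w!" ]

  # rescue case: hand a problematic guest LV to the rescue VM as its second disk
  disk = [ "phy:/dev/VolGrp02/LVRescue,xvda,w",
           "phy:/dev/VolGrp02/LVProblemGuest,xvdb,w" ]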
> 
> > 2) I'm using RHEL 5.1 and need to make sure oVirt plays well with Xen.
> 
> For the guest that should not be a problem. oVirt is using KVM fullvirt
> which can run more or less any guest. There are now paravirt drivers
> available for KVM backported to 2.6.18. Or alternatively there is Xenner
> which lets us run Xen paravirt guests in KVM.
> 
Another issue I have is the winding roadmap for virt! A FAQ would be
nice to explain all the terms, the differences between the various
technologies, and where it is all going. I can't even tell my customer
whether or not the virtualization solution we're using now will still
be valid in RHEL 6.

> For managed nodes (ie where the guests run) we provide a Fedora 8 based 
> stateless image, but the core requirement is basically libvirtd daemon
> version 0.4.1 or later, so that can be built around other OS if needed.
> 
> > 3) My guests are currently services of Cluster Suite, I'm hoping that
> > oVirt will play well with Cluster Suite as well as Conga...
> 
> So, this is clustering at the physical host level ? ie if a physical
> host dies, cluster suite moves all your VMs to another host ? If so,
> that is definitely something on our roadmap. We won't necessarily use
> Conga for this - we may integrate the low level clustering infrastructure
> directly into the oVirt images.
> 
Yes, exactly, though I often use clusvcadm and other cluster commands
when Conga fails me (which it doesn't seem to do as often as it used to).
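
When I do drop to the command line it is the usual rgmanager tooling,
something like this (service and host names are made up):

  # see where the cluster-managed VM services are running
  clustat

  # relocate a VM service to another host, or restart it in place
  clusvcadm -r vm:testvm -m host02.example.com
  clusvcadm -R vm:testvm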

> For clustering inside the guest, we're intending to work with the cluster
> guys to provide some form of generic fence device to all guests. The goal
> is to allow the guest administrator to setup clustering of services inside
> their guests without needing co-ordination with the host admin.
> 
> > I think oVirt is what I'm looking for to manage my VMs and hosts, but
> > please feel free to wave your hand in front of my face and tell me if
> > this is not the tool I'm looking for.
> 
> oVirt is at a very early stage of development, so as you can see we can't
> currently meet all your feature requests yet. Everything you mention is
> definitely on our roadmap - you outline precisely the kind of deployment
> scenarios we want & need to be able to handle.
> 
I'd love to be a use/test case. There really isn't anything I see that
allows us to manage our VMs, and as our numbers climb into the
hundreds, we will need this functionality.

Mark


> Regards,
> Dan.
> -- 
> |: Red Hat, Engineering, Boston   -o-   http://people.redhat.com/berrange/ :|
> |: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
> |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
> |: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|



