[Ovirt-devel] Modeling LVM storage
sofsthun at virtualiron.com
Thu Sep 18 21:47:31 UTC 2008
Andrew Cathrow wrote:
> (Sorry for top posting....)
> Aren't we trying to solve a problem that was already solved with cluster aware LVM - cLVM ?
No. There are cluster issues with using standard LVM on shared storage with more than one node. Care must be taken with metadata changes, snapshot usage, etc. These issues could be avoided with cLVM.
The problem here is how foreign disks containing possibly conflicting LVM metadata can be safely imported into a host node that itself uses LVM. cLVM could suffer from the same problems. The faulty assumption is that the host(s) are completely in control of the LVM/cLVM metadata (normally a safe assumption without virtualization).
In the cLVM context, it would be similar to allowing multiple clusters access to the same SAN luns.
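To make the failure mode concrete, here's a sketch (the device names, VG name, and layout are all hypothetical, not taken from a real system):

```shell
# Host's root VG is VolGroup00, living on /dev/sda2.
# A guest-owned LUN appears as /dev/sdb and *also* carries a VG named
# VolGroup00, created by the guest's own installer.
pvscan
# LVM now sees two VGs with the same name. The tools can no longer tell
# which PV belongs to the host's real root VG; metadata operations error
# out, and a reboot may activate the wrong VolGroup00 (or neither).
```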
> ----- Original Message -----
> From: "Perry N. Myers" <pmyers at redhat.com>
> To: "Steve Ofsthun" <sofsthun at virtualiron.com>
> Cc: ovirt-devel at redhat.com
> Sent: Tuesday, September 16, 2008 6:33:10 PM GMT -05:00 US/Canada Eastern
> Subject: Re: [Ovirt-devel] Modeling LVM storage
> Steve Ofsthun wrote:
>> Scott Seago wrote:
>>> Some additional clarification based on a conversation I just had with
>>> No changes from the basic model side of things, but we do want to
>>> explicitly exclude any LVM bits in LUNs assigned directly to VMs. So
>>> if a VM carves up one of its assigned LUNs into multiple LVs, oVirt
>>> doesn't care -- we only show the whole LUN assigned to a VM. The other
>>> thing is that we don't want to explicitly scan all unused LUNs for
>>> PVs/LVs -- rather we should do this on-demand.
>> How would this "on-demand" scanning be done?
>> The only way to selectively scan volumes (with pvscan) is to update
>> /etc/lvm/lvm.conf to restrict scanning to the desired set of devices. This
>> can be a pain to manage, particularly if the local node requires LVM
>> access to boot. If a newly scanned LUN contains volume information that
>> conflicts with an existing volume group (say VolGroup00), the pvscan
>> will error out due to conflicting volume group information (not so
>> bad). More importantly though, if the system is rebooted at this point,
>> the reboot will fail to activate the real VolGroup00, possibly
>> compromising the entire boot process (not so good). Even if a boot
>> volume isn't compromised, some active volume group is no longer accessible.
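For reference, the lvm.conf approach Steve describes looks roughly like this (a sketch only; the device paths are made-up examples):

```
# /etc/lvm/lvm.conf -- restrict LVM scanning to an explicit device list.
# "a|...|" accepts a matching device, the trailing "r|.*|" rejects
# everything else, so pvscan only ever looks at the listed paths.
devices {
    filter = [ "a|^/dev/sda2$|", "a|^/dev/mapper/lun42$|", "r|.*|" ]
}
```

The pain point is exactly as stated: this list has to be kept in sync with every LUN you want to inspect, on every host, without ever excluding a device the host needs to boot.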
> Good points. One thing we could do is hijack a Node to do the scanning
> for us, since Nodes will never require LVM to boot and shouldn't have
> anything in /etc/lvm/lvm.conf. But regardless, the process is somewhat
> convoluted, I agree.
>> A safer alternative would be to modify pvscan to allow selective volume
>> scanning without changing /etc/lvm/lvm.conf. Is this a possible
>> alternative? Would a modified pvscan be acceptable in upstream lvm?
> I agree this would be a much better alternative, but have no idea how
> difficult it would be to get this upstream.
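One possibly relevant middle ground (hedged: I haven't verified how far back LVM2 supports this, or whether it fully sidesteps the conflict problem) is the `--config` override that LVM commands accept, which applies a one-off filter without editing /etc/lvm/lvm.conf. The device path below is an example, not a real LUN:

```shell
# Scan only the named LUN, rejecting everything else, for this one
# invocation -- /etc/lvm/lvm.conf on disk is left untouched.
pvscan --config 'devices { filter = [ "a|^/dev/mapper/lun42$|", "r|.*|" ] }'
```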
>> Another alternative, as Daniel mentioned, is to not allow a guest direct
>> access to the entire LUN.
> The only reason I had for allowing direct LUN access is for hypervisors
> that don't understand LVM but can use iSCSI. Not sure if this niche case
> warrants the pain associated with direct LUN access. For example, if we
> eventually get libvirt support for VMware ESX Nodes, we wouldn't be able to
> give them iSCSI storage, since they wouldn't understand what to do with an
> LVM volume on an iSCSI LUN. We'd have to stick with providing them NFS
> file-based images. (I believe the same problems will occur with Fibre
> Channel attached storage as well, if you want to do LVM on top of FC.)
> If we don't care about that case, then I would be in favor of always
> creating a VG on every LUN in oVirt. If a VM wants the 'entire LUN', we
> create a single LV on the VG and give it to the VM.