[libvirt-users] Ceph RBD locking for libvirt-managed LXC (someday) live migrations

Joshua Dotson josh at wrale.com
Wed Jan 15 22:47:35 UTC 2014


Hi,

I'm trying to build an active/active virtualization cluster that uses a Ceph
RBD image as the backing store for each libvirt-managed LXC container.  I know
live migration for LXC isn't yet possible, but I'd like to build my
infrastructure as if it were.  That is, I want to be sure proper locking is in
place for live migrations to someday take place.  In other words, I'm building
things as if I were using KVM and live migration via libvirt.
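
For concreteness, here is a rough sketch of the sort of guest definition I
have in mind, using the libvirt Python bindings with an RBD-backed disk.  The
pool, image and monitor names are placeholders of my own, and cephx
authentication is left out for brevity:

    # Rough sketch only: define a KVM guest whose disk is an RBD image,
    # via the libvirt Python bindings.  Pool, image and monitor names are
    # placeholders, and cephx authentication is omitted for brevity.
    import libvirt

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>guest01</name>
      <memory unit='MiB'>1024</memory>
      <vcpu>1</vcpu>
      <os>
        <type arch='x86_64'>hvm</type>
      </os>
      <devices>
        <disk type='network' device='disk'>
          <driver name='qemu' type='raw'/>
          <source protocol='rbd' name='libvirt-pool/guest01'>
            <host name='ceph-mon1' port='6789'/>
          </source>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(DOMAIN_XML)   # persistent definition; dom.create() would start it
    conn.close()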

I've been looking at corosync, pacemaker, virtlockd, sanlock, GFS2, OCFS2,
GlusterFS, CephFS, Ceph RBD and other solutions.  I admit that I'm quite
confused.  If oVirt, with its embedded GlusterFS and its planned
self-hosted engine option, supported LXC, I'd use that.  However, the stars
have not yet aligned for that.

It seems that the most elegant and scalable approach may be to use Ceph RBD
with its native locking mechanism, plus corosync and pacemaker for fencing,
for a number of reasons that are out of scope for this email.
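
To illustrate what I mean by RBD's native locking: as I understand it, an
exclusive advisory lock taken on an image from one host makes the same
attempt fail on any other host until the lock is released.  A tiny sketch
(the image name and lock ids are placeholders of mine):

    # Illustration of RBD's advisory lock primitive.  Image name and lock
    # ids are placeholders.  Run the first call on node A; the equivalent
    # call on node B then exits non-zero until node A's lock is removed.
    import subprocess

    IMAGE = "libvirt-pool/guest01"

    subprocess.check_call(["rbd", "lock", "add", IMAGE, "node-a-lock"])
    # On another host, this fails while node A still holds its lock:
    # subprocess.check_call(["rbd", "lock", "add", IMAGE, "node-b-lock"])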

*My question now is regarding proper locking.  Please see the following
links.  The libvirt hook looks good, but is there any expectation that this
arrangement will become a patch to libvirt itself, as the second link
suggests?*

http://www.wogri.at/en/linux/ceph-libvirt-locking/

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-August/003887.html
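
Something along these lines is what I have in mind for the hook: take an
exclusive RBD lock when libvirt prepares a guest and drop it when the guest
is released.  This is only my own rough Python rendition, not the script from
the links above; the one-image-per-domain naming convention and the lock id
are assumptions on my part:

    #!/usr/bin/env python
    # Rough sketch of /etc/libvirt/hooks/qemu (or .../lxc): take an
    # exclusive advisory lock on the guest's RBD image before the guest
    # starts and drop it after the guest is torn down.  The naming
    # convention "libvirt-pool/<domain>" and the lock id are assumptions.
    import json
    import subprocess
    import sys

    POOL = "libvirt-pool"
    LOCK_ID = "libvirt-hook"

    def rbd(*args):
        return subprocess.check_output(("rbd",) + args)

    def main():
        domain, operation = sys.argv[1], sys.argv[2]
        image = "%s/%s" % (POOL, domain)

        if operation == "prepare":
            # Exits non-zero if another host already holds the lock, which
            # makes libvirt abort the guest start on this node.
            rbd("lock", "add", image, LOCK_ID)
        elif operation == "release":
            # "rbd lock remove" needs the locker id, so look it up first.
            # The JSON shape differs between Ceph releases (list vs. dict
            # keyed by lock id), so handle both.
            locks = json.loads(rbd("lock", "list", "--format", "json", image))
            entries = (locks if isinstance(locks, list)
                       else [dict(v, id=k) for k, v in locks.items()])
            for entry in entries:
                if entry.get("id") == LOCK_ID:
                    rbd("lock", "remove", image, LOCK_ID, entry["locker"])

    if __name__ == "__main__":
        main()

Pacemaker fencing would then handle the case where a node dies while still
holding locks.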

Can anyone guide me on how, in theory, to build a very lock-safe 5-node
active/active KVM cluster atop Ceph RBD?  Must I use sanlock, with its need
for NFS or GFS2 and their performance bottlenecks?  Does your answer also
work for LXC (setting aside the current state of LXC live migration)?

Thanks,
Joshua
-- 
Joshua Dotson
Founder, Wrale Ltd