[Linux-cluster] cluster between 2 Xen guests where guests are on different hosts
Lon Hohberger
lhh at redhat.com
Fri Oct 24 15:37:30 UTC 2008
On Fri, 2008-10-24 at 10:09 -0400, Jeff Sturm wrote:
> Santosh,
>
> The hosts are responsible for fencing the guests, so, as far as I know,
> it is not possible to use fence_xvm without also configuring fence_xvmd.
Correct.
> In our configuration we run an "inner" cluster amongst the DomU guests,
> and an "outer" cluster amongst the Dom0 hosts. The outer cluster starts
> fence_xvmd whenever cman starts. The fence_xvmd daemon listens for
> multicast traffic from fence_xvm. We have a dedicated VLAN for this
> traffic in our configuration. (Make sure your routing tables are
> adjusted for this, if needed--whereas aisexec figures out what
> interfaces to use for multicast automatically based on the bind address,
> fence_xvm does not.)
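For example (a sketch: 225.0.0.12 is fence_xvm's default multicast
address, and eth1 here is just a placeholder for your dedicated VLAN
interface):

    # send fence_xvm multicast traffic out the dedicated VLAN interface
    ip route add 225.0.0.12/32 dev eth1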
> If your Dom0 hosts are not part of a cluster, it may be possible to run
> fence_xvmd standalone. We have not attempted to do so, so I can't say
> whether it can work.
fence_xvmd -LX (you'll need to add it to rc.local or something similar,
since nothing will start it for you)
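A minimal sketch of that (my reading of the flags: -L runs fence_xvmd
without cluster membership, -X skips reading configuration from CCS;
adjust the path to wherever your distribution installs it):

    # /etc/rc.local on each dom0: start fence_xvmd standalone at boot
    /sbin/fence_xvmd -LX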
You could (in theory) do fencing using multiple fence_xvm agent
instances with different keys (one per physical host), so that if
fencing via one key succeeds, you also ensure the other host isn't
running the guest.
For example, if you had two keys on the guests, you could do the
following:
* dd if=/dev/urandom of=/etc/cluster/fence_xvm-host1.key bs=4k count=1
* dd if=/dev/urandom of=/etc/cluster/fence_xvm-host2.key bs=4k count=1
* scp /etc/cluster/fence_xvm-host1.key host1:/etc/cluster/fence_xvm.key
* scp /etc/cluster/fence_xvm-host2.key host2:/etc/cluster/fence_xvm.key
(don't forget to copy /etc/cluster/fence_xvm* to the other virtual guest
too!)
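Before wiring the keys into cluster.conf, you can sanity-check them
with a manual request from one of the guests (a sketch - the "null"
operation should just exercise the path without killing anything):

    # from virt2: ask the host holding the host1 key about virt1
    fence_xvm -k /etc/cluster/fence_xvm-host1.key -H virt1 -o null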
Set up two fencing devices:
  <fencedevices>
    <fencedevice agent="fence_xvm" name="host1"
                 key_file="/etc/cluster/fence_xvm-host1.key" />
    <fencedevice agent="fence_xvm" name="host2"
                 key_file="/etc/cluster/fence_xvm-host2.key" />
  </fencedevices>
Set up the nodes to fence both:
  <clusternodes>
    <clusternode name="virt1.mydomain.com">
      <fence>
        <method name="hack-xvm">
          <device name="host1" domain="virt1"/>
          <device name="host2" domain="virt1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="virt2.mydomain.com">
      <fence>
        <method name="hack-xvm">
          <device name="host2" domain="virt2"/>
          <device name="host1" domain="virt2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
... maybe that would work.
The reason you typically need a cluster in dom0 is that we use
checkpointing to distribute the states of the VMs cluster-wide. If
there's no cluster, then you can't distribute the states. Now, key
files are, well, key here - fence_xvmd assumes that the admin does the
correct thing (not reusing key files across multiple clusters), so it
returns "ok" if it has no information about a guest...
Suppose virt1 (on host1) fails:

* virt2 sends a fencing request for virt1 using the host2 key, which
only host2 listens for.
  - "Never heard of that domain, so it must be safe"
* virt2 sends a fencing request for virt1 using the host1 key, which
only host1 listens for.
  - "Ok, it's running locally -> kill it and return success"
-- Lon