[Linux-cluster] Re: cluster.conf, was: Cluster config. advice sought
brem belguebli
brem.belguebli at gmail.com
Thu Dec 17 12:42:04 UTC 2009
Hi Wolf,
I have no Xen setup at hand, so I cannot tell you for certain whether the
cluster.conf you posted is fine.
I understand that this cluster.conf reflects what you think it should look
like after reading the various posts, and that it is not the one you have
in production right now, correct?
To test it without disturbing your production setup: since the use_virsh
and path parameters are per-VM, you could create a throwaway test VM with
the same parameters and see whether you get the same behaviour.
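For instance, a test stanza might look like the following sketch (the name
vm_test is a placeholder of mine, not something from your config; autostart
is set to 0 so the cluster does not start it on its own):

```xml
<!-- Hypothetical test entry: vm_test is a placeholder name.
     use_virsh and path match the production stanzas so the
     behaviour should be comparable; autostart="0" keeps it
     from being started automatically by rgmanager. -->
<vm autostart="0" use_virsh="0" domain="bias-station1"
    exclusive="0" migrate="live" name="vm_test" path="/rootfs"
    recovery="restart"/>
```

You can then enable and relocate just that one service and watch the logs,
without touching the production VM.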
Brem
2009/12/17 Wolf Siedler <siedler at hrd-asia.com>:
> Dear Brem,
>
> Thanks for taking time to look at my problem.
>
>> Try to look in linux-cluster archive, your problem looks similar to
>> some others that were posted around October/November.
>> There were things to check with use_virsh, path etc... in the
>> cluster.conf...
>
> I did, and this was actually the reason for my original question. (I am
> definitely open to testing, but there is one production VM running in
> the cluster, which in turn limits my options for configuration changes
> and restarts.)
> After studying the thread you described, I came up with this cluster.conf:
> ===quote===
> <?xml version="1.0"?>
> <cluster alias="example_cluster_1" config_version="81" name="example_cluster_1">
>     <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="30"/>
>     <clusternodes>
>         <clusternode name="station1.example.com" nodeid="1" votes="1">
>             <fence>
>                 <method name="1">
>                     <device name="station1_fenced"/>
>                 </method>
>             </fence>
>         </clusternode>
>         <clusternode name="station2.example.com" nodeid="2" votes="1">
>             <fence>
>                 <method name="1">
>                     <device name="station2_fenced"/>
>                 </method>
>             </fence>
>         </clusternode>
>     </clusternodes>
>     <cman expected_votes="3" two_node="0"/>
>     <fencedevices>
>         <fencedevice agent="fence_ipmilan" ipaddr="172.16.10.91"
>             login="ipmi_admin" name="station1_fenced" operation="off" passwd="secret"/>
>         <fencedevice agent="fence_ipmilan" ipaddr="172.16.10.92"
>             login="ipmi_admin" name="station2_fenced" operation="off" passwd="secret"/>
>     </fencedevices>
>     <rm>
>         <failoverdomains>
>             <failoverdomain name="bias-station1" nofailback="0"
>                 ordered="0" restricted="0">
>                 <failoverdomainnode name="station1.example.com" priority="1"/>
>             </failoverdomain>
>             <failoverdomain name="bias-station2" nofailback="0"
>                 ordered="0" restricted="0">
>                 <failoverdomainnode name="station2.example.com" priority="1"/>
>             </failoverdomain>
>         </failoverdomains>
>         <resources/>
>         <vm autostart="1" use_virsh="0" domain="bias-station1"
>             exclusive="0" migrate="live" name="vm_mailserver" path="/rootfs"
>             recovery="restart"/>
>         <vm autostart="1" use_virsh="0" domain="bias-station2"
>             exclusive="0" migrate="live" name="vm_ldapserver" path="/rootfs"
>             recovery="restart"/>
>         <vm autostart="1" use_virsh="0" domain="bias-station2"
>             exclusive="0" migrate="live" name="vm_adminserver" path="/rootfs"
>             recovery="restart"/>
>     </rm>
>     <quorumd interval="3" label="xen_qdisk" min_score="1" tko="23"
>         votes="1"/>
> </cluster>
> ===unquote===
>
> You will notice that I already included use_virsh.
> Does this cluster.conf look OK?
>
> As said before, I would highly appreciate any advice or suggestions you
> are willing to give.
>
> Regards,
> Wolf
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>