[Linux-cluster] openais issue

brem belguebli brem.belguebli at gmail.com
Fri Sep 25 22:07:42 UTC 2009


Have you started your VM via rgmanager (clusvcadm -e vm:guest1), or
using xm commands outside of cluster control (or maybe through an
automatic init script)?

When clustered, you should never start services (manually or through
an automatic init script) outside of cluster control.

The thing to do would be to stop your VM on all the nodes with the
appropriate xm command (I'm not using Xen myself) and try to start it
with clusvcadm.

Then see if it gets started on all nodes (and send the clustat output).
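
For example, something along these lines (just a sketch; as I said I'm
not using Xen myself, so double-check the xm syntax, and the target
node below is only an example):

--
# on every node where "xm li" still shows guest1:
xm shutdown guest1        # or "xm destroy guest1" if shutdown hangs

# then start it exactly once, under rgmanager control:
clusvcadm -e vm:guest1 -m cvtst1

# and verify it is running on a single node:
clustat
--

Also check that the xendomains init script is not auto-starting the
domain at boot on each node (a guess on my part, but a common cause of
this symptom):

--
chkconfig --list xendomains
ls -l /etc/xen/auto        # guest1 should not be auto-started from here
--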



2009/9/25 Paras pradhan <pradhanparas at gmail.com>:
> Ok. Please see below. My VM is running on all nodes even though
> clustat says it is stopped.
>
> --
> [root@cvtst1 ~]# clustat
> Cluster Status for test @ Fri Sep 25 16:52:34 2009
> Member Status: Quorate
>
>  Member Name                             ID   Status
>  ------ ----                             ---- ------
>  cvtst2                                  1    Online, rgmanager
>  cvtst1                                  2    Online, Local, rgmanager
>  cvtst3                                  3    Online, rgmanager
>
>  Service Name                            Owner (Last)                State
>  ------- ----                            ----- ------                -----
>  vm:guest1                               (none)                      stopped
> [root@cvtst1 ~]#
>
>
> ---
> Output of xm li on cvtst1:
>
> --
> [root@cvtst1 ~]# xm li
> Name                                      ID Mem(MiB) VCPUs State   Time(s)
> Domain-0                                   0     3470     2 r-----  28939.4
> guest1                                     7      511     1 -b----   7727.8
>
> Output of xm li on cvtst2:
>
> --
> [root@cvtst2 ~]# xm li
> Name                                      ID Mem(MiB) VCPUs State   Time(s)
> Domain-0                                   0     3470     2 r-----  31558.9
> guest1                                    21      511     1 -b----   7558.2
> ---
>
> Thanks
> Paras.
>
>
>
> On Fri, Sep 25, 2009 at 4:22 PM, brem belguebli
> <brem.belguebli at gmail.com> wrote:
>> It looks like no.
>>
>> Can you send clustat output from when the VM is running on
>> multiple nodes at the same time?
>>
>> And, by the way, another one after having stopped it (clusvcadm -s vm:guest1)?
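>>
>> That is, roughly (a sketch):
>>
>> clustat                   # while the VM runs on several nodes
>> clusvcadm -s vm:guest1
>> clustat                   # after the stop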
>>
>>
>>
>> 2009/9/25 Paras pradhan <pradhanparas at gmail.com>:
>>> Is anyone else having an issue like mine? The virtual machine service
>>> is not being handled properly by the cluster.
>>>
>>>
>>> Thanks
>>> Paras.
>>>
>>> On Mon, Sep 21, 2009 at 9:55 AM, Paras pradhan <pradhanparas at gmail.com> wrote:
>>>> Ok, here is my cluster.conf file:
>>>>
>>>> --
>>>> [root@cvtst1 cluster]# more cluster.conf
>>>> <?xml version="1.0"?>
>>>> <cluster alias="test" config_version="9" name="test">
>>>>        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
>>>>        <clusternodes>
>>>>                <clusternode name="cvtst2" nodeid="1" votes="1">
>>>>                        <fence/>
>>>>                </clusternode>
>>>>                <clusternode name="cvtst1" nodeid="2" votes="1">
>>>>                        <fence/>
>>>>                </clusternode>
>>>>                <clusternode name="cvtst3" nodeid="3" votes="1">
>>>>                        <fence/>
>>>>                </clusternode>
>>>>        </clusternodes>
>>>>        <cman/>
>>>>        <fencedevices/>
>>>>        <rm>
>>>>                <failoverdomains>
>>>>                        <failoverdomain name="myfd1" nofailback="0" ordered="1" restricted="0">
>>>>                                <failoverdomainnode name="cvtst2" priority="3"/>
>>>>                                <failoverdomainnode name="cvtst1" priority="1"/>
>>>>                                <failoverdomainnode name="cvtst3" priority="2"/>
>>>>                        </failoverdomain>
>>>>                </failoverdomains>
>>>>                <resources/>
>>>>                <vm autostart="1" domain="myfd1" exclusive="0" max_restarts="0"
>>>>                        name="guest1" path="/vms" recovery="restart" restart_expire_time="0"/>
>>>>        </rm>
>>>> </cluster>
>>>> [root@cvtst1 cluster]#
>>>> ------
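>>>>
>>>> (Side note: the fence sections above are still empty; if I edit
>>>> cluster.conf later, e.g. to add fencing, my understanding is that
>>>> config_version must be bumped and the file propagated to all nodes.
>>>> On RHEL 5 cluster suite something like the following should work;
>>>> the version number is just illustrative:)
>>>>
>>>> --
>>>> # edit /etc/cluster/cluster.conf and raise config_version to 10
>>>> ccs_tool update /etc/cluster/cluster.conf
>>>> cman_tool version -r 10
>>>> --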
>>>>
>>>> Thanks!
>>>> Paras.
>>>>
>>>>
>>>> On Sun, Sep 20, 2009 at 9:44 AM, Volker Dormeyer <volker at ixolution.de> wrote:
>>>>> On Fri, Sep 18, 2009 at 05:08:57PM -0500,
>>>>> Paras pradhan <pradhanparas at gmail.com> wrote:
>>>>>> I am using cluster suite for HA of Xen virtual machines. Now I am
>>>>>> having another problem. When I start my Xen VM on one node, it
>>>>>> also starts on the other nodes. Which daemon controls this?
>>>>>
>>>>> This is usually done by clurgmgrd (which is part of the rgmanager
>>>>> package). To me, this sounds like a configuration problem. Maybe
>>>>> you can post your cluster.conf?
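>>>>>
>>>>> In the meantime, you can verify that rgmanager is running on every
>>>>> node and see how the cluster views the service, for example (just
>>>>> a sketch):
>>>>>
>>>>> service rgmanager status
>>>>> clustat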
>>>>>
>>>>> Regards,
>>>>> Volker
>>>>>
>>>>
>>>
>>
>



