[Linux-cluster] failover domain and service start

emmanuel segura emi2fast at gmail.com
Wed Dec 18 19:13:04 UTC 2013


From the vm.sh script, I saw that it tries to discover the hypervisor you are
using; with hypervisor="xen" you force the script to use Xen.
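For illustration, the attribute slots into the vm resource definition like this (a sketch assembled from the configuration quoted later in this thread; only the hypervisor attribute is new):

```xml
<vm autostart="1" domain="myfd1" exclusive="0" max_restarts="0"
    name="guest1" path="/vms_c" hypervisor="xen" recovery="restart"
    restart_expire_time="0" use_virsh="0"/>
```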


2013/12/18 Paras pradhan <pradhanparas at gmail.com>

> The only parameter I don't have is:  hypervisor="xen"
>
> Does it matter?
>
> This is what i have:
>
> <vm autostart="1" domain="myfd1" exclusive="0" max_restarts="0"
> name="guest1" path="/vms_c" recovery="restart" restart_expire_time="0"
> use_virsh="0"/>
>
> -Paras.
>
>
> On Wed, Dec 18, 2013 at 12:24 PM, emmanuel segura <emi2fast at gmail.com>wrote:
>
>> <vm name="guest1" hypervisor="xen" path="/vms_c" use_virsh="0">
>>
>> Increment the config version of cluster.conf and run ccs_tool update
>> /etc/cluster/cluster.conf
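The two steps above (bump config_version, then push) can be sketched in shell. This is a demonstration on a temporary sample file rather than the live /etc/cluster/cluster.conf; the ccs_tool call in the comment is the real push step from the thread:

```shell
# Bump config_version in a sample cluster.conf (the live file would be
# /etc/cluster/cluster.conf; a temp file is used here for safety).
conf=$(mktemp)
echo '<cluster name="mycluster" config_version="41">' > "$conf"
old=$(sed -n 's/.*config_version="\([0-9]*\)".*/\1/p' "$conf")
sed -i "s/config_version=\"$old\"/config_version=\"$((old + 1))\"/" "$conf"
grep -o 'config_version="[0-9]*"' "$conf"    # config_version="42"
# On a real cluster, then run: ccs_tool update /etc/cluster/cluster.conf
```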
>>
>>
>> 2013/12/18 Paras pradhan <pradhanparas at gmail.com>
>>
>>> Emmanuel,
>>>
>>> With export OCF_RESKEY_name="guest1" ; export OCF_RESKEY_path="/vms_c" ;
>>> export OCF_RESKEY_use_virsh=0
>>>
>>> , I can start the vm using : /usr/share/cluster/vm.sh
>>>
>>> I am wondering how to make the changes to cluster.conf or other files so
>>> that we can start the vm using clusvcadm.
>>>
>>> Thanks, and sorry for the delay.
>>>
>>> Paras.
>>>
>>>
>>> On Thu, Dec 5, 2013 at 12:36 PM, Paras pradhan <pradhanparas at gmail.com>wrote:
>>>
>>>> Emmanuel, no. I was busy with some other things; I will test and let you
>>>> know ASAP!
>>>>
>>>>
>>>> On Thu, Dec 5, 2013 at 12:26 PM, emmanuel segura <emi2fast at gmail.com>wrote:
>>>>
>>>>> Hello Paras
>>>>>
>>>>> did you solve the problem?
>>>>>
>>>>> Thanks
>>>>> Emmanuel
>>>>>
>>>>>
>>>>> 2013/11/25 emmanuel segura <emi2fast at gmail.com>
>>>>>
>>>>>> Hello Paras
>>>>>>
>>>>>> Maybe I found the solution. In the validate_all function we have:
>>>>>>
>>>>>>        if [ -z "$OCF_RESKEY_hypervisor" ] ||
>>>>>>            [ "$OCF_RESKEY_hypervisor" = "auto" ]; then
>>>>>>                 export OCF_RESKEY_hypervisor="`virsh version | grep
>>>>>> \"Running hypervisor:\" | awk '{print $3}' | tr A-Z a-z`"
>>>>>>                 if [ -z "$OCF_RESKEY_hypervisor" ]; then
>>>>>>                         ocf_log err "Could not determine Hypervisor"
>>>>>>                         return $OCF_ERR_ARGS
>>>>>>                 fi
>>>>>>                 echo Hypervisor: $OCF_RESKEY_hypervisor
>>>>>>         fi
>>>>>>
>>>>>>         #
>>>>>>         # Xen hypervisor only for when use_virsh = 0.
>>>>>>         #
>>>>>>         if [ "$OCF_RESKEY_use_virsh" = "0" ]; then
>>>>>>                 if [ "$OCF_RESKEY_hypervisor" != "xen" ]; then
>>>>>>                         ocf_log err "Cannot use
>>>>>> $OCF_RESKEY_hypervisor hypervisor without using virsh"
>>>>>>                         return $OCF_ERR_ARGS
>>>>>>                 fi
>>>>>>
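The auto-detection branch quoted above reduces to a single pipeline over `virsh version` output. It can be exercised with a canned sample line (the sample string is made up; the pipeline itself is taken from the script):

```shell
# The pipeline vm.sh uses to derive OCF_RESKEY_hypervisor, fed a canned
# "virsh version" line instead of live virsh output.
sample="Running hypervisor: Xen 3.1.0"
hv=$(echo "$sample" | grep "Running hypervisor:" | awk '{print $3}' | tr A-Z a-z)
echo "$hv"    # xen
```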
>>>>>> With the following environment variables set, when I tested by hand the
>>>>>> agent uses xm commands:
>>>>>>
>>>>>> env | grep OCF
>>>>>> OCF_RESKEY_hypervisor=xen
>>>>>> OCF_RESKEY_path=/vms_c
>>>>>> OCF_RESKEY_name=guest1
>>>>>> OCF_RESKEY_use_virsh=0
>>>>>>
>>>>>> [root at client ~]# /usr/share/cluster/vm.sh status
>>>>>> Management tool: xm
>>>>>> <err>    Cannot find 'xm'; is it installed?
>>>>>> [vm.sh] Cannot find 'xm'; is it installed?
>>>>>>
>>>>>>
>>>>>> I don't have Xen installed to test it.
>>>>>>
>>>>>>
>>>>>>                 if [ -n "$OCF_RESKEY_xmlfile" ]; then
>>>>>>                         ocf_log err "Cannot use xmlfile if use_virsh
>>>>>> is set to 0"
>>>>>>                         return $OCF_ERR_ARGS
>>>>>>                 fi
>>>>>>
>>>>>>
>>>>>>
>>>>>> 2013/11/25 emmanuel segura <emi2fast at gmail.com>
>>>>>>
>>>>>>> Hello paras
>>>>>>>
>>>>>>> The export command was missing in front of the variables; the correct
>>>>>>> way is this:
>>>>>>>
>>>>>>> export OCF_RESKEY_name="guest1" ; export OCF_RESKEY_path="/vms_c" ;
>>>>>>> export OCF_RESKEY_use_virsh=0
>>>>>>> [root at client ~]# env | grep OCF
>>>>>>> OCF_RESKEY_path=/vms_c
>>>>>>> OCF_RESKEY_name=guest1
>>>>>>> OCF_RESKEY_use_virsh=0
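The difference is easy to demonstrate with a child shell standing in for vm.sh: a plain assignment stays local to the interactive shell, while an exported one is inherited by the child process:

```shell
# Only exported variables reach child processes such as vm.sh.
OCF_RESKEY_name="guest1"          # plain assignment: local to this shell
export OCF_RESKEY_path="/vms_c"   # exported: inherited by children
sh -c 'echo "name=[$OCF_RESKEY_name] path=[$OCF_RESKEY_path]"'
# prints: name=[] path=[/vms_c]
```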
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> 2013/11/25 emmanuel segura <emi2fast at gmail.com>
>>>>>>>
>>>>>>>> Hello Paras
>>>>>>>>
>>>>>>>> I have CentOS 6; I don't know if it is different on Red Hat 5, but I
>>>>>>>> saw that the vm.sh script calls the do_start function when the start
>>>>>>>> parameter is given:
>>>>>>>>
>>>>>>>> do_start()
>>>>>>>> {
>>>>>>>>         if [ "$OCF_RESKEY_use_virsh" = "1" ]; then
>>>>>>>>                 do_virsh_start $*
>>>>>>>>                 return $?
>>>>>>>>         fi
>>>>>>>>
>>>>>>>>         do_xm_start $*
>>>>>>>>         return $?
>>>>>>>> }
>>>>>>>>
>>>>>>>> I don't know why, because vm.sh uses virsh when you launch the
>>>>>>>> script by hand :(
>>>>>>>>
>>>>>>>>
>>>>>>>> 2013/11/25 Paras pradhan <pradhanparas at gmail.com>
>>>>>>>>
>>>>>>>>> Looks like use_virsh=0 has no effect.
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> [root at cvtst3 ~]# export OCF_RESKEY_name="guest1" ;
>>>>>>>>> OCF_RESKEY_path="/vms_c" ; OCF_RESKEY_use_virsh=0
>>>>>>>>> [root at cvtst3 ~]# set -x
>>>>>>>>> ++ printf '\033]0;%s@%s:%s\007' root cvtst3 '~'
>>>>>>>>> [root at cvtst3 ~]# /usr/share/cluster/vm.sh start
>>>>>>>>> + /usr/share/cluster/vm.sh start
>>>>>>>>> Hypervisor: xen
>>>>>>>>> Management tool: virsh
>>>>>>>>> Hypervisor URI: xen:///
>>>>>>>>> Migration URI format: xenmigr://target_host/
>>>>>>>>> Virtual machine guest1 is error: failed to get domain 'guest1'
>>>>>>>>> error: Domain not found: xenUnifiedDomainLookupByName
>>>>>>>>>
>>>>>>>>> <debug>  virsh -c xen:/// start guest1
>>>>>>>>> error: failed to get domain 'guest1'
>>>>>>>>> error: Domain not found: xenUnifiedDomainLookupByName
>>>>>>>>>
>>>>>>>>> ++ printf '\033]0;%s@%s:%s\007' root cvtst3 '~'
>>>>>>>>>  [root at cvtst3 ~]# set +x
>>>>>>>>> + set +x
>>>>>>>>> ---
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> -Paras.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Fri, Nov 22, 2013 at 5:22 PM, emmanuel segura <
>>>>>>>>> emi2fast at gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hello Paras
>>>>>>>>>>
>>>>>>>>>> Stop the vm and retry starting it with the following commands, and
>>>>>>>>>> if you get an error, show it:
>>>>>>>>>>
>>>>>>>>>> export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c" ;
>>>>>>>>>> OCF_RESKEY_use_virsh=0
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> set -x
>>>>>>>>>> /usr/share/cluster/vm.sh start
>>>>>>>>>> set +x
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> 2013/11/22 Paras pradhan <pradhanparas at gmail.com>
>>>>>>>>>>
>>>>>>>>>>> I found the workaround to my issue. What i did is:
>>>>>>>>>>>
>>>>>>>>>>> run the vm using xm and then start it using clusvcadm. This works
>>>>>>>>>>> for me for the time being, but I am not sure what is causing this.
>>>>>>>>>>> This is what I did:
>>>>>>>>>>>
>>>>>>>>>>> xm create /vms_c/guest1
>>>>>>>>>>> clusvcadm -e vm:guest1  (this detects that guest1 is up and
>>>>>>>>>>> quickly changes its status to success)
>>>>>>>>>>>
>>>>>>>>>>> Although I used virt-install, it also creates a Xen-format
>>>>>>>>>>> configuration file, and since use_virsh=0 it should be able to use
>>>>>>>>>>> this Xen-format config file. This is my vm configuration:
>>>>>>>>>>>
>>>>>>>>>>> ---
>>>>>>>>>>> name = "guest1"
>>>>>>>>>>> maxmem = 2048
>>>>>>>>>>> memory = 512
>>>>>>>>>>> vcpus = 1
>>>>>>>>>>> #cpus="1-2"
>>>>>>>>>>> bootloader = "/usr/bin/pygrub"
>>>>>>>>>>> on_poweroff = "destroy"
>>>>>>>>>>> on_reboot = "restart"
>>>>>>>>>>> on_crash = "restart"
>>>>>>>>>>> vfb = [  ]
>>>>>>>>>>> disk = [ "tap:aio:/vms_c/guest1.img,xvda,w",
>>>>>>>>>>> "tap:aio:/vms_c/guest1-disk.img,xvdb,w" ]
>>>>>>>>>>> vif = [ "rate=10MB/s,mac=00:16:3e:6b:be:71,bridge=xenbr0" ]
>>>>>>>>>>>
>>>>>>>>>>> ---
>>>>>>>>>>>
>>>>>>>>>>> Thanks for your help, Emmanuel! I really appreciate it.
>>>>>>>>>>>
>>>>>>>>>>> -Paras.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Nov 22, 2013 at 11:10 AM, emmanuel segura <
>>>>>>>>>>> emi2fast at gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> OK, but your vm doesn't start on other nodes; I think it's due to
>>>>>>>>>>>> configuration problems.
>>>>>>>>>>>> ================================================================
>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: <notice> start on vm
>>>>>>>>>>>> "guest1" returned 1 (generic error)
>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: <warning> #68: Failed
>>>>>>>>>>>> to start vm:guest1; return value: 1
>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: <notice> Stopping
>>>>>>>>>>>> service vm:guest1
>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: <notice> Service
>>>>>>>>>>>> vm:guest1 is recovering
>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: <warning> #71:
>>>>>>>>>>>> Relocating failed service vm:guest1
>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: <notice> Service
>>>>>>>>>>>> vm:guest1 is stopped
>>>>>>>>>>>> ================================================================
>>>>>>>>>>>> In a few words, try this on every cluster node:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c"
>>>>>>>>>>>>
>>>>>>>>>>>> set -x
>>>>>>>>>>>> /usr/share/cluster/vm.sh start
>>>>>>>>>>>> /usr/share/cluster/vm.sh stop
>>>>>>>>>>>>
>>>>>>>>>>>> After you check that your vm can start and stop on every cluster
>>>>>>>>>>>> node:
>>>>>>>>>>>>
>>>>>>>>>>>> /usr/share/cluster/vm.sh start
>>>>>>>>>>>> /usr/share/cluster/vm.sh migrate name_of_a_cluster_node
>>>>>>>>>>>>
>>>>>>>>>>>> can you show me your vm configuration under /vms_c?
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks
>>>>>>>>>>>> Emmanuel
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> 2013/11/22 Paras pradhan <pradhanparas at gmail.com>
>>>>>>>>>>>>
>>>>>>>>>>>>> Also, to test, I set use_virsh=1: same problem. The vm
>>>>>>>>>>>>> does not start up if the failover domain members are offline.
>>>>>>>>>>>>>
>>>>>>>>>>>>> -Paras.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Fri, Nov 22, 2013 at 10:37 AM, Paras pradhan <
>>>>>>>>>>>>> pradhanparas at gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Well, that seems theoretically correct. But right now my
>>>>>>>>>>>>>> cluster has use_virsh=0 and I don't have any issue until the
>>>>>>>>>>>>>> members of the failover domain are offline. So I am wondering
>>>>>>>>>>>>>> what clusvcadm -e is looking for when I don't use virsh.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Fri, Nov 22, 2013 at 10:05 AM, emmanuel segura <
>>>>>>>>>>>>>> emi2fast at gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> If you used virt-install, I think you need to use virsh; the
>>>>>>>>>>>>>>> cluster uses the xm Xen command if you have use_virsh=0 and
>>>>>>>>>>>>>>> virsh if you have use_virsh=1 in your cluster config.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 2013/11/22 Paras pradhan <pradhanparas at gmail.com>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I use virt-install to create virtual machines. Is there a
>>>>>>>>>>>>>>>> way to debug why clusvcadm -e vm:guest1 is failing? vm.sh
>>>>>>>>>>>>>>>> seems to use virsh, and my cluster.conf has use_virsh=0.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Paras.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Nov 21, 2013 5:53 PM, "emmanuel segura" <
>>>>>>>>>>>>>>>> emi2fast at gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> But did you configure your vm with Xen tools or using
>>>>>>>>>>>>>>>>> virt-manager?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 2013/11/22 Paras pradhan <pradhanparas at gmail.com>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Well, no, I don't want to use virsh. But as we are
>>>>>>>>>>>>>>>>>> debugging with virsh, I have now found a strange issue.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I exported an XML file and imported it on all nodes, then ran:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> ---
>>>>>>>>>>>>>>>>>> name="guest1" path="/vms_c"
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c"
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> set -x
>>>>>>>>>>>>>>>>>> /usr/share/cluster/vm.sh start
>>>>>>>>>>>>>>>>>> set +x
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>> The vm starts now. BUT from the cluster service, clusvcadm
>>>>>>>>>>>>>>>>>> -e vm:guest1, same error.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> So if I populate all my domains' config files to all my
>>>>>>>>>>>>>>>>>> cluster nodes and set use_virsh=1, then the issue is
>>>>>>>>>>>>>>>>>> resolved. But this is a lot of work for those who have
>>>>>>>>>>>>>>>>>> hundreds of vms.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> vm.sh start uses virsh. Is there a way to tell it not to
>>>>>>>>>>>>>>>>>> use virsh?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>>>>>>> Paras.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 5:19 PM, emmanuel segura <
>>>>>>>>>>>>>>>>>> emi2fast at gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> If you are using virsh to manage your vms, change this
>>>>>>>>>>>>>>>>>>> in your cluster.conf:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> from
>>>>>>>>>>>>>>>>>>> use_virsh="0"
>>>>>>>>>>>>>>>>>>> to
>>>>>>>>>>>>>>>>>>> use_virsh="1"
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> 2013/11/22 Paras pradhan <pradhanparas at gmail.com>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> I think I found the problem.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> virsh list --all does not show my vm. This is because it
>>>>>>>>>>>>>>>>>>>> was created on another node, and that node has it. Now I
>>>>>>>>>>>>>>>>>>>> want to start the service on a different node where it was
>>>>>>>>>>>>>>>>>>>> not created, i.e. where virsh list --all does not have an
>>>>>>>>>>>>>>>>>>>> entry. Is it possible to create this entry using a Xen
>>>>>>>>>>>>>>>>>>>> config file? Looks like this is now a Xen issue rather
>>>>>>>>>>>>>>>>>>>> than a linux-cluster issue. :)
>>>>>>>>>>>>>>>>>>>> Paras.
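One common way to make a domain known to libvirt on a node where it was not created is to dump its XML where it exists and define it on the nodes that lack it. A sketch, assuming a reachable host named othernode and a working virsh on both sides (not from the thread itself):

```shell
# On the node where `virsh list --all` already shows guest1:
virsh dumpxml guest1 > /tmp/guest1.xml
# Copy the definition over and register it with libvirt there:
scp /tmp/guest1.xml othernode:/tmp/guest1.xml
ssh othernode "virsh define /tmp/guest1.xml"
# Afterwards `virsh list --all` on othernode should list guest1 (shut off).
```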
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:58 PM, emmanuel segura <
>>>>>>>>>>>>>>>>>>>> emi2fast at gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> 1: did you verify your Xen live-migration configuration?
>>>>>>>>>>>>>>>>>>>>> 2: where does your vm disk reside?
>>>>>>>>>>>>>>>>>>>>> 3: can you see your vm defined on every cluster node
>>>>>>>>>>>>>>>>>>>>> with xm list?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan <pradhanparas at gmail.com>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> This is what I get
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Hypervisor: xen
>>>>>>>>>>>>>>>>>>>>>> Management tool: virsh
>>>>>>>>>>>>>>>>>>>>>> Hypervisor URI: xen:///
>>>>>>>>>>>>>>>>>>>>>> Migration URI format: xenmigr://target_host/
>>>>>>>>>>>>>>>>>>>>>> Virtual machine guest1 is error: failed to get domain
>>>>>>>>>>>>>>>>>>>>>> 'guest1'
>>>>>>>>>>>>>>>>>>>>>> error: Domain not found: xenUnifiedDomainLookupByName
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> <debug>  virsh -c xen:/// start guest1
>>>>>>>>>>>>>>>>>>>>>> error: failed to get domain 'guest1'
>>>>>>>>>>>>>>>>>>>>>> error: Domain not found: xenUnifiedDomainLookupByName
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> ++ printf '\033]0;%s@%s:%s\007' root vtst3 '~'
>>>>>>>>>>>>>>>>>>>>>> [root at cvtst3 ~]# set +x
>>>>>>>>>>>>>>>>>>>>>> + set +x
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> I am wondering why it failed to get the domain.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> -Paras.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:43 PM, emmanuel segura <
>>>>>>>>>>>>>>>>>>>>>> emi2fast at gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> yes
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan <pradhanparas at gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Well, it is guest1, isn't it?
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> <vm autostart="1" domain="myfd1" exclusive="0"
>>>>>>>>>>>>>>>>>>>>>>>> max_restarts="0" name="guest1" path="/vms_c" recovery="restart"
>>>>>>>>>>>>>>>>>>>>>>>> restart_expire_time="0" use_virsh="0"/>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> It is a vm service if it matters.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> -Paras.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:22 PM, emmanuel segura <
>>>>>>>>>>>>>>>>>>>>>>>> emi2fast at gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Use the service name you defined in your
>>>>>>>>>>>>>>>>>>>>>>>>> cluster.conf.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan <pradhanparas at gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> Says:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> Running in test mode.
>>>>>>>>>>>>>>>>>>>>>>>>>> No resource guest1 of type service found
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> -Paras.
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:07 PM, emmanuel segura
>>>>>>>>>>>>>>>>>>>>>>>>>> <emi2fast at gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> rg_test test /etc/cluster/cluster.conf start
>>>>>>>>>>>>>>>>>>>>>>>>>>> service guest1
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan <pradhanparas at gmail.com
>>>>>>>>>>>>>>>>>>>>>>>>>>> >
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> My failover domain looks like this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> <failoverdomain name="myfd1" nofailback="1"
>>>>>>>>>>>>>>>>>>>>>>>>>>>> ordered="1" restricted="0">
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> <failoverdomainnode name="vtst1" priority="1"/>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> <failoverdomainnode name="vtst3" priority="2"/>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> <failoverdomainnode name="vtst2" priority="3"/>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>                         </failoverdomain>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> I have a vm service that uses this failover
>>>>>>>>>>>>>>>>>>>>>>>>>>>> domain. If my node vtst1 is offline, the service
>>>>>>>>>>>>>>>>>>>>>>>>>>>> does not start on vtst3, which is 2nd in priority.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> I tried to start it with clusvcadm -e vm:guest1,
>>>>>>>>>>>>>>>>>>>>>>>>>>>> and even with the -F and -m options.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> All i see is this error:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]:
>>>>>>>>>>>>>>>>>>>>>>>>>>>> <notice> start on vm "guest1" returned 1 (generic error)
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]:
>>>>>>>>>>>>>>>>>>>>>>>>>>>> <warning> #68: Failed to start vm:guest1; return value: 1
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]:
>>>>>>>>>>>>>>>>>>>>>>>>>>>> <notice> Stopping service vm:guest1
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]:
>>>>>>>>>>>>>>>>>>>>>>>>>>>> <notice> Service vm:guest1 is recovering
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]:
>>>>>>>>>>>>>>>>>>>>>>>>>>>> <warning> #71: Relocating failed service vm:guest1
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]:
>>>>>>>>>>>>>>>>>>>>>>>>>>>> <notice> Service vm:guest1 is stopped
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> How do I debug?
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Paras.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Linux-cluster mailing list
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Linux-cluster at redhat.com
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>>>>>>> esta es mi vida e me la vivo hasta que dios
>>>>>>>>>>>>>>>>>>>>>>>>>>> quiera
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>>
>>
>
>
>


