From orlando.rodriguez107 at alu.ulpgc.es  Mon Dec  2 19:40:12 2013
From: orlando.rodriguez107 at alu.ulpgc.es (Orlando Rodríguez Muñoz)
Date: Mon, 2 Dec 2013 19:40:12 +0000
Subject: [Linux-cluster] cman Unfencing self... fence_node: cannot connect to cman [FAILED]
Message-ID:

Hi,

I have 4 nodes. On the node where I have luci, when cman starts:

# service cman start
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self... fence_node: cannot connect to cman    [FAILED]
Stopping cluster:
   Leaving fence domain...                                 [  OK  ]
   Stopping gfs_controld...                                [  OK  ]
   Stopping dlm_controld...                                [  OK  ]
   Stopping fenced...                                      [  OK  ]
   Stopping cman...                                        [  OK  ]
   Unloading kernel modules...                             [  OK  ]
   Unmounting configfs...                                  [  OK  ]

And my cluster.conf is:

I'm new to this, and I'd like to understand the error cman Unfencing self... fence_node: cannot connect to cman.

# service cman status

Found stale pid file

??

THANKS.

From lists at alteeve.ca  Mon Dec  2 19:47:25 2013
From: lists at alteeve.ca (Digimer)
Date: Mon, 02 Dec 2013 14:47:25 -0500
Subject: [Linux-cluster] cman Unfencing self... fence_node: cannot connect to cman [FAILED]
In-Reply-To:
References:
Message-ID: <529CE3CD.6070200@alteeve.ca>

You haven't configured fencing at all. Do your nodes have IPMI/iLO/iDRAC/etc?

On 02/12/13 14:40, Orlando Rodríguez Muñoz wrote:
> Hi,
>
> I have 4 nodes. On the node where I have luci, when cman starts:
>
> # service cman start
> Starting cluster:
>    ...
>    Unfencing self... fence_node: cannot connect to cman    [FAILED]
> [...]
>
> And my cluster.conf is:
>
> I'm new to this, and I'd like to understand the error /cman Unfencing
> self... fence_node: cannot connect to cman/.
>
> # service cman status
>
> Found stale pid file
>
> ??
>
> THANKS.

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without access to education?

From teigland at redhat.com  Mon Dec  2 20:55:12 2013
From: teigland at redhat.com (David Teigland)
Date: Mon, 2 Dec 2013 15:55:12 -0500
Subject: [Linux-cluster] Add option SO_LINGER to dlm sctp socket when the other endpoint is down.
In-Reply-To: <20131128165009.GD22733@suse.de>
References: <20131113172449.GB17306@redhat.com> <20131119142141.GM3475@suse.de> <20131119165144.GA6885@redhat.com> <20131128165009.GD22733@suse.de>
Message-ID: <20131202205512.GA18393@redhat.com>

On Thu, Nov 28, 2013 at 05:50:09PM +0100, Lars Marowsky-Bree wrote:
> > With the patch, how much more likely would it be for data from a previous
> > connection to interfere with a new connection?  (I had this problem some
> > years ago, and added some safeguards to deal with it, but I don't think
> > they are perfect.  There are cases where a very short time separates
> > connections being closed and new connections being created.)
>
> We've not seen this during testing. We now have positive confirmation
> not just from our tests but also customers testing this on multiple
> nodes.
>
> And I still don't see how this could happen - we close the socket once
> the other node has been fenced or stopped. Short of a false-positive
> fence, we shouldn't see what you describe, right?

It's just a sign that your experience doesn't cover the entire range of
possible dlm usage.  There are cases where fencing is not relevant, there
are cases where this happens without failures, just quick leaving and
rejoining of a lockspace, and my recollection is that userland tests are
the ones that most easily uncovered problems.

> Setting SO_LINGER just before close really doesn't make a big
> difference, since we'd always want to set it. We close the connection
> because we don't want to talk to the other side any more, hence we might
> as well discard anything that is still in the queue.

It's not the direct effects of this that concern me as much as the
potential secondary effects.

> I'm quite interested in driving this discussion forward. Anything more
> we can provide?

Safest would be to enable it with a config option, but I'd like to avoid
that if possible.  Mike Christie also uses dlm sctp and has most recently
tested it, so I'd be interested to get his feedback.

Mike, do you see any potential problems with this patch [1], have any
suggestions, or the ability to try it?

Thanks,
Dave

[1] https://www.redhat.com/archives/linux-cluster/2013-November/msg00032.html

From lmb at suse.de  Mon Dec  2 22:04:30 2013
From: lmb at suse.de (Lars Marowsky-Bree)
Date: Mon, 2 Dec 2013 23:04:30 +0100
Subject: [Linux-cluster] Add option SO_LINGER to dlm sctp socket when the other endpoint is down.
In-Reply-To: <20131202205512.GA18393@redhat.com>
References: <20131113172449.GB17306@redhat.com> <20131119142141.GM3475@suse.de> <20131119165144.GA6885@redhat.com> <20131128165009.GD22733@suse.de> <20131202205512.GA18393@redhat.com>
Message-ID: <20131202220430.GC3174@suse.de>

On 2013-12-02T15:55:12, David Teigland wrote:

> > And I still don't see how this could happen - we close the socket once
> > the other node has been fenced or stopped. Short of a false-positive
> > fence, we shouldn't see what you describe, right?
> It's just a sign that your experience doesn't cover the entire range of
> possible dlm usage.

What! You mean our testing is imperfect? Liar! ;-)

Well, that's obviously true. Hence why I closed that paragraph on a
question.

> There are cases where fencing is not relevant, there are cases where
> this happens without failures, just quick leaving and rejoining of a
> lockspace, and my recollection is that userland tests are the ones that
> most easily uncovered problems.

Are you referring to libdlm/dlm/tests/usertest? Which of those would you
want us to run, for how long, and in what environment?
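[Editorial note for readers following the thread: the behaviour being debated is the standard SO_LINGER socket-option semantics. A minimal userspace sketch of an abortive close, assuming an ordinary TCP/SCTP socket fd — this is only an illustration of the option, not the actual kernel dlm patch:]

```c
#include <sys/socket.h>
#include <unistd.h>

/* Force an abortive close: with l_onoff=1 and l_linger=0, close()
 * discards anything still queued for transmission and aborts the
 * association, instead of trying to drain the send queue toward a
 * peer that may already be dead. */
static void close_abortive(int fd)
{
        struct linger ling = { .l_onoff = 1, .l_linger = 0 };

        setsockopt(fd, SOL_SOCKET, SO_LINGER, &ling, sizeof(ling));
        close(fd);
}
```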
(That's not just related to this question, of course. We always welcome
feedback on how to do better QA. If you tell us once, we'll keep running
them forever and ever again ;-)

> > because we don't want to talk to the other side any more, hence we might
> > as well discard anything that is still in the queue.
> It's not the direct effects of this that concern me as much as the
> potential secondary effects.

Right. Since I can't reproduce any in practice, I was asking for more
theoretical details (both to investigate whether we can show it's not
possible to hit, or at least to test whether we hit it in practice).

> Safest would be to enable it with a config option, but I'd like to avoid
> that if possible.

Yes, definitely. That'd suck.

> Mike Christie also uses dlm sctp and has most recently
> tested it, so I'd be interested to get his feedback.

I'd also be quite curious whether Mike can reproduce the problem we hit.

Thanks,
Lars

-- 
Architect Storage/HA
SUSE LINUX Products GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 21284 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde

From orlando.rodriguez107 at alu.ulpgc.es  Wed Dec  4 09:51:08 2013
From: orlando.rodriguez107 at alu.ulpgc.es (Orlando Rodríguez Muñoz)
Date: Wed, 4 Dec 2013 09:51:08 +0000
Subject: [Linux-cluster] cman Unfencing self... fence_node: cannot connect to cman [FAILED]
In-Reply-To: <529CE3CD.6070200@alteeve.ca>
References: , <529CE3CD.6070200@alteeve.ca>
Message-ID:

Hi,

I'm using KVM Qemu.

Shared disks: iSCSI
File System: GFS2

________________________________________
From: linux-cluster-bounces at redhat.com [linux-cluster-bounces at redhat.com] on behalf of Digimer [lists at alteeve.ca]
Sent: Monday, 02 December 2013 19:47
To: linux clustering
Subject: Re: [Linux-cluster] cman Unfencing self... fence_node: cannot connect to cman [FAILED]

You haven't configured fencing at all. Do your nodes have
IPMI/iLO/iDRAC/etc?

On 02/12/13 14:40, Orlando Rodríguez Muñoz wrote:
> Hi,
>
> I have 4 nodes. On the node where I have luci, when cman starts:
> [...]

--
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?

--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster

From lists at alteeve.ca  Wed Dec  4 14:26:16 2013
From: lists at alteeve.ca (Digimer)
Date: Wed, 04 Dec 2013 09:26:16 -0500
Subject: [Linux-cluster] cman Unfencing self... fence_node: cannot connect to cman [FAILED]
In-Reply-To:
References: , <529CE3CD.6070200@alteeve.ca>
Message-ID: <529F3B88.60102@alteeve.ca>

If your nodes are VMs, then use fence_virsh.

On 04/12/13 04:51, Orlando Rodríguez Muñoz wrote:
> Hi,
>
> I'm using KVM Qemu.
>
> Shared disks: iSCSI
> File System: GFS2
> [...]

--
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?

--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
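[Editorial note: as an illustration of the fence_virsh suggestion above, a minimal sketch of what one node's fencing could look like in cluster.conf. The hypervisor hostname, credentials, and names here are placeholders, not values from this thread:]

```xml
<clusternode name="node1" nodeid="1">
        <fence>
                <method name="1">
                        <!-- "port" is the libvirt domain name of this guest -->
                        <device name="virsh_host1" port="node1"/>
                </method>
        </fence>
</clusternode>

<fencedevices>
        <!-- login/passwd are for the KVM host running libvirtd -->
        <fencedevice agent="fence_virsh" name="virsh_host1"
                     ipaddr="kvm-host1.example.com" login="root"
                     passwd="secret"/>
</fencedevices>
```

[After increasing config_version and propagating the configuration to all nodes, the "Unfencing self" step should then have a fence device to talk to.]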
From orlando.rodriguez107 at alu.ulpgc.es  Thu Dec  5 16:30:40 2013
From: orlando.rodriguez107 at alu.ulpgc.es (Orlando Rodríguez Muñoz)
Date: Thu, 5 Dec 2013 16:30:40 +0000
Subject: [Linux-cluster] cman Unfencing self... fence_node: cannot connect to cman [FAILED]
In-Reply-To: <529F3B88.60102@alteeve.ca>
References: , <529CE3CD.6070200@alteeve.ca> , <529F3B88.60102@alteeve.ca>
Message-ID:

Thanks

________________________________________
From: linux-cluster-bounces at redhat.com [linux-cluster-bounces at redhat.com] on behalf of Digimer [lists at alteeve.ca]
Sent: Wednesday, 04 December 2013 14:26
To: linux clustering
Subject: Re: [Linux-cluster] cman Unfencing self... fence_node: cannot connect to cman [FAILED]

If your nodes are VMs, then use fence_virsh.

On 04/12/13 04:51, Orlando Rodríguez Muñoz wrote:
> Hi,
>
> I'm using KVM Qemu.
>
> Shared disks: iSCSI
> File System: GFS2
> [...]

--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster

From emi2fast at gmail.com  Thu Dec  5 18:26:09 2013
From: emi2fast at gmail.com (emmanuel segura)
Date: Thu, 5 Dec 2013 19:26:09 +0100
Subject: [Linux-cluster] failover domain and service start
In-Reply-To:
References:
Message-ID:

Hello Paras

Did you solve the problem?

Thanks
Emmanuel

2013/11/25 emmanuel segura

> Hello Paras
>
> Maybe I found the solution. In function validate_all we got:
>
>         if [ -z "$OCF_RESKEY_hypervisor" ] ||
>            [ "$OCF_RESKEY_hypervisor" = "auto" ]; then
>                 export OCF_RESKEY_hypervisor="`virsh version | grep \"Running hypervisor:\" | awk '{print $3}' | tr A-Z a-z`"
>                 if [ -z "$OCF_RESKEY_hypervisor" ]; then
>                         ocf_log err "Could not determine Hypervisor"
>                         return $OCF_ERR_ARGS
>                 fi
>                 echo Hypervisor: $OCF_RESKEY_hypervisor
>         fi
>
>         #
>         # Xen hypervisor only for when use_virsh = 0.
>         #
>         if [ "$OCF_RESKEY_use_virsh" = "0" ]; then
>                 if [ "$OCF_RESKEY_hypervisor" != "xen" ]; then
>                         ocf_log err "Cannot use $OCF_RESKEY_hypervisor hypervisor without using virsh"
>                         return $OCF_ERR_ARGS
>                 fi
>
> With the following environment variables, when I tested by hand, the agent uses xm commands:
>
> env | grep OCF
> OCF_RESKEY_hypervisor=xen
> OCF_RESKEY_path=/vms_c
> OCF_RESKEY_name=guest1
> OCF_RESKEY_use_virsh=0
>
> [root at client ~]# /usr/share/cluster/vm.sh status
> Management tool: xm
> Cannot find 'xm'; is it installed?
> [vm.sh] Cannot find 'xm'; is it installed?
>
> I don't have Xen installed to test it.
>
>                 if [ -n "$OCF_RESKEY_xmlfile" ]; then
>                         ocf_log err "Cannot use xmlfile if use_virsh is set to 0"
>                         return $OCF_ERR_ARGS
>                 fi
>
> 2013/11/25 emmanuel segura
>
>> Hello Paras
>>
>> The export command was missing in front of the variables; the correct way is this:
>>
>> export OCF_RESKEY_name="guest1" ; export OCF_RESKEY_path="/vms_c" ; export OCF_RESKEY_use_virsh=0
>> [root at client ~]# env | grep OCF
>> OCF_RESKEY_path=/vms_c
>> OCF_RESKEY_name=guest1
>> OCF_RESKEY_use_virsh=0
>>
>> 2013/11/25 emmanuel segura
>>
>>> Hello Paras
>>>
>>> I have CentOS 6; I don't know if it is different on Red Hat 5, but I saw that the script vm.sh calls the do_start function when the start parameter is given:
>>>
>>> do_start()
>>> {
>>>         if [ "$OCF_RESKEY_use_virsh" = "1" ]; then
>>>                 do_virsh_start $*
>>>                 return $?
>>>         fi
>>>
>>>         do_xm_start $*
>>>         return $?
>>> }
>>>
>>> I don't know why, because vm.sh uses virsh when you launch the script by hand :(
>>>
>>> 2013/11/25 Paras pradhan
>>>
>>>> Looks like use_virsh=0 has no effect.
>>>>
>>>> --
>>>> [root at cvtst3 ~]# export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c" ; OCF_RESKEY_use_virsh=0
>>>> [root at cvtst3 ~]# set -x
>>>> ++ printf '\033]0;%s@%s:%s\007' root cvtst3 '~'
>>>> [root at cvtst3 ~]# /usr/share/cluster/vm.sh start
>>>> + /usr/share/cluster/vm.sh start
>>>> Hypervisor: xen
>>>> Management tool: virsh
>>>> Hypervisor URI: xen:///
>>>> Migration URI format: xenmigr://target_host/
>>>> Virtual machine guest1 is error: failed to get domain 'guest1'
>>>> error: Domain not found: xenUnifiedDomainLookupByName
>>>>
>>>> virsh -c xen:/// start guest1
>>>> error: failed to get domain 'guest1'
>>>> error: Domain not found: xenUnifiedDomainLookupByName
>>>>
>>>> ++ printf '\033]0;%s@%s:%s\007' root cvtst3 '~'
>>>> [root at cvtst3 ~]# set +x
>>>> + set +x
>>>> ---
>>>>
>>>> -Paras.
>>>>
>>>> On Fri, Nov 22, 2013 at 5:22 PM, emmanuel segura wrote:
>>>>
>>>>> Hello Paras
>>>>>
>>>>> Stop the vm and retry to start it with the following commands, and if you get some error, show it:
>>>>>
>>>>> export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c" ; OCF_RESKEY_use_virsh=0
>>>>>
>>>>> set -x
>>>>> /usr/share/cluster/vm.sh start
>>>>> set +x
>>>>>
>>>>> 2013/11/22 Paras pradhan
>>>>>
>>>>>> I found the workaround to my issue. What I did is:
>>>>>>
>>>>>> Run the vm using xm, and then start it using clusvcadm. This works for me for the time being, but I am not sure what is causing this. This is what I did:
>>>>>>
>>>>>> xm create /vms_c/guest1
>>>>>> clusvcadm -e vm:guest1  (This detects that guest1 is up and quickly changes its status to success.)
>>>>>>
>>>>>> Although I used virt-install, it also creates a xen-format configuration file, and since use_virsh=0 it should be able to use this xen-format config file.
>>>>>> This is my vm configuration:
>>>>>>
>>>>>> ---
>>>>>> name = "guest1"
>>>>>> maxmem = 2048
>>>>>> memory = 512
>>>>>> vcpus = 1
>>>>>> #cpus="1-2"
>>>>>> bootloader = "/usr/bin/pygrub"
>>>>>> on_poweroff = "destroy"
>>>>>> on_reboot = "restart"
>>>>>> on_crash = "restart"
>>>>>> vfb = [ ]
>>>>>> disk = [ "tap:aio:/vms_c/guest1.img,xvda,w", "tap:aio:/vms_c/guest1-disk.img,xvdb,w" ]
>>>>>> vif = [ "rate=10MB/s,mac=00:16:3e:6b:be:71,bridge=xenbr0" ]
>>>>>> ---
>>>>>>
>>>>>> Thanks for your help, Emmanuel! Really appreciate it.
>>>>>>
>>>>>> -Paras.
>>>>>>
>>>>>> On Fri, Nov 22, 2013 at 11:10 AM, emmanuel segura wrote:
>>>>>>
>>>>>>> OK, but your vm doesn't start on the other nodes — I think because of configuration problems:
>>>>>>> ================================================================
>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: start on vm "guest1" returned 1 (generic error)
>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: #68: Failed to start vm:guest1; return value: 1
>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: Stopping service vm:guest1
>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service vm:guest1 is recovering
>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: #71: Relocating failed service vm:guest1
>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service vm:guest1 is stopped
>>>>>>> ================================================================
>>>>>>> In a few words, try on every cluster node:
>>>>>>>
>>>>>>> export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c"
>>>>>>>
>>>>>>> set -x
>>>>>>> /usr/share/cluster/vm.sh start
>>>>>>> /usr/share/cluster/vm.sh stop
>>>>>>>
>>>>>>> After you check that your vm can start and stop on every cluster node:
>>>>>>>
>>>>>>> /usr/share/cluster/vm.sh start
>>>>>>> /usr/share/cluster/vm.sh migrate name_of_a_cluster_node
>>>>>>>
>>>>>>> Can you show me your vm configuration under /vms_c?
>>>>>>>
>>>>>>> Thanks
>>>>>>> Emmanuel
>>>>>>>
>>>>>>> 2013/11/22 Paras pradhan
>>>>>>>
>>>>>>>> And also, to test, I made use_virsh=1 — same problem. The vm does not start up if the FD domains are offline.
>>>>>>>>
>>>>>>>> -Paras.
>>>>>>>>
>>>>>>>> On Fri, Nov 22, 2013 at 10:37 AM, Paras pradhan wrote:
>>>>>>>>
>>>>>>>>> Well, that seems theoretically correct. But right now my cluster has use_virsh=0, and I don't have any issue until my members on the failover domains are offline. So I am wondering what it is that clusvcadm -e is looking for when I don't use virsh.
>>>>>>>>>
>>>>>>>>> On Fri, Nov 22, 2013 at 10:05 AM, emmanuel segura wrote:
>>>>>>>>>
>>>>>>>>>> If you used virt-install, I think you need to use virsh; the cluster uses the xm xen command if you have use_virsh=0, and virsh if you have use_virsh=1, in your cluster config.
>>>>>>>>>>
>>>>>>>>>> 2013/11/22 Paras pradhan
>>>>>>>>>>
>>>>>>>>>>> I use virt-install to create virtual machines. Is there a way to debug why clusvcadm -e vm:guest1 is failing? vm.sh seems to use virsh, and my cluster.conf has use_virsh=0
>>>>>>>>>>>
>>>>>>>>>>> Thanks
>>>>>>>>>>>
>>>>>>>>>>> Paras.
>>>>>>>>>>>
>>>>>>>>>>> On Nov 21, 2013 5:53 PM, "emmanuel segura" wrote:
>>>>>>>>>>>
>>>>>>>>>>>> but did you configure your vm with xen tools or using virt-manager?
>>>>>>>>>>>>
>>>>>>>>>>>> 2013/11/22 Paras pradhan
>>>>>>>>>>>>
>>>>>>>>>>>>> Well no, I don't want to use virsh. But as we are debugging with virsh now, I found a strange issue.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I exported an xml file and imported it to all nodes. Ran:
>>>>>>>>>>>>>
>>>>>>>>>>>>> ---
>>>>>>>>>>>>> name="guest1" path="/vms_c"
>>>>>>>>>>>>>
>>>>>>>>>>>>> export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c"
>>>>>>>>>>>>>
>>>>>>>>>>>>> set -x
>>>>>>>>>>>>> /usr/share/cluster/vm.sh start
>>>>>>>>>>>>> set +x
>>>>>>>>>>>>> --
>>>>>>>>>>>>>
>>>>>>>>>>>>> The vm starts now. BUT from a cluster service — clusvcadm -e vm:guest1 — same error.
>>>>>>>>>>>>>
>>>>>>>>>>>>> So if I populate all my domains' config files to all my cluster nodes and make use_virsh=1, then the issue is resolved. But this is a lot of work for those who have hundreds of vms.
>>>>>>>>>>>>>
>>>>>>>>>>>>> vm.start uses virsh. Is there a way to tell it not to use virsh?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>> Paras.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 5:19 PM, emmanuel segura <emi2fast at gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> if you are using virsh to manage your vms, change this in your cluster.conf
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> from
>>>>>>>>>>>>>> use_virsh="0"
>>>>>>>>>>>>>> to
>>>>>>>>>>>>>> use_virsh="1"
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 2013/11/22 Paras pradhan
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I think I found the problem.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> virsh list --all does not show my vm. This is because it was created on another node, and another node has it. Now I want to start the service on a different node, on which it was not created, or where virsh list --all does not have an entry. Is it possible to create this entry using a xen config file? Looks like this is now a Xen issue rather than a linux-cluster issue. :)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Paras.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:58 PM, emmanuel segura <emi2fast at gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> 1: did you verify your xen live-migration configuration?
>>>>>>>>>>>>>>>> 2: where does your vm disk reside?
>>>>>>>>>>>>>>>> 3: can you see your vm defined on every cluster node with xm list?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> This is what I get:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Hypervisor: xen
>>>>>>>>>>>>>>>>> Management tool: virsh
>>>>>>>>>>>>>>>>> Hypervisor URI: xen:///
>>>>>>>>>>>>>>>>> Migration URI format: xenmigr://target_host/
>>>>>>>>>>>>>>>>> Virtual machine guest1 is error: failed to get domain 'guest1'
>>>>>>>>>>>>>>>>> error: Domain not found: xenUnifiedDomainLookupByName
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> virsh -c xen:/// start guest1
>>>>>>>>>>>>>>>>> error: failed to get domain 'guest1'
>>>>>>>>>>>>>>>>> error: Domain not found: xenUnifiedDomainLookupByName
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> ++ printf '\033]0;%s@%s:%s\007' root vtst3 '~'
>>>>>>>>>>>>>>>>> [root at cvtst3 ~]# set +x
>>>>>>>>>>>>>>>>> + set +x
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I am wondering why it failed to get the domain.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> -Paras.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:43 PM, emmanuel segura <emi2fast at gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> yes
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Well, it is guest1. Isn't it?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> max_restarts="0" name="guest1" path="/vms_c" recovery="restart" restart_expire_time="0" use_virsh="0"/>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> It is a vm service, if it matters.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> -Paras.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:22 PM, emmanuel segura <emi2fast at gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> use the service name you defined in your cluster.conf
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Says:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Running in test mode.
>>>>>>>>>>>>>>>>>>>>> No resource guest1 of type service found
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> -Paras.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:07 PM, emmanuel segura <emi2fast at gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> rg_test test /etc/cluster/cluster.conf start service guest1
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> My failover domain looks like this:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> ordered="1" restricted="0">
>>>>>>>>>>>>>>>>>>>>>>> name="vtst1" priority="1"/>
>>>>>>>>>>>>>>>>>>>>>>> name="vtst3" priority="2"/>
>>>>>>>>>>>>>>>>>>>>>>> name="vtst2" priority="3"/>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> I have a vm service that uses this failover domain. If my node vtst1 is offline, the service does not start on vtst3, which is 2nd in the priority.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> I tried to start it with: clusvcadm -e vm:guest1, and even with the -F and -m options.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> All I see is this error:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: start on vm "guest1" returned 1 (generic error)
>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: #68: Failed to start vm:guest1; return value: 1
>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: Stopping service vm:guest1
>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service vm:guest1 is recovering
>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: #71: Relocating failed service vm:guest1
>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service vm:guest1 is stopped
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> How do I debug?
>>>>>>>>>>>>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>>>>>>>>>>>> Paras.

-- 
esta es mi vida e me la vivo hasta que dios quiera

From pradhanparas at gmail.com  Thu Dec  5 18:36:32 2013
From: pradhanparas at gmail.com (Paras pradhan)
Date: Thu, 5 Dec 2013 12:36:32 -0600
Subject: [Linux-cluster] failover domain and service start
In-Reply-To:
References:
Message-ID:

Emmanuel, no. I was busy on some other things. I will test and let you know asap!

On Thu, Dec 5, 2013 at 12:26 PM, emmanuel segura wrote:

> Hello Paras
>
> Did you solve the problem?
>
> Thanks
> Emmanuel
>
> [...]
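[Editorial note for anyone replaying this thread: the earlier rg_test attempt failed with "No resource guest1 of type service found" because the resource is defined as a vm, not a service. If memory serves, rg_test takes the resource type as the argument after start/stop, so something like the following should exercise the right agent — treat it as a hedged sketch, with the config path assumed:]

```sh
# Test starting/stopping the vm resource outside rgmanager:
rg_test test /etc/cluster/cluster.conf start vm guest1
rg_test test /etc/cluster/cluster.conf stop vm guest1
```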
From teigland at redhat.com  Tue Dec 10 17:06:54 2013
From: teigland at redhat.com (David Teigland)
Date: Tue, 10 Dec 2013 12:06:54 -0500
Subject: [Linux-cluster] Add option SO_LINGER to dlm sctp socket when the other endpoint is down.
In-Reply-To: <52A15F33.803@cs.wisc.edu>
References: <20131113172449.GB17306@redhat.com> <20131119142141.GM3475@suse.de> <20131119165144.GA6885@redhat.com> <20131128165009.GD22733@suse.de> <20131202205512.GA18393@redhat.com> <529D0072.2060309@cs.wisc.edu> <52A15F33.803@cs.wisc.edu>
Message-ID: <20131210170654.GA14852@redhat.com>

On Thu, Dec 05, 2013 at 11:22:59PM -0600, Mike Christie wrote:
> I tested with the patch and did not see any regressions. I tried to hit
> the same problem the patch was made for and was not able to hit it with
> or without the patch. I get really fast shutdowns either way.

Thanks Mike. I've pushed the patch to the linux-dlm next branch.
Dave

From Mark.Vallevand at UNISYS.com  Thu Dec 12 15:54:05 2013
From: Mark.Vallevand at UNISYS.com (Vallevand, Mark K)
Date: Thu, 12 Dec 2013 09:54:05 -0600
Subject: [Linux-cluster] Beginner questions about clone resource agent IPaddr2.
Message-ID: <99C8B2929B39C24493377AC7A121E21FC5D09747F1@USEA-EXCH8.na.uis.unisys.com>

I have a few questions about the IPaddr2 resource agent used in a clone cluster.
They probably relate to the iptables CLUSTERIP feature, too. But, I hope
someone can answer them here.

What happens if one of the nodes in the clone cluster goes down for an
extended time? It seems to me that the iptables CLUSTERIP will continue to
work, but messages which hash to the absent node will simply be dropped by
all the other nodes. Is there any recovery for this? It seems to me that
you'd need to reconfigure the cluster to be n-1 nodes so that iptables
CLUSTERIP will hash correctly.

Is there a way to know which node an iptables CLUSTERIP rule will hash to?
Say the hash mode is 'sourceip' and I have the source IP. Can I know which
node would get messages from that source IP? I know that the cluster node
number and the iptables CLUSTERIP node number can be different. I'd like
to know which node so it can be 'prepared' for new traffic from a client
on that IP.

All the clients will be configured with the clone cluster IP. The client
traffic would be load-balanced over the cluster.

Thanks.

Regards.
Mark K Vallevand   Mark.Vallevand at Unisys.com
May you live in interesting times, may you come to the attention of
important people and may all your wishes come true.
THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY
MATERIAL and is thus for use only by the intended recipient. If you
received this in error, please contact the sender and delete the e-mail
and its attachments from all computers.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lists at alteeve.ca Tue Dec 17 20:58:10 2013
From: lists at alteeve.ca (Digimer)
Date: Tue, 17 Dec 2013 15:58:10 -0500
Subject: [Linux-cluster] Panic'ing rhel6 with 'echo c > /proc/sysrq-trigger' no longer stops the guest CPU, so recovery doesn't occur.
Message-ID: <52B0BAE2.8070307@alteeve.ca>

Hi all,

  In previous configs, I knew that I could panic a RHEL/CentOS KVM VM with
'echo c > /proc/sysrq-trigger' and the guest would halt entirely, allowing
vm.sh/rgmanager to detect the fault and reboot the guest.

  Now though, when trying it again recently, I noticed that the guest's
CPU sat pegged at 50% (2 vcpus, so probably one core pegged). This seems
to have prevented the failure from being detected, so the system was not
recovered.

Following here:

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s2-proc-dir-sys.html#s3-proc-sys-kernel

I tried 'echo 1 > /proc/sys/kernel/sysrq', I tried with selinux in
permissive and enforcing mode, and I tried with RHEL 6.4 and 6.5. In all
cases, the guest's vcpu stayed active and rgmanager was unable to detect
the fault.

I know that I can set 'echo 5 > /proc/sys/kernel/panic' and have RHEL
auto-reboot post panic, but I'd like to know if it's possible to have
rgmanager actually detect the crash when this is not configured. Is there
a way to tell RHEL specifically/Linux in general to cease all activity
when it panics?

Thanks!

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?
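A minimal sketch of the auto-reboot workaround Digimer mentions, run
inside the guest; it makes a panicked guest reset itself after five
seconds so the failure at least clears on its own, but it does not, by
itself, make rgmanager detect the hang. The timeout value is an example:

  # Persist the settings across reboots:
  echo "kernel.sysrq = 1" >> /etc/sysctl.conf
  echo "kernel.panic = 5" >> /etc/sysctl.conf
  sysctl -p

  # Trigger a test crash from inside the guest:
  echo c > /proc/sysrq-trigger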
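On Mark's CLUSTERIP questions above, a hedged sketch of the kind of rule
IPaddr2 manages for a clone; the address, MAC and node numbers are
made-up examples, and each node runs the same rule with its own
--local-node. The bucket a source IP lands in is a kernel hash seeded at
rule creation, so there is no simple offline formula for predicting which
node a given client maps to:

  iptables -I INPUT -d 10.0.0.100 -i eth0 -j CLUSTERIP --new \
      --hashmode sourceip --clustermac 01:00:5E:00:00:20 \
      --total-nodes 4 --local-node 1

  # If node 3 stays down, a surviving node can claim its hash bucket via
  # the proc interface instead of reconfiguring the cluster to n-1 nodes:
  echo "+3" > /proc/net/ipt_CLUSTERIP/10.0.0.100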
From pradhanparas at gmail.com Wed Dec 18 17:53:53 2013
From: pradhanparas at gmail.com (Paras pradhan)
Date: Wed, 18 Dec 2013 11:53:53 -0600
Subject: [Linux-cluster] failover domain and service start
In-Reply-To:
References:
Message-ID:

Emmanuel,

With export OCF_RESKEY_name="guest1" ; export OCF_RESKEY_path="/vms_c" ;
export OCF_RESKEY_use_virsh=0, I can start the vm using
/usr/share/cluster/vm.sh.

I am wondering how to make the changes to cluster.conf or other files so
that we can start the vm using clusvcadm.

-Thanks
Sorry for the delay.

Paras.

On Thu, Dec 5, 2013 at 12:36 PM, Paras pradhan wrote:

> Emmanuel, no. I was busy on some other things. I will test and let you
> know asap!
>
> On Thu, Dec 5, 2013 at 12:26 PM, emmanuel segura wrote:
>
>> Hello Paras
>>
>> did you solve the problem?
>>
>> 2013/11/25 emmanuel segura
>>
>>> Hello Paras
>>>
>>> Maybe i found the solution, in function validate_all we got
>>>
>>> if [ -z "$OCF_RESKEY_hypervisor" ] ||
>>>    [ "$OCF_RESKEY_hypervisor" = "auto" ]; then
>>>         export OCF_RESKEY_hypervisor="`virsh version | grep \"Running hypervisor:\" | awk '{print $3}' | tr A-Z a-z`"
>>>         if [ -z "$OCF_RESKEY_hypervisor" ]; then
>>>                 ocf_log err "Could not determine Hypervisor"
>>>                 return $OCF_ERR_ARGS
>>>         fi
>>>         echo Hypervisor: $OCF_RESKEY_hypervisor
>>> fi
>>>
>>> #
>>> # Xen hypervisor only for when use_virsh = 0.
>>> #
>>> if [ "$OCF_RESKEY_use_virsh" = "0" ]; then
>>>         if [ "$OCF_RESKEY_hypervisor" != "xen" ]; then
>>>                 ocf_log err "Cannot use $OCF_RESKEY_hypervisor hypervisor without using virsh"
>>>                 return $OCF_ERR_ARGS
>>>         fi
>>>
>>> with these environment variables set, when i tested by hand the agent
>>> uses xm commands:
>>>
>>> env | grep OCF
>>> OCF_RESKEY_hypervisor=xen
>>> OCF_RESKEY_path=/vms_c
>>> OCF_RESKEY_name=guest1
>>> OCF_RESKEY_use_virsh=0
>>>
>>> [root at client ~]# /usr/share/cluster/vm.sh status
>>> Management tool: xm
>>> Cannot find 'xm'; is it installed?
>>> [vm.sh] Cannot find 'xm'; is it installed?
>>>
>>> I don't have xen installed to test it
>>>
>>> 2013/11/25 Paras pradhan
>>>
>>>> Looks like use_virsh=0 has no effect.
>>>>
>>>> [root at cvtst3 ~]# export OCF_RESKEY_name="guest1" ;
>>>> OCF_RESKEY_path="/vms_c" ; OCF_RESKEY_use_virsh=0
>>>> [root at cvtst3 ~]# /usr/share/cluster/vm.sh start
>>>> Hypervisor: xen
>>>> Management tool: virsh
>>>> Hypervisor URI: xen:///
>>>> Migration URI format: xenmigr://target_host/
>>>> Virtual machine guest1 is error: failed to get domain 'guest1'
>>>> error: Domain not found: xenUnifiedDomainLookupByName
>>>>
>>>> virsh -c xen:/// start guest1
>>>> error: failed to get domain 'guest1'
>>>> error: Domain not found: xenUnifiedDomainLookupByName
>>>>
>>>> On Fri, Nov 22, 2013 at 5:22 PM, emmanuel segura wrote:
>>>>
>>>>> Stop the vm and retry to start the vm with following commands and
>>>>> if you got some error show it
>>>>>
>>>>> export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c" ;
>>>>> OCF_RESKEY_use_virsh=0
>>>>>
>>>>> set -x
>>>>> /usr/share/cluster/vm.sh start
>>>>> set +x
>>>>>
>>>>> 2013/11/22 Paras pradhan
>>>>>
>>>>>> I found the workaround to my issue. What i did is:
>>>>>>
>>>>>> run the vm using xm and then start using clusvcadm. This works for
>>>>>> me for the time being but I am not sure what is causing this. This
>>>>>> is what I did:
>>>>>>
>>>>>> xm create /vms_c/guest1
>>>>>> clusvcadm -e vm:guest1  (This detects that guest1 is up and
>>>>>> quickly changes its status to success)
>>>>>>
>>>>>> Although i used virt-install, it also creates a xen format
>>>>>> configuration file and since use_virsh=0 it should be able to use
>>>>>> this xen format config file. This is my vm configuration:
>>>>>>
>>>>>> name = "guest1"
>>>>>> maxmem = 2048
>>>>>> memory = 512
>>>>>> vcpus = 1
>>>>>> #cpus="1-2"
>>>>>> bootloader = "/usr/bin/pygrub"
>>>>>> on_poweroff = "destroy"
>>>>>> on_reboot = "restart"
>>>>>> on_crash = "restart"
>>>>>> vfb = [ ]
>>>>>> disk = [ "tap:aio:/vms_c/guest1.img,xvda,w",
>>>>>>          "tap:aio:/vms_c/guest1-disk.img,xvdb,w" ]
>>>>>> vif = [ "rate=10MB/s,mac=00:16:3e:6b:be:71,bridge=xenbr0" ]
>>>>>>
>>>>>> Thanks for your help Emmanuel! Really appreciate it.
>>>>>>
>>>>>> -Paras.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
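Pulling emmanuel's findings together, a sketch of exercising vm.sh by
hand with the xm backend rather than virsh, assuming the xen management
tools are present on the node; OCF_RESKEY_hypervisor is set explicitly to
skip the virsh-based autodetection quoted above:

  export OCF_RESKEY_name="guest1"
  export OCF_RESKEY_path="/vms_c"
  export OCF_RESKEY_use_virsh=0
  export OCF_RESKEY_hypervisor=xen   # skip virsh autodetection in validate_all
  /usr/share/cluster/vm.sh start
  /usr/share/cluster/vm.sh status
  /usr/share/cluster/vm.sh stop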
From emi2fast at gmail.com Wed Dec 18 18:24:25 2013
From: emi2fast at gmail.com (emmanuel segura)
Date: Wed, 18 Dec 2013 19:24:25 +0100
Subject: [Linux-cluster] failover domain and service start
In-Reply-To:
References:
Message-ID:

Increment the config version of cluster.conf and run
ccs_tool update /etc/cluster/cluster.conf

2013/12/18 Paras pradhan

> Emmanuel,
>
> With export OCF_RESKEY_name="guest1" ; export OCF_RESKEY_path="/vms_c" ;
> export OCF_RESKEY_use_virsh=0, I can start the vm using
> /usr/share/cluster/vm.sh.
>
> I am wondering how to make the changes to cluster.conf or other files so
> that we can start the vm using clusvcadm.
>
> -Thanks
> Sorry for the delay.
>
> Paras.
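A sketch of that update step, assuming the cluster.conf header currently
reads config_version="41" (the number is an example; check your own file
first):

  grep config_version /etc/cluster/cluster.conf
  sed -i 's/config_version="41"/config_version="42"/' /etc/cluster/cluster.conf
  ccs_tool update /etc/cluster/cluster.conf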
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pradhanparas at gmail.com Wed Dec 18 18:32:34 2013
From: pradhanparas at gmail.com (Paras pradhan)
Date: Wed, 18 Dec 2013 12:32:34 -0600
Subject: [Linux-cluster] failover domain and service start
In-Reply-To:
References:
Message-ID:

The only parameter I don't have is: hypervisor="xen"

Does it matter? This is what I have:

-Paras.
On Wed, Dec 18, 2013 at 12:24 PM, emmanuel segura wrote:

> Increment the config version of cluster.conf and run
> ccs_tool update /etc/cluster/cluster.conf
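The config element itself was scrubbed from Paras's message, but its
attributes appear in the quoted thread. A hedged sketch of the vm element
with hypervisor="xen" added, using only the attributes visible earlier in
the thread; any other attributes in the real config are unknown:

  <vm name="guest1" path="/vms_c" hypervisor="xen" use_virsh="0"
      max_restarts="0" recovery="restart" restart_expire_time="0"/>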
>>>>>>>> >>>>>>>> -- >>>>>>>> [root at cvtst3 ~]# export OCF_RESKEY_name="guest1" ; >>>>>>>> OCF_RESKEY_path="/vms_c" ; OCF_RESKEY_use_virsh=0 >>>>>>>> [root at cvtst3 ~]# set -x >>>>>>>> ++ printf '\033]0;%s@%s:%s\007' root cvtst3 '~' >>>>>>>> [root at cvtst3 ~]# /usr/share/cluster/vm.sh start >>>>>>>> + /usr/share/cluster/vm.sh start >>>>>>>> Hypervisor: xen >>>>>>>> Management tool: virsh >>>>>>>> Hypervisor URI: xen:/// >>>>>>>> Migration URI format: xenmigr://target_host/ >>>>>>>> Virtual machine guest1 is error: failed to get domain 'guest1' >>>>>>>> error: Domain not found: xenUnifiedDomainLookupByName >>>>>>>> >>>>>>>> virsh -c xen:/// start guest1 >>>>>>>> error: failed to get domain 'guest1' >>>>>>>> error: Domain not found: xenUnifiedDomainLookupByName >>>>>>>> >>>>>>>> ++ printf '\033]0;%s@%s:%s\007' root cvtst3 '~' >>>>>>>> [root at cvtst3 ~]# set +x >>>>>>>> + set +x >>>>>>>> --- >>>>>>>> >>>>>>>> >>>>>>>> -Paras. >>>>>>>> >>>>>>>> >>>>>>>> On Fri, Nov 22, 2013 at 5:22 PM, emmanuel segura < >>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hellos Paras >>>>>>>>> >>>>>>>>> Stop the vm and retry to start the vm with following commands and >>>>>>>>> if you got some error show it >>>>>>>>> >>>>>>>>> export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c" ; >>>>>>>>> OCF_RESKEY_use_virsh=0 >>>>>>>>> >>>>>>>>> >>>>>>>>> set -x >>>>>>>>> /usr/share/cluster/vm.sh start >>>>>>>>> set +x >>>>>>>>> >>>>>>>>> >>>>>>>>> 2013/11/22 Paras pradhan >>>>>>>>> >>>>>>>>>> I found the workaround to my issue. What i did is: >>>>>>>>>> >>>>>>>>>> run the vm using xm and then start using cluvscadm. This works >>>>>>>>>> for me for the time being but I am not sure what is causing this. This is >>>>>>>>>> what I did >>>>>>>>>> >>>>>>>>>> xm create /vms_c/guest1 >>>>>>>>>> clusvcadm -e vm: guest1 ( This detects that guest1 is up and >>>>>>>>>> quickly changes its status to success) >>>>>>>>>> >>>>>>>>>> Although i used virt-install, it also create a xem format >>>>>>>>>> configuration file and since use_virsh=0 it should be able to use this xen >>>>>>>>>> format config file. This is my vm configuration: >>>>>>>>>> >>>>>>>>>> --- >>>>>>>>>> name = "guest1" >>>>>>>>>> maxmem = 2048 >>>>>>>>>> memory = 512 >>>>>>>>>> vcpus = 1 >>>>>>>>>> #cpus="1-2" >>>>>>>>>> bootloader = "/usr/bin/pygrub" >>>>>>>>>> on_poweroff = "destroy" >>>>>>>>>> on_reboot = "restart" >>>>>>>>>> on_crash = "restart" >>>>>>>>>> vfb = [ ] >>>>>>>>>> disk = [ "tap:aio:/vms_c/guest1.img,xvda,w", >>>>>>>>>> "tap:aio:/vms_c/guest1-disk.img,xvdb,w" ] >>>>>>>>>> vif = [ "rate=10MB/s,mac=00:16:3e:6b:be:71,bridge=xenbr0" ] >>>>>>>>>> >>>>>>>>>> --- >>>>>>>>>> >>>>>>>>>> Thanks for you help Emmanuel ! Really appreciate it. >>>>>>>>>> >>>>>>>>>> -Paras. 
>>>>>>>>>> On Fri, Nov 22, 2013 at 11:10 AM, emmanuel segura <emi2fast at gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> OK, but your VM doesn't start on the other nodes, I think, because of configuration problems:
>>>>>>>>>>> ================================================================
>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: start on vm "guest1" returned 1 (generic error)
>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: #68: Failed to start vm:guest1; return value: 1
>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: Stopping service vm:guest1
>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service vm:guest1 is recovering
>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: #71: Relocating failed service vm:guest1
>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service vm:guest1 is stopped
>>>>>>>>>>> ================================================================
>>>>>>>>>>> In a few words, try on every cluster node:
>>>>>>>>>>>
>>>>>>>>>>> export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c"
>>>>>>>>>>>
>>>>>>>>>>> set -x
>>>>>>>>>>> /usr/share/cluster/vm.sh start
>>>>>>>>>>> /usr/share/cluster/vm.sh stop
>>>>>>>>>>>
>>>>>>>>>>> After you check that your VM can start and stop on every cluster node:
>>>>>>>>>>>
>>>>>>>>>>> /usr/share/cluster/vm.sh start
>>>>>>>>>>> /usr/share/cluster/vm.sh migrate name_of_a_cluster_node
>>>>>>>>>>>
>>>>>>>>>>> Can you show me your VM configuration under /vms_c?
>>>>>>>>>>>
>>>>>>>>>>> Thanks
>>>>>>>>>>> Emmanuel
>>>>>>>>>>>
>>>>>>>>>>> 2013/11/22 Paras pradhan
>>>>>>>>>>>
>>>>>>>>>>>> And also, to test, I made use_virsh=1; same problem. The VM does not start up if the FD members are offline.
>>>>>>>>>>>>
>>>>>>>>>>>> -Paras.
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Nov 22, 2013 at 10:37 AM, Paras pradhan <pradhanparas at gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Well, that seems theoretically correct. But right now my cluster has use_virsh=0 and I don't have any issue until my members on the failover domains are offline. So I am wondering what it is that clusvcadm -e is looking for when I don't use virsh.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Fri, Nov 22, 2013 at 10:05 AM, emmanuel segura <emi2fast at gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> If you used virt-install, I think you need to use virsh; the cluster uses the xm Xen command if you have use_virsh=0, and virsh if you have use_virsh=1 in your cluster config.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 2013/11/22 Paras pradhan
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I use virt-install to create virtual machines. Is there a way to debug why clusvcadm -e vm:guest1 is failing? vm.sh seems to use virsh, and my cluster.conf has use_virsh=0.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Paras.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Nov 21, 2013 5:53 PM, "emmanuel segura" <emi2fast at gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> But did you configure your VM with the Xen tools or using virt-manager?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> 2013/11/22 Paras pradhan
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Well, no, I don't want to use virsh. But as we are debugging with virsh now, I found a strange issue.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I exported an XML file and imported it to all nodes. Ran:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> ---
>>>>>>>>>>>>>>>>> name="guest1" path="/vms_c"
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c"
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> set -x
>>>>>>>>>>>>>>>>> /usr/share/cluster/vm.sh start
>>>>>>>>>>>>>>>>> set +x
>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> The VM starts now. BUT from a cluster service, clusvcadm -e vm:guest1, same error.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> So if I populate all my domains' config files to all my cluster nodes and set use_virsh=1, then the issue is resolved. But this is a lot of work for those who have hundreds of VMs.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> vm.sh's start path uses virsh. Is there a way to tell it not to use virsh?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>>>>>> Paras.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 5:19 PM, emmanuel segura <emi2fast at gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> If you are using virsh to manage your VMs, change this in your cluster.conf:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> from
>>>>>>>>>>>>>>>>>> use_virsh="0"
>>>>>>>>>>>>>>>>>> to
>>>>>>>>>>>>>>>>>> use_virsh="1"
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> 2013/11/22 Paras pradhan
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I think I found the problem.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> virsh list --all does not show my VM. This is because it was created on another node, and that node has it. Now I want to start the service on a different node where it was not created, i.e. where virsh list --all has no entry for it. Is it possible to create this entry using a Xen config file? Looks like this is now a Xen issue rather than a linux-cluster issue. :)
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Paras.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:58 PM, emmanuel segura <emi2fast at gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> 1: Did you verify your Xen live-migration configuration?
>>>>>>>>>>>>>>>>>>>> 2: Where does your VM disk reside?
>>>>>>>>>>>>>>>>>>>> 3: Can you see your VM defined on every cluster node with xm list?
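Since virsh only lists domains that were defined on that particular host, a guest created on one node has to be registered on the others before virsh can start it there. A minimal sketch, assuming a hypothetical dump file /tmp/guest1.xml and the node name cvtst3:

# On the node that already knows the domain:
virsh -c xen:/// dumpxml guest1 > /tmp/guest1.xml
scp /tmp/guest1.xml cvtst3:/tmp/

# On the other node, register the domain and verify it is listed:
ssh cvtst3 'virsh -c xen:/// define /tmp/guest1.xml'
ssh cvtst3 'virsh -c xen:/// list --all'   # guest1 should now appear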
>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> This is what I get:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Hypervisor: xen
>>>>>>>>>>>>>>>>>>>>> Management tool: virsh
>>>>>>>>>>>>>>>>>>>>> Hypervisor URI: xen:///
>>>>>>>>>>>>>>>>>>>>> Migration URI format: xenmigr://target_host/
>>>>>>>>>>>>>>>>>>>>> Virtual machine guest1 is error: failed to get domain 'guest1'
>>>>>>>>>>>>>>>>>>>>> error: Domain not found: xenUnifiedDomainLookupByName
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> virsh -c xen:/// start guest1
>>>>>>>>>>>>>>>>>>>>> error: failed to get domain 'guest1'
>>>>>>>>>>>>>>>>>>>>> error: Domain not found: xenUnifiedDomainLookupByName
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> ++ printf '\033]0;%s@%s:%s\007' root vtst3 '~'
>>>>>>>>>>>>>>>>>>>>> [root at cvtst3 ~]# set +x
>>>>>>>>>>>>>>>>>>>>> + set +x
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> I am wondering why it failed to get the domain.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> -Paras.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:43 PM, emmanuel segura <emi2fast at gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> yes
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Well, it is guest1. Isn't it?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> <vm ... max_restarts="0" name="guest1" path="/vms_c" recovery="restart" restart_expire_time="0" use_virsh="0"/>
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> It is a vm service, if it matters.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> -Paras.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:22 PM, emmanuel segura <emi2fast at gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Use the service name you defined in your cluster.conf.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> It says:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Running in test mode.
>>>>>>>>>>>>>>>>>>>>>>>>> No resource guest1 of type service found
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> -Paras.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:07 PM, emmanuel segura <emi2fast at gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> rg_test test /etc/cluster/cluster.conf start service guest1
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> My failover domain looks like this:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> <failoverdomain ... ordered="1" restricted="0">
>>>>>>>>>>>>>>>>>>>>>>>>>>>   ...
>>>>>>>>>>>>>>>>>>>>>>>>>>> </failoverdomain>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> I have a vm service that uses this failover domain. If my node vtst1 is offline, the service does not start on vtst3, which is second in the priority.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> I tried to start it with: clusvcadm -e vm:guest1, and even with the -F and -m options.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> All I see is this error:
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: start on vm "guest1" returned 1 (generic error)
>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: #68: Failed to start vm:guest1; return value: 1
>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: Stopping service vm:guest1
>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service vm:guest1 is recovering
>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: #71: Relocating failed service vm:guest1
>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service vm:guest1 is stopped
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> How do I debug?
>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>>>>>>>>>>>>>>>> Paras.
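The "No resource guest1 of type service found" answer above suggests the resource was addressed with the wrong type: guest1 is defined as a <vm> resource, so the type argument to rg_test would plausibly be vm rather than service. A sketch, not verified against this cluster:

rg_test test /etc/cluster/cluster.conf start vm guest1
rg_test test /etc/cluster/cluster.conf status vm guest1
rg_test test /etc/cluster/cluster.conf stop vm guest1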
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From emi2fast at gmail.com  Wed Dec 18 19:13:04 2013
From: emi2fast at gmail.com (emmanuel segura)
Date: Wed, 18 Dec 2013 20:13:04 +0100
Subject: [Linux-cluster] failover domain and service start
In-Reply-To: 
References: 
Message-ID: 

From the vm.sh script I saw that it tries to discover which hypervisor you are using; with hypervisor="xen" you force the script to use Xen.

2013/12/18 Paras pradhan

> The only parameter I don't have is: hypervisor="xen"
>
> Does it matter?
>
> This is what I have:
>
> <vm ... name="guest1" path="/vms_c" recovery="restart" restart_expire_time="0" use_virsh="0"/>
>
> -Paras.
>
> On Wed, Dec 18, 2013 at 12:24 PM, emmanuel segura wrote:
>
>> Increment the config version of cluster.conf and run ccs_tool update /etc/cluster/cluster.conf.
>>
>> 2013/12/18 Paras pradhan
>>
>>> Emmanuel,
>>>
>>> With export OCF_RESKEY_name="guest1" ; export OCF_RESKEY_path="/vms_c" ; export OCF_RESKEY_use_virsh=0
>>>
>>> I can start the VM using: /usr/share/cluster/vm.sh
>>>
>>> I am wondering how to make the changes to cluster.conf or other files so that we can start the VM using clusvcadm.
>>>
>>> -Thanks
>>> Sorry for the delay.
>>>
>>> Paras.
>>>
>>> On Thu, Dec 5, 2013 at 12:36 PM, Paras pradhan wrote:
>>>
>>>> Emmanuel, no. I was busy with some other things. I will test and let you know ASAP!
>>>>
>>>> On Thu, Dec 5, 2013 at 12:26 PM, emmanuel segura wrote:
>>>>
>>>>> Hello Paras
>>>>>
>>>>> Did you solve the problem?
>>>>>
>>>>> Thanks
>>>>> Emmanuel
>>>>>
>>>>> 2013/11/25 emmanuel segura
>>>>>
>>>>>> Hello Paras
>>>>>>
>>>>>> Maybe I found the solution. In the function validate_all we have:
>>>>>>
>>>>>> if [ -z "$OCF_RESKEY_hypervisor" ] ||
>>>>>>    [ "$OCF_RESKEY_hypervisor" = "auto" ]; then
>>>>>>         export OCF_RESKEY_hypervisor="`virsh version | grep \"Running hypervisor:\" | awk '{print $3}' | tr A-Z a-z`"
>>>>>>         if [ -z "$OCF_RESKEY_hypervisor" ]; then
>>>>>>                 ocf_log err "Could not determine Hypervisor"
>>>>>>                 return $OCF_ERR_ARGS
>>>>>>         fi
>>>>>>         echo Hypervisor: $OCF_RESKEY_hypervisor
>>>>>> fi
>>>>>>
>>>>>> #
>>>>>> # Xen hypervisor only for when use_virsh = 0.
>>>>>> #
>>>>>> if [ "$OCF_RESKEY_use_virsh" = "0" ]; then
>>>>>>         if [ "$OCF_RESKEY_hypervisor" != "xen" ]; then
>>>>>>                 ocf_log err "Cannot use $OCF_RESKEY_hypervisor hypervisor without using virsh"
>>>>>>                 return $OCF_ERR_ARGS
>>>>>>         fi
>>>>>>
>>>>>> With the following environment variables, when I tested by hand, the agent uses the xm commands:
>>>>>>
>>>>>> env | grep OCF
>>>>>> OCF_RESKEY_hypervisor=xen
>>>>>> OCF_RESKEY_path=/vms_c
>>>>>> OCF_RESKEY_name=guest1
>>>>>> OCF_RESKEY_use_virsh=0
>>>>>>
>>>>>> [root at client ~]# /usr/share/cluster/vm.sh status
>>>>>> Management tool: xm
>>>>>> Cannot find 'xm'; is it installed?
>>>>>> [vm.sh] Cannot find 'xm'; is it installed?
>>>>>>
>>>>>> (I don't have Xen installed to test it.)
>>>>>>
>>>>>> if [ -n "$OCF_RESKEY_xmlfile" ]; then
>>>>>>         ocf_log err "Cannot use xmlfile if use_virsh is set to 0"
>>>>>>         return $OCF_ERR_ARGS
>>>>>> fi
>>>>>>
>>>>>> 2013/11/25 emmanuel segura
>>>>>>
>>>>>>> Hello Paras
>>>>>>>
>>>>>>> The export command was missing in front of the variables; the correct way is this:
>>>>>>>
>>>>>>> export OCF_RESKEY_name="guest1" ; export OCF_RESKEY_path="/vms_c" ; export OCF_RESKEY_use_virsh=0
>>>>>>> [root at client ~]# env | grep OCF
>>>>>>> OCF_RESKEY_path=/vms_c
>>>>>>> OCF_RESKEY_name=guest1
>>>>>>> OCF_RESKEY_use_virsh=0
>>>>>>>
>>>>>>> 2013/11/25 emmanuel segura
>>>>>>>
>>>>>>>> Hello Paras
>>>>>>>>
>>>>>>>> I have CentOS 6; I don't know if it is different on Red Hat 5, but I saw that the script vm.sh calls the do_start function when the start parameter is given:
>>>>>>>>
>>>>>>>> do_start()
>>>>>>>> {
>>>>>>>>         if [ "$OCF_RESKEY_use_virsh" = "1" ]; then
>>>>>>>>                 do_virsh_start $*
>>>>>>>>                 return $?
>>>>>>>>         fi
>>>>>>>>
>>>>>>>>         do_xm_start $*
>>>>>>>>         return $?
>>>>>>>> }
>>>>>>>>
>>>>>>>> I don't know why, because vm.sh uses virsh when you launch the script by hand :(
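Pulling the pieces of this exchange together: a minimal sketch of driving the agent by hand with the same environment rgmanager would pass, forcing the Xen/xm code path so the virsh-based autodetection in validate_all is skipped. The values are the ones used in this thread:

export OCF_RESKEY_name="guest1"
export OCF_RESKEY_path="/vms_c"
export OCF_RESKEY_use_virsh=0
export OCF_RESKEY_hypervisor="xen"   # skip the virsh-based autodetection

set -x
/usr/share/cluster/vm.sh start
/usr/share/cluster/vm.sh status
set +x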
>>>>>>>> 2013/11/25 Paras pradhan
>>>>>>>>
>>>>>>>>> [...]

--
esta es mi vida e me la vivo hasta que dios quiera
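For completeness, a sketch of the cluster.conf change being discussed: the vm resource with use_virsh="0" plus an explicit hypervisor="xen", followed by the config push. The config_version="42", cluster name, and domain attribute values are placeholders, not values from the thread:

<cluster name="mycluster" config_version="42">
  ...
  <vm autostart="1" domain="myfd" exclusive="0" max_restarts="0"
      name="guest1" path="/vms_c" recovery="restart"
      restart_expire_time="0" use_virsh="0" hypervisor="xen"/>
  ...
</cluster>

# After bumping config_version, push the new config to the members:
ccs_tool update /etc/cluster/cluster.conf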
mi vida e me la vivo hasta que dios quiera
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pradhanparas at gmail.com  Wed Dec 18 22:18:37 2013
From: pradhanparas at gmail.com (Paras pradhan)
Date: Wed, 18 Dec 2013 16:18:37 -0600
Subject: [Linux-cluster] failover domain and service start
In-Reply-To: 
References: 
Message-ID: 

Added, but same problem. The VM does not start.

-Paras.

On Wed, Dec 18, 2013 at 1:13 PM, emmanuel segura <emi2fast at gmail.com> wrote:

> From the vm.sh script I saw that it tries to discover which hypervisor you
> are using; with hypervisor="xen" you force the script to use Xen.
>
> 2013/12/18 Paras pradhan
>
>> The only parameter I don't have is: hypervisor="xen"
>>
>> Does it matter?
>>
>> This is what I have:
>>
>> <vm ... name="guest1" path="/vms_c" recovery="restart" restart_expire_time="0" use_virsh="0"/>
>>
>> -Paras.
>>
>> [...]
-- >>>>>>>>>>>>>>>>>>>> esta es mi vida e me la vivo hasta que dios quiera >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>> Linux-cluster mailing list >>>>>>>>>>>>>>>>>>>> Linux-cluster at redhat.com >>>>>>>>>>>>>>>>>>>> https://www.redhat.com/mailman/listinfo/linux-cluster >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>> Linux-cluster mailing list >>>>>>>>>>>>>>>>>>> Linux-cluster at redhat.com >>>>>>>>>>>>>>>>>>> https://www.redhat.com/mailman/listinfo/linux-cluster >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>> esta es mi vida e me la vivo hasta que dios quiera >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>> Linux-cluster mailing list >>>>>>>>>>>>>>>>>> Linux-cluster at redhat.com >>>>>>>>>>>>>>>>>> https://www.redhat.com/mailman/listinfo/linux-cluster >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>> Linux-cluster mailing list >>>>>>>>>>>>>>>>> Linux-cluster at redhat.com >>>>>>>>>>>>>>>>> https://www.redhat.com/mailman/listinfo/linux-cluster >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>> esta es mi vida e me la vivo hasta que dios quiera >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>> Linux-cluster mailing list >>>>>>>>>>>>>>>> Linux-cluster at redhat.com >>>>>>>>>>>>>>>> https://www.redhat.com/mailman/listinfo/linux-cluster >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> -- >>>>>>>>>>>>>> Linux-cluster mailing list >>>>>>>>>>>>>> Linux-cluster at redhat.com >>>>>>>>>>>>>> https://www.redhat.com/mailman/listinfo/linux-cluster >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> -- >>>>>>>>>>>>> esta es mi vida e me la vivo hasta que dios quiera >>>>>>>>>>>>> >>>>>>>>>>>>> -- >>>>>>>>>>>>> Linux-cluster mailing list >>>>>>>>>>>>> Linux-cluster at redhat.com >>>>>>>>>>>>> https://www.redhat.com/mailman/listinfo/linux-cluster >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> -- >>>>>>>>>>>> Linux-cluster mailing list >>>>>>>>>>>> Linux-cluster at redhat.com >>>>>>>>>>>> https://www.redhat.com/mailman/listinfo/linux-cluster >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> esta es mi vida e me la vivo hasta que dios quiera >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> Linux-cluster mailing list >>>>>>>>>>> Linux-cluster at redhat.com >>>>>>>>>>> https://www.redhat.com/mailman/listinfo/linux-cluster >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> Linux-cluster mailing list >>>>>>>>>> Linux-cluster at redhat.com >>>>>>>>>> https://www.redhat.com/mailman/listinfo/linux-cluster >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> esta es mi vida e me la vivo hasta que dios quiera >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> esta es mi vida e me la vivo hasta que dios quiera >>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> esta es mi vida e me la vivo hasta que dios quiera >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> esta es mi vida e me la vivo hasta que dios quiera >>>>>> >>>>>> -- >>>>>> Linux-cluster mailing list >>>>>> Linux-cluster at redhat.com >>>>>> https://www.redhat.com/mailman/listinfo/linux-cluster >>>>>> >>>>> >>>>> >>>> >>>> -- >>>> Linux-cluster mailing list >>>> Linux-cluster at redhat.com >>>> https://www.redhat.com/mailman/listinfo/linux-cluster >>>> >>> >>> >>> >>> -- >>> esta es mi vida e me la vivo hasta que dios quiera 
From emi2fast at gmail.com  Wed Dec 18 22:39:36 2013
From: emi2fast at gmail.com (emmanuel segura)
Date: Wed, 18 Dec 2013 23:39:36 +0100
Subject: [Linux-cluster] failover domain and service start
In-Reply-To:
References:
Message-ID:

can you show the log?

2013/12/18 Paras pradhan wrote:
> Added but same problem. vm does not start.
>
> -Paras.

2013/12/18 emmanuel segura wrote:
> from the script vm.sh i saw it tries to discover the hypervisor you are
> using; with hypervisor="xen" you force the script to use xen

2013/12/18 Paras pradhan wrote:
> The only parameter I don't have is: hypervisor="xen"
>
> Does it matter?
>
> This is what i have:
>
>     <vm ... name="guest1" path="/vms_c" recovery="restart"
>         restart_expire_time="0" use_virsh="0"/>
>
> -Paras.

On Wed, Dec 18, 2013 at 12:24 PM, emmanuel segura wrote:
> Increment the config version of cluster.conf and ccs_tool update
> /etc/cluster/cluster.conf

2013/12/18 Paras pradhan wrote:
> Emmanuel,
>
> With export OCF_RESKEY_name="guest1" ; export OCF_RESKEY_path="/vms_c" ;
> export OCF_RESKEY_use_virsh=0
> I can start the vm using /usr/share/cluster/vm.sh
>
> I am wondering how to make the changes to cluster.conf or other files so
> that we can start the vm using clusvcadm.
>
> -Thanks, sorry for the delay.
>
> Paras.

On Thu, Dec 5, 2013 at 12:36 PM, Paras pradhan wrote:
> Emmanuel, no. I was busy on some other things. I will test and let you
> know asap!

On Thu, Dec 5, 2013 at 12:26 PM, emmanuel segura wrote:
> Hello Paras
>
> did you solve the problem?
>
> Thanks
> Emmanuel
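Taken together, the two suggestions above amount to something like the
following sketch. The config_version values and the full <vm/> attribute
list here are illustrative assumptions, not the poster's actual
cluster.conf:

    # 1. On one node, edit /etc/cluster/cluster.conf:
    #    - add hypervisor="xen" to the <vm/> resource, e.g.
    #      <vm hypervisor="xen" max_restarts="0" name="guest1" path="/vms_c"
    #          recovery="restart" restart_expire_time="0" use_virsh="0"/>
    #    - bump the version in the header, e.g.
    #      <cluster name="test" config_version="42">  becomes  config_version="43"

    # 2. Propagate the updated config to all cluster members:
    ccs_tool update /etc/cluster/cluster.conf

    # 3. Verify that every node reports the new version:
    cman_tool version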
2013/11/25 emmanuel segura wrote:
> Hello Paras
>
> Maybe i found the solution: in function validate_all we have
>
>     if [ -z "$OCF_RESKEY_hypervisor" ] ||
>        [ "$OCF_RESKEY_hypervisor" = "auto" ]; then
>             export OCF_RESKEY_hypervisor="`virsh version | grep \"Running hypervisor:\" | awk '{print $3}' | tr A-Z a-z`"
>             if [ -z "$OCF_RESKEY_hypervisor" ]; then
>                     ocf_log err "Could not determine Hypervisor"
>                     return $OCF_ERR_ARGS
>             fi
>             echo Hypervisor: $OCF_RESKEY_hypervisor
>     fi
>
>     #
>     # Xen hypervisor only for when use_virsh = 0.
>     #
>     if [ "$OCF_RESKEY_use_virsh" = "0" ]; then
>             if [ "$OCF_RESKEY_hypervisor" != "xen" ]; then
>                     ocf_log err "Cannot use $OCF_RESKEY_hypervisor hypervisor without using virsh"
>                     return $OCF_ERR_ARGS
>             fi
>
> With the following environment variables, when i tested by hand, the
> agent uses the xm commands:
>
>     env | grep OCF
>     OCF_RESKEY_hypervisor=xen
>     OCF_RESKEY_path=/vms_c
>     OCF_RESKEY_name=guest1
>     OCF_RESKEY_use_virsh=0
>
>     [root at client ~]# /usr/share/cluster/vm.sh status
>     Management tool: xm
>     Cannot find 'xm'; is it installed?
>     [vm.sh] Cannot find 'xm'; is it installed?
>
> I don't have xen installed to test it.
>
>     if [ -n "$OCF_RESKEY_xmlfile" ]; then
>             ocf_log err "Cannot use xmlfile if use_virsh is set to 0"
>             return $OCF_ERR_ARGS
>     fi

2013/11/25 emmanuel segura wrote:
> Hello Paras
>
> The export command is missing in front of the variables; the correct way
> is this:
>
>     export OCF_RESKEY_name="guest1" ; export OCF_RESKEY_path="/vms_c" ; export OCF_RESKEY_use_virsh=0
>     [root at client ~]# env | grep OCF
>     OCF_RESKEY_path=/vms_c
>     OCF_RESKEY_name=guest1
>     OCF_RESKEY_use_virsh=0

2013/11/25 emmanuel segura wrote:
> Hello Paras
>
> I have a centos 6; i don't know if it is different on redhat 5, but i saw
> in the script that vm.sh calls the do_start function when the start
> parameter is given:
>
>     do_start()
>     {
>             if [ "$OCF_RESKEY_use_virsh" = "1" ]; then
>                     do_virsh_start $*
>                     return $?
>             fi
>
>             do_xm_start $*
>             return $?
>     }
>
> I don't know why, because vm.sh uses virsh when you launch the script by
> hand :(
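In other words, per the excerpts above, the xm code path is only taken
when both use_virsh=0 and hypervisor=xen survive validation. A quick
sketch of reproducing that check by hand, with the variable values from
this thread:

    # Mirror the environment rgmanager hands to the agent:
    export OCF_RESKEY_name="guest1"      # vm name from cluster.conf
    export OCF_RESKEY_path="/vms_c"      # directory holding the xen config file
    export OCF_RESKEY_use_virsh=0        # request the xm code path
    export OCF_RESKEY_hypervisor=xen     # skip the 'virsh version' autodetection

    # 'status' prints which management tool vm.sh selected (xm vs virsh):
    bash -x /usr/share/cluster/vm.sh status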
2013/11/25 Paras pradhan wrote:
> Looks like use_virsh=0 has no effect:
>
>     [root at cvtst3 ~]# export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c" ; OCF_RESKEY_use_virsh=0
>     [root at cvtst3 ~]# set -x
>     ++ printf '\033]0;%s@%s:%s\007' root cvtst3 '~'
>     [root at cvtst3 ~]# /usr/share/cluster/vm.sh start
>     + /usr/share/cluster/vm.sh start
>     Hypervisor: xen
>     Management tool: virsh
>     Hypervisor URI: xen:///
>     Migration URI format: xenmigr://target_host/
>     Virtual machine guest1 is error: failed to get domain 'guest1'
>     error: Domain not found: xenUnifiedDomainLookupByName
>
>     virsh -c xen:/// start guest1
>     error: failed to get domain 'guest1'
>     error: Domain not found: xenUnifiedDomainLookupByName
>
>     [root at cvtst3 ~]# set +x
>     + set +x
>
> -Paras.

On Fri, Nov 22, 2013 at 5:22 PM, emmanuel segura wrote:
> Hello Paras
>
> Stop the vm and retry to start it with the following commands, and if you
> get an error, show it:
>
>     export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c" ; OCF_RESKEY_use_virsh=0
>
>     set -x
>     /usr/share/cluster/vm.sh start
>     set +x

2013/11/22 Paras pradhan wrote:
> I found the workaround to my issue. What i did is: run the vm using xm,
> and then start it using clusvcadm. This works for me for the time being,
> but I am not sure what is causing the problem. This is what I did:
>
>     xm create /vms_c/guest1
>     clusvcadm -e vm:guest1   (this detects that guest1 is up and quickly
>                               changes its status to success)
>
> Although i used virt-install, it also creates a xen-format configuration
> file, and since use_virsh=0 it should be able to use this xen-format
> config file. This is my vm configuration:
>
>     name = "guest1"
>     maxmem = 2048
>     memory = 512
>     vcpus = 1
>     #cpus="1-2"
>     bootloader = "/usr/bin/pygrub"
>     on_poweroff = "destroy"
>     on_reboot = "restart"
>     on_crash = "restart"
>     vfb = [ ]
>     disk = [ "tap:aio:/vms_c/guest1.img,xvda,w", "tap:aio:/vms_c/guest1-disk.img,xvdb,w" ]
>     vif = [ "rate=10MB/s,mac=00:16:3e:6b:be:71,bridge=xenbr0" ]
>
> Thanks for your help Emmanuel! Really appreciate it.
>
> -Paras.
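The workaround above is easy to wrap in a small helper so it can be
repeated per guest. A sketch only, assuming the xen config files under
/vms_c are named after the guests:

    #!/bin/sh
    # start-vm-workaround.sh <guest>
    # Boot the domU with xm first, then let rgmanager adopt the running vm.
    guest="$1"
    [ -n "$guest" ] || { echo "usage: $0 <guest>" >&2; exit 1; }

    xm create "/vms_c/$guest"       # boot the domain from its xen config file
    clusvcadm -e "vm:$guest"        # rgmanager sees it running and marks the service started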
On Fri, Nov 22, 2013 at 11:10 AM, emmanuel segura wrote:
> ok, but your vm doesn't start on other nodes; i think it is a
> configuration problem:
>
>     Nov 21 15:40:29 vtst3 clurgmgrd[13911]: start on vm "guest1" returned 1 (generic error)
>     Nov 21 15:40:29 vtst3 clurgmgrd[13911]: #68: Failed to start vm:guest1; return value: 1
>     Nov 21 15:40:29 vtst3 clurgmgrd[13911]: Stopping service vm:guest1
>     Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service vm:guest1 is recovering
>     Nov 21 15:40:35 vtst3 clurgmgrd[13911]: #71: Relocating failed service vm:guest1
>     Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service vm:guest1 is stopped
>
> In a few words, try on every cluster node:
>
>     export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c"
>
>     set -x
>     /usr/share/cluster/vm.sh start
>     /usr/share/cluster/vm.sh stop
>
> After you check that your vm can start and stop on every cluster node:
>
>     /usr/share/cluster/vm.sh start
>     /usr/share/cluster/vm.sh migrate name_of_a_cluster_node
>
> Can you show me your vm configuration under /vms_c?
>
> Thanks
> Emmanuel

2013/11/22 Paras pradhan wrote:
> And also, to test, I made use_virsh=1; same problem. The vm does not
> start up if the failover-domain members are offline.
>
> -Paras.

On Fri, Nov 22, 2013 at 10:37 AM, Paras pradhan wrote:
> Well, that seems theoretically correct. But right now my cluster has
> use_virsh=0 and I don't have any issue until the members of the failover
> domain are offline. So I am wondering what it is that clusvcadm -e is
> looking for when I don't use virsh.

On Fri, Nov 22, 2013 at 10:05 AM, emmanuel segura wrote:
> If you used virt-install, i think you need to use virsh. The cluster uses
> the xm xen command if you have use_virsh=0, and virsh if you have
> use_virsh=1 in your cluster config.

2013/11/22 Paras pradhan wrote:
> I use virt-install to create virtual machines. Is there a way to debug
> why clusvcadm -e vm:guest1 is failing? vm.sh seems to use virsh, and my
> cluster.conf has use_virsh=0.
>
> Thanks
> Paras.
On Nov 21, 2013 5:53 PM, "emmanuel segura" wrote:
> but did you configure your vm with the xen tools or using virt-manager?

2013/11/22 Paras pradhan wrote:
> Well, no, i don't want to use virsh. But as we are debugging with virsh
> now, i found a strange issue.
>
> I exported an xml file and imported it to all nodes. Ran:
>
>     <vm ... name="guest1" path="/vms_c" ...>
>
>     export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c"
>
>     set -x
>     /usr/share/cluster/vm.sh start
>     set +x
>
> The vm starts now. BUT from a cluster service, clusvcadm -e vm:guest1,
> same error.
>
> So if i populate all my domains' config files to all my cluster nodes and
> make use_virsh=1, then the issue is resolved. But this is a lot of work
> for those who have hundreds of vms.
>
> vm.sh start uses virsh. Is there a way to tell it not to use virsh?
>
> Thanks
> Paras.

On Thu, Nov 21, 2013 at 5:19 PM, emmanuel segura wrote:
> if you are using virsh to manage your vms, change this in your
> cluster.conf:
>
> from
>     use_virsh="0"
> to
>     use_virsh="1"

2013/11/22 Paras pradhan wrote:
> I think i found the problem.
>
> virsh list --all does not show my vm. This is because it was created on
> another node, and that node has it. Now I want to start the service on a
> different node, one where it was not created, or where virsh list --all
> does not have an entry. Is it possible to create this entry using a xen
> config file? Looks like this is now a Xen issue rather than a
> linux-cluster issue. :)
>
> Paras.

On Thu, Nov 21, 2013 at 4:58 PM, emmanuel segura wrote:
> 1: did you verify your xen live-migration configuration?
> 2: where does your vm disk reside?
> 3: can you see your vm defined on every cluster node with xm list?
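On the "virsh list --all does not show my vm" point above: with libvirt, a
domain defined on one node can be registered on the others by copying its
XML, which is presumably what the export/import step did. A sketch,
assuming root ssh between the nodes; the node names are assumptions taken
from the hostnames mentioned in this thread:

    # On the node where guest1 was created:
    virsh dumpxml guest1 > /tmp/guest1.xml

    # Register it on the other cluster members:
    for node in vtst1 vtst3; do
        scp /tmp/guest1.xml $node:/tmp/guest1.xml
        ssh $node "virsh define /tmp/guest1.xml"
    done

    # Each node's 'virsh list --all' should now show guest1 (shut off).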
2013/11/21 Paras pradhan wrote:
> This is what I get:
>
>     Hypervisor: xen
>     Management tool: virsh
>     Hypervisor URI: xen:///
>     Migration URI format: xenmigr://target_host/
>     Virtual machine guest1 is error: failed to get domain 'guest1'
>     error: Domain not found: xenUnifiedDomainLookupByName
>
>     virsh -c xen:/// start guest1
>     error: failed to get domain 'guest1'
>     error: Domain not found: xenUnifiedDomainLookupByName
>
>     ++ printf '\033]0;%s@%s:%s\007' root vtst3 '~'
>     [root at cvtst3 ~]# set +x
>     + set +x
>
> I am wondering why it failed to get the domain.
>
> -Paras.

2013/11/21 emmanuel segura wrote:
> yes

2013/11/21 Paras pradhan wrote:
> Well, it is guest1. Isn't it?
>
>     <vm ... max_restarts="0" name="guest1" path="/vms_c" recovery="restart"
>         restart_expire_time="0" use_virsh="0"/>
>
> It is a vm service, if it matters.
>
> -Paras.

On Thu, Nov 21, 2013 at 4:22 PM, emmanuel segura wrote:
> use the service name you defined in your cluster.conf

2013/11/21 Paras pradhan wrote:
> Says:
>
>     Running in test mode.
>     No resource guest1 of type service found
>
> -Paras.
On Thu, Nov 21, 2013 at 4:07 PM, emmanuel segura wrote:
> rg_test test /etc/cluster/cluster.conf start service guest1

2013/11/21 Paras pradhan wrote:
> Hi,
>
> My failover domain looks like this:
>
>     <failoverdomain ... ordered="1" restricted="0">
>         [...]
>
> I have a vm service that uses this failover domain. If my node vtst1 is
> offline, the service does not start on vtst3, which is 2nd in priority.
>
> I tried to start it with clusvcadm -e vm:guest1, and even with the -F and
> -m options.
>
> All i see is this error:
>
>     Nov 21 15:40:29 vtst3 clurgmgrd[13911]: start on vm "guest1" returned 1 (generic error)
>     Nov 21 15:40:29 vtst3 clurgmgrd[13911]: #68: Failed to start vm:guest1; return value: 1
>     Nov 21 15:40:29 vtst3 clurgmgrd[13911]: Stopping service vm:guest1
>     Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service vm:guest1 is recovering
>     Nov 21 15:40:35 vtst3 clurgmgrd[13911]: #71: Relocating failed service vm:guest1
>     Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service vm:guest1 is stopped
>
> How do I debug?
> Thanks!
> Paras.

-- 
esta es mi vida e me la vivo hasta que dios quiera
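A note on the rg_test invocation quoted above: guest1 is defined as a vm
resource rather than a service, which is why rg_test answers "No resource
guest1 of type service found". Testing the vm resource directly should
look more like the following sketch; the exact type keyword follows
rg_test's <type> <name> convention, so treat it as an assumption:

    rg_test test /etc/cluster/cluster.conf start vm guest1   # resource type 'vm', not 'service'
    rg_test test /etc/cluster/cluster.conf stop vm guest1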
From pradhanparas at gmail.com  Wed Dec 18 22:47:27 2013
From: pradhanparas at gmail.com (Paras pradhan)
Date: Wed, 18 Dec 2013 16:47:27 -0600
Subject: [Linux-cluster] failover domain and service start
In-Reply-To:
References:
Message-ID:

I see this:

    Dec 18 16:17:18 cvtst3 clurgmgrd[13935]: Starting stopped service vm:guest1
    Dec 18 16:17:19 cvtst3 clurgmgrd[13935]: start on vm "guest1" returned 1 (generic error)
    Dec 18 16:17:19 cvtst3 clurgmgrd[13935]: #68: Failed to start vm:guest1; return value: 1
    Dec 18 16:17:19 cvtst3 clurgmgrd[13935]: Stopping service vm:guest1

On Wed, Dec 18, 2013 at 4:39 PM, emmanuel segura wrote:
> can you show the log?

> [rest of the quoted thread trimmed; it repeats the exchange quoted in the
> previous message]
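The "returned 1 (generic error)" lines only say that the agent failed;
tracing the same start path by hand usually reveals the exact command that
fails. A sketch, reusing the environment discussed earlier in the thread:

    export OCF_RESKEY_name="guest1" OCF_RESKEY_path="/vms_c"
    export OCF_RESKEY_use_virsh=0 OCF_RESKEY_hypervisor=xen

    # Trace the agent and keep the output; the last commands before the
    # non-zero exit (xm create ..., virsh ...) are the ones to investigate.
    bash -x /usr/share/cluster/vm.sh start 2>&1 | tee /tmp/vm-start-trace.log
    echo "vm.sh exit status: ${PIPESTATUS[0]}"   # status of vm.sh, not tee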
>>>>>>>> >>>>>>>> Thanks >>>>>>>> Emmanuel >>>>>>>> >>>>>>>> >>>>>>>> 2013/11/25 emmanuel segura >>>>>>>> >>>>>>>>> Hello Paras >>>>>>>>> >>>>>>>>> Maybe i found the solution, in function validate_all we got >>>>>>>>> >>>>>>>>> if [ -z "$OCF_RESKEY_hypervisor" ] || >>>>>>>>> [ "$OCF_RESKEY_hypervisor" = "auto" ]; then >>>>>>>>> export OCF_RESKEY_hypervisor="`virsh version | >>>>>>>>> grep \"Running hypervisor:\" | awk '{print $3}' | tr A-Z a-z`" >>>>>>>>> if [ -z "$OCF_RESKEY_hypervisor" ]; then >>>>>>>>> ocf_log err "Could not determine >>>>>>>>> Hypervisor" >>>>>>>>> return $OCF_ERR_ARGS >>>>>>>>> fi >>>>>>>>> echo Hypervisor: $OCF_RESKEY_hypervisor >>>>>>>>> fi >>>>>>>>> >>>>>>>>> # >>>>>>>>> # Xen hypervisor only for when use_virsh = 0. >>>>>>>>> # >>>>>>>>> if [ "$OCF_RESKEY_use_virsh" = "0" ]; then >>>>>>>>> if [ "$OCF_RESKEY_hypervisor" != "xen" ]; then >>>>>>>>> ocf_log err "Cannot use >>>>>>>>> $OCF_RESKEY_hypervisor hypervisor without using virsh" >>>>>>>>> return $OCF_ERR_ARGS >>>>>>>>> fi >>>>>>>>> >>>>>>>>> with this following enviroment variables, when i tested by hand >>>>>>>>> the angent uses xm commands >>>>>>>>> >>>>>>>>> env | grep OCF >>>>>>>>> OCF_RESKEY_hypervisor=xen >>>>>>>>> OCF_RESKEY_path=/vms_c >>>>>>>>> OCF_RESKEY_name=guest1 >>>>>>>>> OCF_RESKEY_use_virsh=0 >>>>>>>>> >>>>>>>>> [root at client ~]# /usr/share/cluster/vm.sh status >>>>>>>>> Management tool: xm >>>>>>>>> Cannot find 'xm'; is it installed? >>>>>>>>> [vm.sh] Cannot find 'xm'; is it installed? >>>>>>>>> >>>>>>>>> >>>>>>>>> I don't have xen installed to test it >>>>>>>>> >>>>>>>>> >>>>>>>>> if [ -n "$OCF_RESKEY_xmlfile" ]; then >>>>>>>>> ocf_log err "Cannot use xmlfile if >>>>>>>>> use_virsh is set to 0" >>>>>>>>> return $OCF_ERR_ARGS >>>>>>>>> fi >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> 2013/11/25 emmanuel segura >>>>>>>>> >>>>>>>>>> Hello paras >>>>>>>>>> >>>>>>>>>> missing the export command in front of variables, the correct way >>>>>>>>>> is this >>>>>>>>>> >>>>>>>>>> export OCF_RESKEY_name="guest1" ; export OCF_RESKEY_path="/vms_c" >>>>>>>>>> ; export OCF_RESKEY_use_virsh=0 >>>>>>>>>> [root at client ~]# env | grep OCF >>>>>>>>>> OCF_RESKEY_path=/vms_c >>>>>>>>>> OCF_RESKEY_name=guest1 >>>>>>>>>> OCF_RESKEY_use_virsh=0 >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> 2013/11/25 emmanuel segura >>>>>>>>>> >>>>>>>>>>> Hello Paras >>>>>>>>>>> >>>>>>>>>>> I have a centos 6, i don't know if it is different on redhat 5, >>>>>>>>>>> but i saw in the script vm.sh calls do_start function when start parameter >>>>>>>>>>> is given >>>>>>>>>>> >>>>>>>>>>> do_start() >>>>>>>>>>> { >>>>>>>>>>> if [ "$OCF_RESKEY_use_virsh" = "1" ]; then >>>>>>>>>>> do_virsh_start $* >>>>>>>>>>> return $? >>>>>>>>>>> fi >>>>>>>>>>> >>>>>>>>>>> do_xm_start $* >>>>>>>>>>> return $? >>>>>>>>>>> } >>>>>>>>>>> >>>>>>>>>>> i don't know why because the vm.sh uses virsh when you launch >>>>>>>>>>> the script by hand :( >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> 2013/11/25 Paras pradhan >>>>>>>>>>> >>>>>>>>>>>> Looks like use_virsh=0 has no effect. 
>>>>>>>>>>>> >>>>>>>>>>>> -- >>>>>>>>>>>> [root at cvtst3 ~]# export OCF_RESKEY_name="guest1" ; >>>>>>>>>>>> OCF_RESKEY_path="/vms_c" ; OCF_RESKEY_use_virsh=0 >>>>>>>>>>>> [root at cvtst3 ~]# set -x >>>>>>>>>>>> ++ printf '\033]0;%s@%s:%s\007' root cvtst3 '~' >>>>>>>>>>>> [root at cvtst3 ~]# /usr/share/cluster/vm.sh start >>>>>>>>>>>> + /usr/share/cluster/vm.sh start >>>>>>>>>>>> Hypervisor: xen >>>>>>>>>>>> Management tool: virsh >>>>>>>>>>>> Hypervisor URI: xen:/// >>>>>>>>>>>> Migration URI format: xenmigr://target_host/ >>>>>>>>>>>> Virtual machine guest1 is error: failed to get domain 'guest1' >>>>>>>>>>>> error: Domain not found: xenUnifiedDomainLookupByName >>>>>>>>>>>> >>>>>>>>>>>> virsh -c xen:/// start guest1 >>>>>>>>>>>> error: failed to get domain 'guest1' >>>>>>>>>>>> error: Domain not found: xenUnifiedDomainLookupByName >>>>>>>>>>>> >>>>>>>>>>>> ++ printf '\033]0;%s@%s:%s\007' root cvtst3 '~' >>>>>>>>>>>> [root at cvtst3 ~]# set +x >>>>>>>>>>>> + set +x >>>>>>>>>>>> --- >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> -Paras. >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Fri, Nov 22, 2013 at 5:22 PM, emmanuel segura < >>>>>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Hellos Paras >>>>>>>>>>>>> >>>>>>>>>>>>> Stop the vm and retry to start the vm with following commands >>>>>>>>>>>>> and if you got some error show it >>>>>>>>>>>>> >>>>>>>>>>>>> export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c" ; >>>>>>>>>>>>> OCF_RESKEY_use_virsh=0 >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> set -x >>>>>>>>>>>>> /usr/share/cluster/vm.sh start >>>>>>>>>>>>> set +x >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> 2013/11/22 Paras pradhan >>>>>>>>>>>>> >>>>>>>>>>>>>> I found the workaround to my issue. What i did is: >>>>>>>>>>>>>> >>>>>>>>>>>>>> run the vm using xm and then start using cluvscadm. This >>>>>>>>>>>>>> works for me for the time being but I am not sure what is causing this. >>>>>>>>>>>>>> This is what I did >>>>>>>>>>>>>> >>>>>>>>>>>>>> xm create /vms_c/guest1 >>>>>>>>>>>>>> clusvcadm -e vm: guest1 ( This detects that guest1 is up and >>>>>>>>>>>>>> quickly changes its status to success) >>>>>>>>>>>>>> >>>>>>>>>>>>>> Although i used virt-install, it also create a xem format >>>>>>>>>>>>>> configuration file and since use_virsh=0 it should be able to use this xen >>>>>>>>>>>>>> format config file. This is my vm configuration: >>>>>>>>>>>>>> >>>>>>>>>>>>>> --- >>>>>>>>>>>>>> name = "guest1" >>>>>>>>>>>>>> maxmem = 2048 >>>>>>>>>>>>>> memory = 512 >>>>>>>>>>>>>> vcpus = 1 >>>>>>>>>>>>>> #cpus="1-2" >>>>>>>>>>>>>> bootloader = "/usr/bin/pygrub" >>>>>>>>>>>>>> on_poweroff = "destroy" >>>>>>>>>>>>>> on_reboot = "restart" >>>>>>>>>>>>>> on_crash = "restart" >>>>>>>>>>>>>> vfb = [ ] >>>>>>>>>>>>>> disk = [ "tap:aio:/vms_c/guest1.img,xvda,w", >>>>>>>>>>>>>> "tap:aio:/vms_c/guest1-disk.img,xvdb,w" ] >>>>>>>>>>>>>> vif = [ "rate=10MB/s,mac=00:16:3e:6b:be:71,bridge=xenbr0" ] >>>>>>>>>>>>>> >>>>>>>>>>>>>> --- >>>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks for you help Emmanuel ! Really appreciate it. >>>>>>>>>>>>>> >>>>>>>>>>>>>> -Paras. 
>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Fri, Nov 22, 2013 at 11:10 AM, emmanuel segura < >>>>>>>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> ok, but your vm doesn't start on others nodes, i think, for >>>>>>>>>>>>>>> configuration problems >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ================================================================ >>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: start on vm >>>>>>>>>>>>>>> "guest1" returned 1 (generic error) >>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: #68: >>>>>>>>>>>>>>> Failed to start vm:guest1; return value: 1 >>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: Stopping >>>>>>>>>>>>>>> service vm:guest1 >>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service >>>>>>>>>>>>>>> vm:guest1 is recovering >>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: #71: >>>>>>>>>>>>>>> Relocating failed service vm:guest1 >>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service >>>>>>>>>>>>>>> vm:guest1 is stopped >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ================================================================ >>>>>>>>>>>>>>> in few words, try in every cluster node >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c" >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> set -x >>>>>>>>>>>>>>> /usr/share/cluster/vm.sh start >>>>>>>>>>>>>>> /usr/share/cluster/vm.sh stop >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> after you check if your vm can start and stop on every >>>>>>>>>>>>>>> cluster node, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> /usr/share/cluster/vm.sh start >>>>>>>>>>>>>>> /usr/share/cluster/vm.sh migrate name_of_a_cluster_node >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> can you show me your vm configuration under /vms_c? >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Thanks >>>>>>>>>>>>>>> Emmanuel >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> 2013/11/22 Paras pradhan >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> And also to test I made use_virsh=1 , same problem. The vm >>>>>>>>>>>>>>>> does not start up if the FD domains are offline. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> -Paras. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On Fri, Nov 22, 2013 at 10:37 AM, Paras pradhan < >>>>>>>>>>>>>>>> pradhanparas at gmail.com> wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Well thats seems to theoretically correct. But right now >>>>>>>>>>>>>>>>> my cluser has use_virsh=0 and I don't have any issue untill my mebmers on >>>>>>>>>>>>>>>>> the failover domains are offline. So wondering what is it that clusvcadm -e >>>>>>>>>>>>>>>>> is looking when I don't use virsh . >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> On Fri, Nov 22, 2013 at 10:05 AM, emmanuel segura < >>>>>>>>>>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> If you used virt-install, i think you need use virsh, the >>>>>>>>>>>>>>>>>> cluster uses xm xen command if you got use_virsh=0 and virsh if you got >>>>>>>>>>>>>>>>>> use_virsh=1 in your cluster config >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> 2013/11/22 Paras pradhan >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> I use virt-install to create virtual machines. Is there >>>>>>>>>>>>>>>>>>> a way to debug why clusvcadm -e vm:guest1 is failing? vm.sh seems to use >>>>>>>>>>>>>>>>>>> virsh and my cluster.conf has use_virsh=0 >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Thanks >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Paras. 
>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> On Nov 21, 2013 5:53 PM, "emmanuel segura" < >>>>>>>>>>>>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> but did you configure your vm with xen tools or using >>>>>>>>>>>>>>>>>>>> virt-manager? >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> 2013/11/22 Paras pradhan >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> Well no i don't want to use virsh. But as we are >>>>>>>>>>>>>>>>>>>>> debugging with virsh now i found a strange issue. >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> I exported an xml file and imported to all nodes . Ran >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> --- >>>>>>>>>>>>>>>>>>>>> name="guest1" path="/vms_c" >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> export OCF_RESKEY_name="guest1" ; >>>>>>>>>>>>>>>>>>>>> OCF_RESKEY_path="/vms_c" >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> set -x >>>>>>>>>>>>>>>>>>>>> /usr/share/cluster/vm.sh start >>>>>>>>>>>>>>>>>>>>> set +x >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>> vm starts now. BUT from a cluster service : cluvscam >>>>>>>>>>>>>>>>>>>>> -e vm:guest1 , same error. >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> So if i populate all my domains' config files to all >>>>>>>>>>>>>>>>>>>>> my cluser nodes and make use_virsh=1, then the issue is resolved. But this >>>>>>>>>>>>>>>>>>>>> is a lot of work for those who have hundreds of vm. >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> vm.start uses virsh . Is there a way to tell him not >>>>>>>>>>>>>>>>>>>>> use virsh? >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> Thanks >>>>>>>>>>>>>>>>>>>>> Paras. >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 5:19 PM, emmanuel segura < >>>>>>>>>>>>>>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> if you are using virsh for manage your vms, change >>>>>>>>>>>>>>>>>>>>>> this in your cluster.conf >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> from >>>>>>>>>>>>>>>>>>>>>> use_virsh="0" >>>>>>>>>>>>>>>>>>>>>> to >>>>>>>>>>>>>>>>>>>>>> use_virsh="1" >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> 2013/11/22 Paras pradhan >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> I think i found the problem. >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> virsh list --all does not show my vm . This is >>>>>>>>>>>>>>>>>>>>>>> because it was created on another node. and another node has it. Now I want >>>>>>>>>>>>>>>>>>>>>>> to start the service on a different node in which it was not created or >>>>>>>>>>>>>>>>>>>>>>> where virsh list --all does not have an entry. Is it possible to create >>>>>>>>>>>>>>>>>>>>>>> this entry using a xen config file?Looks like this is now a Xen issue >>>>>>>>>>>>>>>>>>>>>>> rather than a linux-cluster issue . :) >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> Paras. >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:58 PM, emmanuel segura < >>>>>>>>>>>>>>>>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> 1:did you verify your xen livemigration >>>>>>>>>>>>>>>>>>>>>>>> configuration? >>>>>>>>>>>>>>>>>>>>>>>> 2: where you vm disk reside? >>>>>>>>>>>>>>>>>>>>>>>> 3: can you see your vm defined on every cluster >>>>>>>>>>>>>>>>>>>>>>>> node with xm list? 
>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> This is what I get >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> Hypervisor: xen >>>>>>>>>>>>>>>>>>>>>>>>> Management tool: virsh >>>>>>>>>>>>>>>>>>>>>>>>> Hypervisor URI: xen:/// >>>>>>>>>>>>>>>>>>>>>>>>> Migration URI format: xenmigr://target_host/ >>>>>>>>>>>>>>>>>>>>>>>>> Virtual machine guest1 is error: failed to get >>>>>>>>>>>>>>>>>>>>>>>>> domain 'guest1' >>>>>>>>>>>>>>>>>>>>>>>>> error: Domain not found: >>>>>>>>>>>>>>>>>>>>>>>>> xenUnifiedDomainLookupByName >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> virsh -c xen:/// start guest1 >>>>>>>>>>>>>>>>>>>>>>>>> error: failed to get domain 'guest1' >>>>>>>>>>>>>>>>>>>>>>>>> error: Domain not found: >>>>>>>>>>>>>>>>>>>>>>>>> xenUnifiedDomainLookupByName >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> ++ printf '\033]0;%s@%s:%s\007' root vtst3 '~' >>>>>>>>>>>>>>>>>>>>>>>>> [root at cvtst3 ~]# set +x >>>>>>>>>>>>>>>>>>>>>>>>> + set +x >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> I am wondering why it failed to get domain . >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> -Paras. >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:43 PM, emmanuel segura < >>>>>>>>>>>>>>>>>>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> yes >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> Well it is guest1. Isn't it?. >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> max_restarts="0" name="guest1" path="/vms_c" recovery="restart" >>>>>>>>>>>>>>>>>>>>>>>>>>> restart_expire_time="0" use_virsh="0"/> >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> It is a vm service if it matters. >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> -Paras. >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:22 PM, emmanuel segura >>>>>>>>>>>>>>>>>>>>>>>>>>> wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> use the servicename you defined in your >>>>>>>>>>>>>>>>>>>>>>>>>>>> cluster.conf >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan < >>>>>>>>>>>>>>>>>>>>>>>>>>>> pradhanparas at gmail.com> >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Says: >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Running in test mode. >>>>>>>>>>>>>>>>>>>>>>>>>>>>> No resource guest1 of type service found >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> -Paras. 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:07 PM, emmanuel >>>>>>>>>>>>>>>>>>>>>>>>>>>>> segura wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> rg_test test /etc/cluster/cluster.conf start >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> service guest1 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan < >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> pradhanparas at gmail.com> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> My failover domain looks like this: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ordered="1" restricted="0"> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I have vm service that uses this failover >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> domain. If my node vtst1 is offline, the service doesnot start on vtst3 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> which is 2nd in the priority. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I tried to start it with: clusvcadm -e >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> vm:guest1 and even with -F and -m option. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> All i see is this error: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> start on vm "guest1" returned 1 (generic error) >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> #68: Failed to start vm:guest1; return value: 1 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Stopping service vm:guest1 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Service vm:guest1 is recovering >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> #71: Relocating failed service vm:guest1 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Service vm:guest1 is stopped >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> How do I debug? >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks! >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Paras. 
From emi2fast at gmail.com  Thu Dec 19 09:02:54 2013
From: emi2fast at gmail.com (emmanuel segura)
Date: Thu, 19 Dec 2013 10:02:54 +0100
Subject: [Linux-cluster] failover domain and service start
In-Reply-To:
References:
Message-ID:

Run grep "/vms_c" /proc/mounts on every node.

2013/12/18 Paras pradhan <pradhanparas at gmail.com>:
> I see this:
>
> Dec 18 16:17:18 cvtst3 clurgmgrd[13935]: Starting stopped service vm:guest1
> Dec 18 16:17:19 cvtst3 clurgmgrd[13935]: start on vm "guest1" returned 1 (generic error)
> Dec 18 16:17:19 cvtst3 clurgmgrd[13935]: #68: Failed to start vm:guest1; return value: 1
> Dec 18 16:17:19 cvtst3 clurgmgrd[13935]: Stopping service vm:guest1

On Wed, Dec 18, 2013 at 4:39 PM, emmanuel segura <emi2fast at gmail.com> wrote:
> can you show the log?

2013/12/18 Paras pradhan <pradhanparas at gmail.com>:
> Added it, but same problem: the vm does not start.
>
> -Paras.

On Wed, Dec 18, 2013 at 1:13 PM, emmanuel segura <emi2fast at gmail.com> wrote:
> From the vm.sh script I saw that it tries to discover which hypervisor
> you are using; with hypervisor="xen" you force the script to use xen.

2013/12/18 Paras pradhan <pradhanparas at gmail.com>:
> The only parameter I don't have is: hypervisor="xen"
>
> Does it matter?
>
> This is what I have:
>
> <vm ... name="guest1" path="/vms_c" recovery="restart"
>  restart_expire_time="0" use_virsh="0"/>
>
> -Paras.
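(A minimal sketch of that check, assuming passwordless ssh between the nodes
and hypothetical node names cvtst1, cvtst2, cvtst3; substitute your own. The
point is that /vms_c, which holds the guest's config and disk images, must be
mounted on every node that may be asked to start the vm:

  #!/bin/sh
  # Verify the shared /vms_c filesystem is mounted cluster-wide.
  for n in cvtst1 cvtst2 cvtst3; do
      echo "== $n =="
      ssh "$n" 'grep "/vms_c" /proc/mounts || echo "/vms_c NOT mounted"'
  done

If the mount is missing on a node, vm.sh cannot find /vms_c/guest1 there and
a start attempt on that node will fail.)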
On Wed, Dec 18, 2013 at 12:24 PM, emmanuel segura <emi2fast at gmail.com> wrote:
> Increment the config version of cluster.conf and run
> ccs_tool update /etc/cluster/cluster.conf

2013/12/18 Paras pradhan <pradhanparas at gmail.com>:
> Emmanuel,
>
> With export OCF_RESKEY_name="guest1" ; export OCF_RESKEY_path="/vms_c" ;
> export OCF_RESKEY_use_virsh=0, I can start the vm using
> /usr/share/cluster/vm.sh.
>
> I am wondering how to make the changes to cluster.conf or other files so
> that we can start the vm using clusvcadm.
>
> -Thanks, sorry for the delay.
>
> Paras.

On Thu, Dec 5, 2013 at 12:36 PM, Paras pradhan <pradhanparas at gmail.com> wrote:
> Emmanuel, no. I was busy with some other things. I will test and let you
> know asap!

On Thu, Dec 5, 2013 at 12:26 PM, emmanuel segura <emi2fast at gmail.com> wrote:
> Hello Paras
>
> Did you solve the problem?
>
> Thanks
> Emmanuel

2013/11/25 emmanuel segura <emi2fast at gmail.com>:
> Hello Paras
>
> Maybe I found the solution. In the function validate_all we have:
>
>     if [ -z "$OCF_RESKEY_hypervisor" ] ||
>        [ "$OCF_RESKEY_hypervisor" = "auto" ]; then
>             export OCF_RESKEY_hypervisor="`virsh version | grep \"Running hypervisor:\" | awk '{print $3}' | tr A-Z a-z`"
>             if [ -z "$OCF_RESKEY_hypervisor" ]; then
>                     ocf_log err "Could not determine Hypervisor"
>                     return $OCF_ERR_ARGS
>             fi
>             echo Hypervisor: $OCF_RESKEY_hypervisor
>     fi
>
>     #
>     # Xen hypervisor only for when use_virsh = 0.
>     #
>     if [ "$OCF_RESKEY_use_virsh" = "0" ]; then
>             if [ "$OCF_RESKEY_hypervisor" != "xen" ]; then
>                     ocf_log err "Cannot use $OCF_RESKEY_hypervisor hypervisor without using virsh"
>                     return $OCF_ERR_ARGS
>             fi
>
> With the following environment variables, when I tested by hand, the
> agent uses xm commands:
>
> env | grep OCF
> OCF_RESKEY_hypervisor=xen
> OCF_RESKEY_path=/vms_c
> OCF_RESKEY_name=guest1
> OCF_RESKEY_use_virsh=0
>
> [root at client ~]# /usr/share/cluster/vm.sh status
> Management tool: xm
> Cannot find 'xm'; is it installed?
> [vm.sh] Cannot find 'xm'; is it installed?
>
> (I don't have xen installed to test it.)
>
>             if [ -n "$OCF_RESKEY_xmlfile" ]; then
>                     ocf_log err "Cannot use xmlfile if use_virsh is set to 0"
>                     return $OCF_ERR_ARGS
>             fi

2013/11/25 emmanuel segura <emi2fast at gmail.com>:
> Hello Paras
>
> The export command is missing in front of the variables; the correct way
> is this:
>
> export OCF_RESKEY_name="guest1" ; export OCF_RESKEY_path="/vms_c" ; export OCF_RESKEY_use_virsh=0
> [root at client ~]# env | grep OCF
> OCF_RESKEY_path=/vms_c
> OCF_RESKEY_name=guest1
> OCF_RESKEY_use_virsh=0
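(For completeness, a minimal sketch of the propagation step emmanuel
suggested above for cluster.conf changes. The cluster name and version
numbers are illustrative; the essential point is that config_version in the
<cluster> tag must be raised before ccs_tool will push the file to the other
members:

  # 1. Edit /etc/cluster/cluster.conf and raise the version, e.g.
  #      <cluster name="mycluster" config_version="42">
  #    becomes
  #      <cluster name="mycluster" config_version="43">
  vi /etc/cluster/cluster.conf

  # 2. Distribute the new configuration to all cluster members.
  ccs_tool update /etc/cluster/cluster.conf

  # 3. Confirm the new version is active.
  cman_tool version
)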
2013/11/25 emmanuel segura <emi2fast at gmail.com>:
> Hello Paras
>
> I have a CentOS 6; I don't know if it is different on Red Hat 5, but I
> saw that the vm.sh script calls the do_start function when the start
> parameter is given:
>
> do_start()
> {
>     if [ "$OCF_RESKEY_use_virsh" = "1" ]; then
>             do_virsh_start $*
>             return $?
>     fi
>
>     do_xm_start $*
>     return $?
> }
>
> I don't know why, because vm.sh uses virsh when you launch the script by
> hand :(

2013/11/25 Paras pradhan <pradhanparas at gmail.com>:
> Looks like use_virsh=0 has no effect.
>
> --
> [root at cvtst3 ~]# export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c" ; OCF_RESKEY_use_virsh=0
> [root at cvtst3 ~]# set -x
> ++ printf '\033]0;%s@%s:%s\007' root cvtst3 '~'
> [root at cvtst3 ~]# /usr/share/cluster/vm.sh start
> + /usr/share/cluster/vm.sh start
> Hypervisor: xen
> Management tool: virsh
> Hypervisor URI: xen:///
> Migration URI format: xenmigr://target_host/
> Virtual machine guest1 is error: failed to get domain 'guest1'
> error: Domain not found: xenUnifiedDomainLookupByName
>
> virsh -c xen:/// start guest1
> error: failed to get domain 'guest1'
> error: Domain not found: xenUnifiedDomainLookupByName
>
> ++ printf '\033]0;%s@%s:%s\007' root cvtst3 '~'
> [root at cvtst3 ~]# set +x
> + set +x
> ---
>
> -Paras.

On Fri, Nov 22, 2013 at 5:22 PM, emmanuel segura <emi2fast at gmail.com> wrote:
> Hello Paras
>
> Stop the vm and retry to start it with the following commands, and if you
> get an error, show it:
>
> export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c" ; OCF_RESKEY_use_virsh=0
>
> set -x
> /usr/share/cluster/vm.sh start
> set +x

2013/11/22 Paras pradhan <pradhanparas at gmail.com>:
> I found a workaround for my issue. What I did is: run the vm using xm and
> then start it using clusvcadm. This works for me for the time being, but
> I am not sure what is causing this. This is what I did:
>
> xm create /vms_c/guest1
> clusvcadm -e vm:guest1   (this detects that guest1 is up and quickly
> changes its status to success)
>
> Although I used virt-install, it also creates a xen-format configuration
> file, and since use_virsh=0 it should be able to use this xen-format
> config file. This is my vm configuration:
>
> ---
> name = "guest1"
> maxmem = 2048
> memory = 512
> vcpus = 1
> #cpus="1-2"
> bootloader = "/usr/bin/pygrub"
> on_poweroff = "destroy"
> on_reboot = "restart"
> on_crash = "restart"
> vfb = [ ]
> disk = [ "tap:aio:/vms_c/guest1.img,xvda,w", "tap:aio:/vms_c/guest1-disk.img,xvdb,w" ]
> vif = [ "rate=10MB/s,mac=00:16:3e:6b:be:71,bridge=xenbr0" ]
> ---
>
> Thanks for your help Emmanuel! Really appreciate it.
>
> -Paras.
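(A minimal sketch of that workaround as a script, under the thread's own
assumptions: the xen config lives at /vms_c/guest1 and the cluster service is
named vm:guest1. This only papers over the underlying lookup failure; the
cluster still cannot cold-start the guest on its own:

  #!/bin/sh
  # Boot the guest manually with xm, then let rgmanager adopt it.
  xm create /vms_c/guest1 || exit 1

  # clusvcadm sees the domain already running and marks the service started.
  clusvcadm -e vm:guest1
)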
On Fri, Nov 22, 2013 at 11:10 AM, emmanuel segura <emi2fast at gmail.com> wrote:
> OK, but your vm doesn't start on the other nodes; I think it's a
> configuration problem:
>
> ================================================================
> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: start on vm "guest1" returned 1 (generic error)
> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: #68: Failed to start vm:guest1; return value: 1
> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: Stopping service vm:guest1
> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service vm:guest1 is recovering
> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: #71: Relocating failed service vm:guest1
> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service vm:guest1 is stopped
> ================================================================
>
> In a few words, try this on every cluster node:
>
> export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c"
>
> set -x
> /usr/share/cluster/vm.sh start
> /usr/share/cluster/vm.sh stop
>
> After you check that your vm can start and stop on every cluster node:
>
> /usr/share/cluster/vm.sh start
> /usr/share/cluster/vm.sh migrate name_of_a_cluster_node
>
> Can you show me your vm configuration under /vms_c?
>
> Thanks
> Emmanuel

2013/11/22 Paras pradhan <pradhanparas at gmail.com>:
> And also, to test, I set use_virsh=1; same problem. The vm does not start
> up if the failover-domain members are offline.
>
> -Paras.

On Fri, Nov 22, 2013 at 10:37 AM, Paras pradhan <pradhanparas at gmail.com> wrote:
> Well, that seems theoretically correct. But right now my cluster has
> use_virsh=0 and I don't have any issue until the members of the failover
> domain are offline. So I am wondering what it is that clusvcadm -e looks
> for when I don't use virsh.

On Fri, Nov 22, 2013 at 10:05 AM, emmanuel segura <emi2fast at gmail.com> wrote:
> If you used virt-install, I think you need to use virsh: the cluster uses
> the xm xen command if you have use_virsh=0, and virsh if you have
> use_virsh=1 in your cluster config.

2013/11/22 Paras pradhan <pradhanparas at gmail.com>:
> I use virt-install to create virtual machines. Is there a way to debug
> why clusvcadm -e vm:guest1 is failing? vm.sh seems to use virsh, and my
> cluster.conf has use_virsh=0.
>
> Thanks
>
> Paras.
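(Emmanuel's per-node test, collected into one throwaway script; a sketch
only. It assumes the OCF variable values used above and that it is run as
root on each node in turn. vm.sh is not normally driven outside rgmanager, so
use this for diagnosis only:

  #!/bin/sh
  # Smoke-test the vm resource agent outside rgmanager.
  export OCF_RESKEY_name="guest1"
  export OCF_RESKEY_path="/vms_c"
  export OCF_RESKEY_use_virsh=0

  set -x
  /usr/share/cluster/vm.sh start    # should boot the guest
  /usr/share/cluster/vm.sh status   # should report it running
  /usr/share/cluster/vm.sh stop     # should shut it down cleanly
  set +x

If start fails on exactly the nodes where the guest was never defined, that
matches the "Domain not found" errors seen earlier in the thread.)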
On Nov 21, 2013 5:53 PM, "emmanuel segura" <emi2fast at gmail.com> wrote:
> But did you configure your vm with the xen tools or using virt-manager?

2013/11/22 Paras pradhan <pradhanparas at gmail.com>:
> Well, no, I don't want to use virsh. But as we are debugging with virsh
> now, I found a strange issue.
>
> I exported an xml file and imported it to all nodes. Ran:
>
> ---
> name="guest1" path="/vms_c"
>
> export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c"
>
> set -x
> /usr/share/cluster/vm.sh start
> set +x
> --
>
> The vm starts now. BUT from a cluster service, clusvcadm -e vm:guest1,
> same error.
>
> So if I populate all my domains' config files to all my cluster nodes and
> set use_virsh=1, then the issue is resolved. But this is a lot of work
> for those who have hundreds of vms.
>
> vm.sh's start uses virsh. Is there a way to tell it not to use virsh?
>
> Thanks
> Paras.

On Thu, Nov 21, 2013 at 5:19 PM, emmanuel segura <emi2fast at gmail.com> wrote:
> If you are using virsh to manage your vms, change this in your
> cluster.conf:
>
> from
> use_virsh="0"
> to
> use_virsh="1"

2013/11/22 Paras pradhan <pradhanparas at gmail.com>:
> I think I found the problem.
>
> virsh list --all does not show my vm. This is because it was created on
> another node, and that other node has it. Now I want to start the service
> on a different node, one where it was not created, or where virsh list
> --all does not have an entry. Is it possible to create this entry using a
> xen config file? Looks like this is now a Xen issue rather than a
> linux-cluster issue. :)
>
> Paras.

On Thu, Nov 21, 2013 at 4:58 PM, emmanuel segura <emi2fast at gmail.com> wrote:
> 1: Did you verify your xen live-migration configuration?
> 2: Where does your vm disk reside?
> 3: Can you see your vm defined on every cluster node with xm list?
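(Paras's export/import step, looped so it scales past a handful of guests. A
sketch under the thread's assumptions: use_virsh=1, xen driven through
libvirt, working ssh/scp between nodes, and hypothetical node and guest
lists; every listed guest must currently be defined on the node where this
runs:

  #!/bin/sh
  # Copy libvirt domain definitions to the other cluster nodes so that
  # "virsh list --all" knows each guest everywhere.
  NODES="cvtst1 cvtst2"      # the *other* cluster nodes (illustrative)
  DOMAINS="guest1"           # add further guests here
  for dom in $DOMAINS; do
      virsh -c xen:/// dumpxml "$dom" > "/tmp/$dom.xml"
      for n in $NODES; do
          scp "/tmp/$dom.xml" "$n:/tmp/$dom.xml"
          ssh "$n" "virsh -c xen:/// define /tmp/$dom.xml"
      done
  done
)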
From emi2fast at gmail.com  Thu Dec 19 09:05:18 2013
From: emi2fast at gmail.com (emmanuel segura)
Date: Thu, 19 Dec 2013 10:05:18 +0100
Subject: [Linux-cluster] failover domain and service start
In-Reply-To:
References:
Message-ID:

One other thing: raise the rgmanager log level. For more information you can
use this link: https://fedorahosted.org/cluster/wiki/RGManager

2013/12/19 emmanuel segura <emi2fast at gmail.com>:
> Run grep "/vms_c" /proc/mounts on every node.
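(A sketch of what that looks like, assuming the RHEL 5 era rgmanager used in
this thread; the linked wiki page is the authoritative reference. There, the
level goes on the <rm> tag in cluster.conf, with 7 meaning debug:

  <rm log_level="7" log_facility="local4">
      ...
  </rm>

Remember to bump config_version and push the file with ccs_tool update, as
earlier in the thread. Newer releases configure this through a <logging>
section instead; check the wiki for the syntax matching your version.)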
>>>>>> >>>>>> >>>>>> On Wed, Dec 18, 2013 at 12:24 PM, emmanuel segura >>>>> > wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> Incriment the config version of cluster.conf and ccs_tool update >>>>>>> /etc/cluster/cluster.conf >>>>>>> >>>>>>> >>>>>>> 2013/12/18 Paras pradhan >>>>>>> >>>>>>>> Emmauel, >>>>>>>> >>>>>>>> With export OCF_RESKEY_name="guest1" ; export >>>>>>>> OCF_RESKEY_path="/vms_c" ; export OCF_RESKEY_use_virsh=0 >>>>>>>> >>>>>>>> , I can start the vm using : /usr/share/cluster/vm.sh >>>>>>>> >>>>>>>> I am wondering how to make the changes to cluser.conf or other >>>>>>>> files so that we can start the vm using clucvcsadm. >>>>>>>> >>>>>>>> -Thanks >>>>>>>> sorry for the delay. >>>>>>>> >>>>>>>> Paras. >>>>>>>> >>>>>>>> >>>>>>>> On Thu, Dec 5, 2013 at 12:36 PM, Paras pradhan < >>>>>>>> pradhanparas at gmail.com> wrote: >>>>>>>> >>>>>>>>> Emmanue no.l I was busy on some other things I will test and let >>>>>>>>> you know asap ! >>>>>>>>> >>>>>>>>> >>>>>>>>> On Thu, Dec 5, 2013 at 12:26 PM, emmanuel segura < >>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Hello Paras >>>>>>>>>> >>>>>>>>>> did you solved the problem? >>>>>>>>>> >>>>>>>>>> Thanks >>>>>>>>>> Emmanuel >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> 2013/11/25 emmanuel segura >>>>>>>>>> >>>>>>>>>>> Hello Paras >>>>>>>>>>> >>>>>>>>>>> Maybe i found the solution, in function validate_all we got >>>>>>>>>>> >>>>>>>>>>> if [ -z "$OCF_RESKEY_hypervisor" ] || >>>>>>>>>>> [ "$OCF_RESKEY_hypervisor" = "auto" ]; then >>>>>>>>>>> export OCF_RESKEY_hypervisor="`virsh version | >>>>>>>>>>> grep \"Running hypervisor:\" | awk '{print $3}' | tr A-Z a-z`" >>>>>>>>>>> if [ -z "$OCF_RESKEY_hypervisor" ]; then >>>>>>>>>>> ocf_log err "Could not determine >>>>>>>>>>> Hypervisor" >>>>>>>>>>> return $OCF_ERR_ARGS >>>>>>>>>>> fi >>>>>>>>>>> echo Hypervisor: $OCF_RESKEY_hypervisor >>>>>>>>>>> fi >>>>>>>>>>> >>>>>>>>>>> # >>>>>>>>>>> # Xen hypervisor only for when use_virsh = 0. >>>>>>>>>>> # >>>>>>>>>>> if [ "$OCF_RESKEY_use_virsh" = "0" ]; then >>>>>>>>>>> if [ "$OCF_RESKEY_hypervisor" != "xen" ]; then >>>>>>>>>>> ocf_log err "Cannot use >>>>>>>>>>> $OCF_RESKEY_hypervisor hypervisor without using virsh" >>>>>>>>>>> return $OCF_ERR_ARGS >>>>>>>>>>> fi >>>>>>>>>>> >>>>>>>>>>> with this following enviroment variables, when i tested by hand >>>>>>>>>>> the angent uses xm commands >>>>>>>>>>> >>>>>>>>>>> env | grep OCF >>>>>>>>>>> OCF_RESKEY_hypervisor=xen >>>>>>>>>>> OCF_RESKEY_path=/vms_c >>>>>>>>>>> OCF_RESKEY_name=guest1 >>>>>>>>>>> OCF_RESKEY_use_virsh=0 >>>>>>>>>>> >>>>>>>>>>> [root at client ~]# /usr/share/cluster/vm.sh status >>>>>>>>>>> Management tool: xm >>>>>>>>>>> Cannot find 'xm'; is it installed? >>>>>>>>>>> [vm.sh] Cannot find 'xm'; is it installed? 
>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> I don't have xen installed to test it >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> if [ -n "$OCF_RESKEY_xmlfile" ]; then >>>>>>>>>>> ocf_log err "Cannot use xmlfile if >>>>>>>>>>> use_virsh is set to 0" >>>>>>>>>>> return $OCF_ERR_ARGS >>>>>>>>>>> fi >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> 2013/11/25 emmanuel segura >>>>>>>>>>> >>>>>>>>>>>> Hello paras >>>>>>>>>>>> >>>>>>>>>>>> missing the export command in front of variables, the correct >>>>>>>>>>>> way is this >>>>>>>>>>>> >>>>>>>>>>>> export OCF_RESKEY_name="guest1" ; export >>>>>>>>>>>> OCF_RESKEY_path="/vms_c" ; export OCF_RESKEY_use_virsh=0 >>>>>>>>>>>> [root at client ~]# env | grep OCF >>>>>>>>>>>> OCF_RESKEY_path=/vms_c >>>>>>>>>>>> OCF_RESKEY_name=guest1 >>>>>>>>>>>> OCF_RESKEY_use_virsh=0 >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> 2013/11/25 emmanuel segura >>>>>>>>>>>> >>>>>>>>>>>>> Hello Paras >>>>>>>>>>>>> >>>>>>>>>>>>> I have a centos 6, i don't know if it is different on redhat >>>>>>>>>>>>> 5, but i saw in the script vm.sh calls do_start function when start >>>>>>>>>>>>> parameter is given >>>>>>>>>>>>> >>>>>>>>>>>>> do_start() >>>>>>>>>>>>> { >>>>>>>>>>>>> if [ "$OCF_RESKEY_use_virsh" = "1" ]; then >>>>>>>>>>>>> do_virsh_start $* >>>>>>>>>>>>> return $? >>>>>>>>>>>>> fi >>>>>>>>>>>>> >>>>>>>>>>>>> do_xm_start $* >>>>>>>>>>>>> return $? >>>>>>>>>>>>> } >>>>>>>>>>>>> >>>>>>>>>>>>> i don't know why because the vm.sh uses virsh when you launch >>>>>>>>>>>>> the script by hand :( >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> 2013/11/25 Paras pradhan >>>>>>>>>>>>> >>>>>>>>>>>>>> Looks like use_virsh=0 has no effect. >>>>>>>>>>>>>> >>>>>>>>>>>>>> -- >>>>>>>>>>>>>> [root at cvtst3 ~]# export OCF_RESKEY_name="guest1" ; >>>>>>>>>>>>>> OCF_RESKEY_path="/vms_c" ; OCF_RESKEY_use_virsh=0 >>>>>>>>>>>>>> [root at cvtst3 ~]# set -x >>>>>>>>>>>>>> ++ printf '\033]0;%s@%s:%s\007' root cvtst3 '~' >>>>>>>>>>>>>> [root at cvtst3 ~]# /usr/share/cluster/vm.sh start >>>>>>>>>>>>>> + /usr/share/cluster/vm.sh start >>>>>>>>>>>>>> Hypervisor: xen >>>>>>>>>>>>>> Management tool: virsh >>>>>>>>>>>>>> Hypervisor URI: xen:/// >>>>>>>>>>>>>> Migration URI format: xenmigr://target_host/ >>>>>>>>>>>>>> Virtual machine guest1 is error: failed to get domain 'guest1' >>>>>>>>>>>>>> error: Domain not found: xenUnifiedDomainLookupByName >>>>>>>>>>>>>> >>>>>>>>>>>>>> virsh -c xen:/// start guest1 >>>>>>>>>>>>>> error: failed to get domain 'guest1' >>>>>>>>>>>>>> error: Domain not found: xenUnifiedDomainLookupByName >>>>>>>>>>>>>> >>>>>>>>>>>>>> ++ printf '\033]0;%s@%s:%s\007' root cvtst3 '~' >>>>>>>>>>>>>> [root at cvtst3 ~]# set +x >>>>>>>>>>>>>> + set +x >>>>>>>>>>>>>> --- >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> -Paras. >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Fri, Nov 22, 2013 at 5:22 PM, emmanuel segura < >>>>>>>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Hellos Paras >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Stop the vm and retry to start the vm with following >>>>>>>>>>>>>>> commands and if you got some error show it >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c"; OCF_RESKEY_use_virsh=0 >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> set -x >>>>>>>>>>>>>>> /usr/share/cluster/vm.sh start >>>>>>>>>>>>>>> set +x >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> 2013/11/22 Paras pradhan >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I found the workaround to my issue. 
What i did is: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> run the vm using xm and then start using cluvscadm. This >>>>>>>>>>>>>>>> works for me for the time being but I am not sure what is causing this. >>>>>>>>>>>>>>>> This is what I did >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> xm create /vms_c/guest1 >>>>>>>>>>>>>>>> clusvcadm -e vm: guest1 ( This detects that guest1 is up >>>>>>>>>>>>>>>> and quickly changes its status to success) >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Although i used virt-install, it also create a xem format >>>>>>>>>>>>>>>> configuration file and since use_virsh=0 it should be able to use this xen >>>>>>>>>>>>>>>> format config file. This is my vm configuration: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> --- >>>>>>>>>>>>>>>> name = "guest1" >>>>>>>>>>>>>>>> maxmem = 2048 >>>>>>>>>>>>>>>> memory = 512 >>>>>>>>>>>>>>>> vcpus = 1 >>>>>>>>>>>>>>>> #cpus="1-2" >>>>>>>>>>>>>>>> bootloader = "/usr/bin/pygrub" >>>>>>>>>>>>>>>> on_poweroff = "destroy" >>>>>>>>>>>>>>>> on_reboot = "restart" >>>>>>>>>>>>>>>> on_crash = "restart" >>>>>>>>>>>>>>>> vfb = [ ] >>>>>>>>>>>>>>>> disk = [ "tap:aio:/vms_c/guest1.img,xvda,w", >>>>>>>>>>>>>>>> "tap:aio:/vms_c/guest1-disk.img,xvdb,w" ] >>>>>>>>>>>>>>>> vif = [ "rate=10MB/s,mac=00:16:3e:6b:be:71,bridge=xenbr0" ] >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> --- >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Thanks for you help Emmanuel ! Really appreciate it. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> -Paras. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On Fri, Nov 22, 2013 at 11:10 AM, emmanuel segura < >>>>>>>>>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> ok, but your vm doesn't start on others nodes, i think, >>>>>>>>>>>>>>>>> for configuration problems >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> ================================================================ >>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: start on >>>>>>>>>>>>>>>>> vm "guest1" returned 1 (generic error) >>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: #68: >>>>>>>>>>>>>>>>> Failed to start vm:guest1; return value: 1 >>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: Stopping >>>>>>>>>>>>>>>>> service vm:guest1 >>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service >>>>>>>>>>>>>>>>> vm:guest1 is recovering >>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: #71: >>>>>>>>>>>>>>>>> Relocating failed service vm:guest1 >>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service >>>>>>>>>>>>>>>>> vm:guest1 is stopped >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> ================================================================ >>>>>>>>>>>>>>>>> in few words, try in every cluster node >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c" >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> set -x >>>>>>>>>>>>>>>>> /usr/share/cluster/vm.sh start >>>>>>>>>>>>>>>>> /usr/share/cluster/vm.sh stop >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> after you check if your vm can start and stop on every >>>>>>>>>>>>>>>>> cluster node, >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> /usr/share/cluster/vm.sh start >>>>>>>>>>>>>>>>> /usr/share/cluster/vm.sh migrate name_of_a_cluster_node >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> can you show me your vm configuration under /vms_c? >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Thanks >>>>>>>>>>>>>>>>> Emmanuel >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> 2013/11/22 Paras pradhan >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> And also to test I made use_virsh=1 , same problem. 
The >>>>>>>>>>>>>>>>>> vm does not start up if the FD domains are offline. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> -Paras. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> On Fri, Nov 22, 2013 at 10:37 AM, Paras pradhan < >>>>>>>>>>>>>>>>>> pradhanparas at gmail.com> wrote: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> Well thats seems to theoretically correct. But right now >>>>>>>>>>>>>>>>>>> my cluser has use_virsh=0 and I don't have any issue untill my mebmers on >>>>>>>>>>>>>>>>>>> the failover domains are offline. So wondering what is it that clusvcadm -e >>>>>>>>>>>>>>>>>>> is looking when I don't use virsh . >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> On Fri, Nov 22, 2013 at 10:05 AM, emmanuel segura < >>>>>>>>>>>>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> If you used virt-install, i think you need use virsh, >>>>>>>>>>>>>>>>>>>> the cluster uses xm xen command if you got use_virsh=0 and virsh if you got >>>>>>>>>>>>>>>>>>>> use_virsh=1 in your cluster config >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> 2013/11/22 Paras pradhan >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> I use virt-install to create virtual machines. Is >>>>>>>>>>>>>>>>>>>>> there a way to debug why clusvcadm -e vm:guest1 is failing? vm.sh seems to >>>>>>>>>>>>>>>>>>>>> use virsh and my cluster.conf has use_virsh=0 >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> Thanks >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> Paras. >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> On Nov 21, 2013 5:53 PM, "emmanuel segura" < >>>>>>>>>>>>>>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> but did you configure your vm with xen tools or using >>>>>>>>>>>>>>>>>>>>>> virt-manager? >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> 2013/11/22 Paras pradhan >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> Well no i don't want to use virsh. But as we are >>>>>>>>>>>>>>>>>>>>>>> debugging with virsh now i found a strange issue. >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> I exported an xml file and imported to all nodes . >>>>>>>>>>>>>>>>>>>>>>> Ran >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> --- >>>>>>>>>>>>>>>>>>>>>>> name="guest1" path="/vms_c" >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> export OCF_RESKEY_name="guest1" ; >>>>>>>>>>>>>>>>>>>>>>> OCF_RESKEY_path="/vms_c" >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> set -x >>>>>>>>>>>>>>>>>>>>>>> /usr/share/cluster/vm.sh start >>>>>>>>>>>>>>>>>>>>>>> set +x >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>>>> vm starts now. BUT from a cluster service : cluvscam >>>>>>>>>>>>>>>>>>>>>>> -e vm:guest1 , same error. >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> So if i populate all my domains' config files to all >>>>>>>>>>>>>>>>>>>>>>> my cluser nodes and make use_virsh=1, then the issue is resolved. But this >>>>>>>>>>>>>>>>>>>>>>> is a lot of work for those who have hundreds of vm. >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> vm.start uses virsh . Is there a way to tell him not >>>>>>>>>>>>>>>>>>>>>>> use virsh? >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> Thanks >>>>>>>>>>>>>>>>>>>>>>> Paras. 
>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 5:19 PM, emmanuel segura < >>>>>>>>>>>>>>>>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> if you are using virsh for manage your vms, change >>>>>>>>>>>>>>>>>>>>>>>> this in your cluster.conf >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> from >>>>>>>>>>>>>>>>>>>>>>>> use_virsh="0" >>>>>>>>>>>>>>>>>>>>>>>> to >>>>>>>>>>>>>>>>>>>>>>>> use_virsh="1" >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> 2013/11/22 Paras pradhan >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> I think i found the problem. >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> virsh list --all does not show my vm . This is >>>>>>>>>>>>>>>>>>>>>>>>> because it was created on another node. and another node has it. Now I want >>>>>>>>>>>>>>>>>>>>>>>>> to start the service on a different node in which it was not created or >>>>>>>>>>>>>>>>>>>>>>>>> where virsh list --all does not have an entry. Is it possible to create >>>>>>>>>>>>>>>>>>>>>>>>> this entry using a xen config file?Looks like this is now a Xen issue >>>>>>>>>>>>>>>>>>>>>>>>> rather than a linux-cluster issue . :) >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> Paras. >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:58 PM, emmanuel segura < >>>>>>>>>>>>>>>>>>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> 1:did you verify your xen livemigration >>>>>>>>>>>>>>>>>>>>>>>>>> configuration? >>>>>>>>>>>>>>>>>>>>>>>>>> 2: where you vm disk reside? >>>>>>>>>>>>>>>>>>>>>>>>>> 3: can you see your vm defined on every cluster >>>>>>>>>>>>>>>>>>>>>>>>>> node with xm list? >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I get >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> Hypervisor: xen >>>>>>>>>>>>>>>>>>>>>>>>>>> Management tool: virsh >>>>>>>>>>>>>>>>>>>>>>>>>>> Hypervisor URI: xen:/// >>>>>>>>>>>>>>>>>>>>>>>>>>> Migration URI format: xenmigr://target_host/ >>>>>>>>>>>>>>>>>>>>>>>>>>> Virtual machine guest1 is error: failed to get >>>>>>>>>>>>>>>>>>>>>>>>>>> domain 'guest1' >>>>>>>>>>>>>>>>>>>>>>>>>>> error: Domain not found: >>>>>>>>>>>>>>>>>>>>>>>>>>> xenUnifiedDomainLookupByName >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> virsh -c xen:/// start guest1 >>>>>>>>>>>>>>>>>>>>>>>>>>> error: failed to get domain 'guest1' >>>>>>>>>>>>>>>>>>>>>>>>>>> error: Domain not found: >>>>>>>>>>>>>>>>>>>>>>>>>>> xenUnifiedDomainLookupByName >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> ++ printf '\033]0;%s@%s:%s\007' root vtst3 '~' >>>>>>>>>>>>>>>>>>>>>>>>>>> [root at cvtst3 ~]# set +x >>>>>>>>>>>>>>>>>>>>>>>>>>> + set +x >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> I am wondering why it failed to get domain . >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> -Paras. 
>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:43 PM, emmanuel segura >>>>>>>>>>>>>>>>>>>>>>>>>>> wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> yes >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan < >>>>>>>>>>>>>>>>>>>>>>>>>>>> pradhanparas at gmail.com> >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> Well it is guest1. Isn't it?. >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> max_restarts="0" name="guest1" path="/vms_c" recovery="restart" >>>>>>>>>>>>>>>>>>>>>>>>>>>>> restart_expire_time="0" use_virsh="0"/> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> It is a vm service if it matters. >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> -Paras. >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:22 PM, emmanuel >>>>>>>>>>>>>>>>>>>>>>>>>>>>> segura wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> use the servicename you defined in your >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cluster.conf >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan < >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> pradhanparas at gmail.com> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Says: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Running in test mode. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> No resource guest1 of type service found >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -Paras. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:07 PM, emmanuel >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> segura wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> rg_test test /etc/cluster/cluster.conf >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> start service guest1 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan < >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> pradhanparas at gmail.com> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> My failover domain looks like this: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> nofailback="1" ordered="1" restricted="0"> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I have vm service that uses this failover >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> domain. If my node vtst1 is offline, the service doesnot start on vtst3 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> which is 2nd in the priority. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I tried to start it with: clusvcadm -e >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> vm:guest1 and even with -F and -m option. 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> All i see is this error:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: start on vm "guest1" returned 1 (generic error)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: #68: Failed to start vm:guest1; return value: 1
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: Stopping service vm:guest1
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service vm:guest1 is recovering
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: #71: Relocating failed service vm:guest1
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service vm:guest1 is stopped
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> How do I debug?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Paras.
-- 
esta es mi vida e me la vivo hasta que dios quiera
-------------- next part --------------
An HTML attachment was scrubbed...
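On the "How do I debug?" question above, the two things that eventually move this thread forward are running the agent outside rgmanager and raising the rgmanager log level; a sketch, assuming the same paths used elsewhere in the thread (log_level is an attribute of the <rm> tag in cluster.conf):

---
# Reproduce the start outside rgmanager to see the agent's real error:
rg_test test /etc/cluster/cluster.conf start vm guest1

# After setting <rm log_level="7"> and bumping config_version in
# cluster.conf, push the change and watch the node doing the start:
ccs_tool update /etc/cluster/cluster.conf
tail -f /var/log/messages | grep clurgmgrd
---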
URL: 
From white.heron at yahoo.com  Thu Dec 19 13:13:21 2013
From: white.heron at yahoo.com (YB Tan Sri Dato Sri' Adli a.k.a Dell)
Date: Thu, 19 Dec 2013 05:13:21 -0800 (PST)
Subject: [Linux-cluster] failover domain and service start
In-Reply-To: 
Message-ID: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com>

please call me 60173623661 for linux system configuration.

Sent from Yahoo Mail for iPhone
-------------- next part -------------- An HTML attachment was scrubbed... URL: From emi2fast at gmail.com Thu Dec 19 13:52:28 2013 From: emi2fast at gmail.com (emmanuel segura) Date: Thu, 19 Dec 2013 14:52:28 +0100 Subject: [Linux-cluster] failover domain and service start In-Reply-To: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com> References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com> Message-ID: ? 2013/12/19 YB Tan Sri Dato Sri' Adli a.k.a Dell > please call me 60173623661 for linux system configuration. > > Sent from Yahoo Mail for iPhone > > ------------------------------ > * From: * emmanuel segura ; > * To: * linux clustering ; > * Subject: * Re: [Linux-cluster] failover domain and service start > * Sent: * Thu, Dec 19, 2013 9:05:18 AM > > other thing, change the rgmanager loglevel, for more information you > can use this link https://fedorahosted.org/cluster/wiki/RGManager > > > 2013/12/19 emmanuel segura > >> grep "/vms_c" /proc/mounts in every node >> >> >> 2013/12/18 Paras pradhan >> >>> I see this: >>> >>> Dec 18 16:17:18 cvtst3 clurgmgrd[13935]: Starting stopped >>> service vm:guest1 >>> Dec 18 16:17:19 cvtst3 clurgmgrd[13935]: start on vm "guest1" >>> returned 1 (generic error) >>> Dec 18 16:17:19 cvtst3 clurgmgrd[13935]: #68: Failed to start >>> vm:guest1; return value: 1 >>> Dec 18 16:17:19 cvtst3 clurgmgrd[13935]: Stopping service >>> vm:guest1 >>> >>> >>> >>> On Wed, Dec 18, 2013 at 4:39 PM, emmanuel segura wrote: >>> >>>> can you show the log? >>>> >>>> >>>> 2013/12/18 Paras pradhan >>>> >>>>> Added but same problem. vm does not start. >>>>> >>>>> -Paras. >>>>> >>>>> >>>>> On Wed, Dec 18, 2013 at 1:13 PM, emmanuel segura wrote: >>>>> >>>>>> from script vm.sh i saw it try to discovery the hypervisor you are >>>>>> using, with hypervisor="xen" you force the script to use xen >>>>>> >>>>>> >>>>>> 2013/12/18 Paras pradhan >>>>>> >>>>>>> The only parameter I don't have is: hypervisor="xen" >>>>>>> >>>>>>> Does it matter? >>>>>>> >>>>>>> This is what i have: >>>>>>> >>>>>>> >>>>>> name="guest1" path="/vms_c" recovery="restart" restart_expire_time="0" >>>>>>> use_virsh="0"/> >>>>>>> >>>>>>> -Paras. >>>>>>> >>>>>>> >>>>>>> On Wed, Dec 18, 2013 at 12:24 PM, emmanuel segura < >>>>>>> emi2fast at gmail.com> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Incriment the config version of cluster.conf and ccs_tool update >>>>>>>> /etc/cluster/cluster.conf >>>>>>>> >>>>>>>> >>>>>>>> 2013/12/18 Paras pradhan >>>>>>>> >>>>>>>>> Emmauel, >>>>>>>>> >>>>>>>>> With export OCF_RESKEY_name="guest1" ; export >>>>>>>>> OCF_RESKEY_path="/vms_c" ; export OCF_RESKEY_use_virsh=0 >>>>>>>>> >>>>>>>>> , I can start the vm using : /usr/share/cluster/vm.sh >>>>>>>>> >>>>>>>>> I am wondering how to make the changes to cluser.conf or other >>>>>>>>> files so that we can start the vm using clucvcsadm. >>>>>>>>> >>>>>>>>> -Thanks >>>>>>>>> sorry for the delay. >>>>>>>>> >>>>>>>>> Paras. >>>>>>>>> >>>>>>>>> >>>>>>>>> On Thu, Dec 5, 2013 at 12:36 PM, Paras pradhan < >>>>>>>>> pradhanparas at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Emmanue no.l I was busy on some other things I will test and let >>>>>>>>>> you know asap ! >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Thu, Dec 5, 2013 at 12:26 PM, emmanuel segura < >>>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> Hello Paras >>>>>>>>>>> >>>>>>>>>>> did you solved the problem? 
>>>>>>>>>>> >>>>>>>>>>> Thanks >>>>>>>>>>> Emmanuel >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> 2013/11/25 emmanuel segura >>>>>>>>>>> >>>>>>>>>>>> Hello Paras >>>>>>>>>>>> >>>>>>>>>>>> Maybe i found the solution, in function validate_all we got >>>>>>>>>>>> >>>>>>>>>>>> if [ -z "$OCF_RESKEY_hypervisor" ] || >>>>>>>>>>>> [ "$OCF_RESKEY_hypervisor" = "auto" ]; then >>>>>>>>>>>> export OCF_RESKEY_hypervisor="`virsh version | >>>>>>>>>>>> grep \"Running hypervisor:\" | awk '{print $3}' | tr A-Z a-z`" >>>>>>>>>>>> if [ -z "$OCF_RESKEY_hypervisor" ]; then >>>>>>>>>>>> ocf_log err "Could not determine >>>>>>>>>>>> Hypervisor" >>>>>>>>>>>> return $OCF_ERR_ARGS >>>>>>>>>>>> fi >>>>>>>>>>>> echo Hypervisor: $OCF_RESKEY_hypervisor >>>>>>>>>>>> fi >>>>>>>>>>>> >>>>>>>>>>>> # >>>>>>>>>>>> # Xen hypervisor only for when use_virsh = 0. >>>>>>>>>>>> # >>>>>>>>>>>> if [ "$OCF_RESKEY_use_virsh" = "0" ]; then >>>>>>>>>>>> if [ "$OCF_RESKEY_hypervisor" != "xen" ]; then >>>>>>>>>>>> ocf_log err "Cannot use >>>>>>>>>>>> $OCF_RESKEY_hypervisor hypervisor without using virsh" >>>>>>>>>>>> return $OCF_ERR_ARGS >>>>>>>>>>>> fi >>>>>>>>>>>> >>>>>>>>>>>> with this following enviroment variables, when i tested by hand >>>>>>>>>>>> the angent uses xm commands >>>>>>>>>>>> >>>>>>>>>>>> env | grep OCF >>>>>>>>>>>> OCF_RESKEY_hypervisor=xen >>>>>>>>>>>> OCF_RESKEY_path=/vms_c >>>>>>>>>>>> OCF_RESKEY_name=guest1 >>>>>>>>>>>> OCF_RESKEY_use_virsh=0 >>>>>>>>>>>> >>>>>>>>>>>> [root at client ~]# /usr/share/cluster/vm.sh status >>>>>>>>>>>> Management tool: xm >>>>>>>>>>>> Cannot find 'xm'; is it installed? >>>>>>>>>>>> [vm.sh] Cannot find 'xm'; is it installed? >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> I don't have xen installed to test it >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> if [ -n "$OCF_RESKEY_xmlfile" ]; then >>>>>>>>>>>> ocf_log err "Cannot use xmlfile if >>>>>>>>>>>> use_virsh is set to 0" >>>>>>>>>>>> return $OCF_ERR_ARGS >>>>>>>>>>>> fi >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> 2013/11/25 emmanuel segura >>>>>>>>>>>> >>>>>>>>>>>>> Hello paras >>>>>>>>>>>>> >>>>>>>>>>>>> missing the export command in front of variables, the correct >>>>>>>>>>>>> way is this >>>>>>>>>>>>> >>>>>>>>>>>>> export OCF_RESKEY_name="guest1" ; export >>>>>>>>>>>>> OCF_RESKEY_path="/vms_c" ; export OCF_RESKEY_use_virsh=0 >>>>>>>>>>>>> [root at client ~]# env | grep OCF >>>>>>>>>>>>> OCF_RESKEY_path=/vms_c >>>>>>>>>>>>> OCF_RESKEY_name=guest1 >>>>>>>>>>>>> OCF_RESKEY_use_virsh=0 >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> 2013/11/25 emmanuel segura >>>>>>>>>>>>> >>>>>>>>>>>>>> Hello Paras >>>>>>>>>>>>>> >>>>>>>>>>>>>> I have a centos 6, i don't know if it is different on redhat >>>>>>>>>>>>>> 5, but i saw in the script vm.sh calls do_start function when start >>>>>>>>>>>>>> parameter is given >>>>>>>>>>>>>> >>>>>>>>>>>>>> do_start() >>>>>>>>>>>>>> { >>>>>>>>>>>>>> if [ "$OCF_RESKEY_use_virsh" = "1" ]; then >>>>>>>>>>>>>> do_virsh_start $* >>>>>>>>>>>>>> return $? >>>>>>>>>>>>>> fi >>>>>>>>>>>>>> >>>>>>>>>>>>>> do_xm_start $* >>>>>>>>>>>>>> return $? >>>>>>>>>>>>>> } >>>>>>>>>>>>>> >>>>>>>>>>>>>> i don't know why because the vm.sh uses virsh when you launch >>>>>>>>>>>>>> the script by hand :( >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> 2013/11/25 Paras pradhan >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Looks like use_virsh=0 has no effect. 
>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>> [root at cvtst3 ~]# export OCF_RESKEY_name="guest1" ; >>>>>>>>>>>>>>> OCF_RESKEY_path="/vms_c" ; OCF_RESKEY_use_virsh=0 >>>>>>>>>>>>>>> [root at cvtst3 ~]# set -x >>>>>>>>>>>>>>> ++ printf '\033]0;%s@%s:%s\007' root cvtst3 '~' >>>>>>>>>>>>>>> [root at cvtst3 ~]# /usr/share/cluster/vm.sh start >>>>>>>>>>>>>>> + /usr/share/cluster/vm.sh start >>>>>>>>>>>>>>> Hypervisor: xen >>>>>>>>>>>>>>> Management tool: virsh >>>>>>>>>>>>>>> Hypervisor URI: xen:/// >>>>>>>>>>>>>>> Migration URI format: xenmigr://target_host/ >>>>>>>>>>>>>>> Virtual machine guest1 is error: failed to get domain >>>>>>>>>>>>>>> 'guest1' >>>>>>>>>>>>>>> error: Domain not found: xenUnifiedDomainLookupByName >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> virsh -c xen:/// start guest1 >>>>>>>>>>>>>>> error: failed to get domain 'guest1' >>>>>>>>>>>>>>> error: Domain not found: xenUnifiedDomainLookupByName >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ++ printf '\033]0;%s@%s:%s\007' root cvtst3 '~' >>>>>>>>>>>>>>> [root at cvtst3 ~]# set +x >>>>>>>>>>>>>>> + set +x >>>>>>>>>>>>>>> --- >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> -Paras. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Fri, Nov 22, 2013 at 5:22 PM, emmanuel segura < >>>>>>>>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Hellos Paras >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Stop the vm and retry to start the vm with following >>>>>>>>>>>>>>>> commands and if you got some error show it >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c"; OCF_RESKEY_use_virsh=0 >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> set -x >>>>>>>>>>>>>>>> /usr/share/cluster/vm.sh start >>>>>>>>>>>>>>>> set +x >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> 2013/11/22 Paras pradhan >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> I found the workaround to my issue. What i did is: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> run the vm using xm and then start using cluvscadm. This >>>>>>>>>>>>>>>>> works for me for the time being but I am not sure what is causing this. >>>>>>>>>>>>>>>>> This is what I did >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> xm create /vms_c/guest1 >>>>>>>>>>>>>>>>> clusvcadm -e vm: guest1 ( This detects that guest1 is up >>>>>>>>>>>>>>>>> and quickly changes its status to success) >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Although i used virt-install, it also create a xem format >>>>>>>>>>>>>>>>> configuration file and since use_virsh=0 it should be able to use this xen >>>>>>>>>>>>>>>>> format config file. This is my vm configuration: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> --- >>>>>>>>>>>>>>>>> name = "guest1" >>>>>>>>>>>>>>>>> maxmem = 2048 >>>>>>>>>>>>>>>>> memory = 512 >>>>>>>>>>>>>>>>> vcpus = 1 >>>>>>>>>>>>>>>>> #cpus="1-2" >>>>>>>>>>>>>>>>> bootloader = "/usr/bin/pygrub" >>>>>>>>>>>>>>>>> on_poweroff = "destroy" >>>>>>>>>>>>>>>>> on_reboot = "restart" >>>>>>>>>>>>>>>>> on_crash = "restart" >>>>>>>>>>>>>>>>> vfb = [ ] >>>>>>>>>>>>>>>>> disk = [ "tap:aio:/vms_c/guest1.img,xvda,w", >>>>>>>>>>>>>>>>> "tap:aio:/vms_c/guest1-disk.img,xvdb,w" ] >>>>>>>>>>>>>>>>> vif = [ "rate=10MB/s,mac=00:16:3e:6b:be:71,bridge=xenbr0" ] >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> --- >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Thanks for you help Emmanuel ! Really appreciate it. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> -Paras. 
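The workaround above depends on the xm-style config file being readable on whichever node starts the guest, and on the cluster seeing the current cluster.conf. A sketch of making both hold cluster-wide (node names are placeholders; the copy step only matters if /vms_c is not shared storage):

---
# 1. Make the xm config available everywhere (skip if /vms_c is shared):
for node in cvtst1 cvtst2 cvtst3; do
    scp /vms_c/guest1 "$node":/vms_c/
done

# 2. After editing cluster.conf (e.g. adding hypervisor="xen" to the vm
#    resource), raise config_version, then distribute and activate it:
ccs_tool update /etc/cluster/cluster.conf
cman_tool version                 # should report the new config version

# 3. Let rgmanager start the guest:
clusvcadm -e vm:guest1
---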
>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> On Fri, Nov 22, 2013 at 11:10 AM, emmanuel segura < >>>>>>>>>>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> ok, but your vm doesn't start on others nodes, i think, >>>>>>>>>>>>>>>>>> for configuration problems >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> ================================================================ >>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: start on >>>>>>>>>>>>>>>>>> vm "guest1" returned 1 (generic error) >>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: #68: >>>>>>>>>>>>>>>>>> Failed to start vm:guest1; return value: 1 >>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: Stopping >>>>>>>>>>>>>>>>>> service vm:guest1 >>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service >>>>>>>>>>>>>>>>>> vm:guest1 is recovering >>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: #71: >>>>>>>>>>>>>>>>>> Relocating failed service vm:guest1 >>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service >>>>>>>>>>>>>>>>>> vm:guest1 is stopped >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> ================================================================ >>>>>>>>>>>>>>>>>> in few words, try in every cluster node >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c" >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> set -x >>>>>>>>>>>>>>>>>> /usr/share/cluster/vm.sh start >>>>>>>>>>>>>>>>>> /usr/share/cluster/vm.sh stop >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> after you check if your vm can start and stop on every >>>>>>>>>>>>>>>>>> cluster node, >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> /usr/share/cluster/vm.sh start >>>>>>>>>>>>>>>>>> /usr/share/cluster/vm.sh migrate name_of_a_cluster_node >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> can you show me your vm configuration under /vms_c? >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Thanks >>>>>>>>>>>>>>>>>> Emmanuel >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> 2013/11/22 Paras pradhan >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> And also to test I made use_virsh=1 , same problem. The >>>>>>>>>>>>>>>>>>> vm does not start up if the FD domains are offline. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> -Paras. >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>> On Fri, Nov 22, 2013 at 10:37 AM, Paras pradhan < >>>>>>>>>>>>>>>>>>> pradhanparas at gmail.com> wrote: >>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> Well thats seems to theoretically correct. But right >>>>>>>>>>>>>>>>>>>> now my cluser has use_virsh=0 and I don't have any issue untill my mebmers >>>>>>>>>>>>>>>>>>>> on the failover domains are offline. So wondering what is it that clusvcadm >>>>>>>>>>>>>>>>>>>> -e is looking when I don't use virsh . >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>> On Fri, Nov 22, 2013 at 10:05 AM, emmanuel segura < >>>>>>>>>>>>>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> If you used virt-install, i think you need use virsh, >>>>>>>>>>>>>>>>>>>>> the cluster uses xm xen command if you got use_virsh=0 and virsh if you got >>>>>>>>>>>>>>>>>>>>> use_virsh=1 in your cluster config >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>> 2013/11/22 Paras pradhan >>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> I use virt-install to create virtual machines. Is >>>>>>>>>>>>>>>>>>>>>> there a way to debug why clusvcadm -e vm:guest1 is failing? 
vm.sh seems to >>>>>>>>>>>>>>>>>>>>>> use virsh and my cluster.conf has use_virsh=0 >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> Thanks >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> Paras. >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>> On Nov 21, 2013 5:53 PM, "emmanuel segura" < >>>>>>>>>>>>>>>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> but did you configure your vm with xen tools or >>>>>>>>>>>>>>>>>>>>>>> using virt-manager? >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>> 2013/11/22 Paras pradhan >>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> Well no i don't want to use virsh. But as we are >>>>>>>>>>>>>>>>>>>>>>>> debugging with virsh now i found a strange issue. >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> I exported an xml file and imported to all nodes . >>>>>>>>>>>>>>>>>>>>>>>> Ran >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> --- >>>>>>>>>>>>>>>>>>>>>>>> name="guest1" path="/vms_c" >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> export OCF_RESKEY_name="guest1" ; >>>>>>>>>>>>>>>>>>>>>>>> OCF_RESKEY_path="/vms_c" >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> set -x >>>>>>>>>>>>>>>>>>>>>>>> /usr/share/cluster/vm.sh start >>>>>>>>>>>>>>>>>>>>>>>> set +x >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>>>>> vm starts now. BUT from a cluster service : >>>>>>>>>>>>>>>>>>>>>>>> cluvscam -e vm:guest1 , same error. >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> So if i populate all my domains' config files to >>>>>>>>>>>>>>>>>>>>>>>> all my cluser nodes and make use_virsh=1, then the issue is resolved. But >>>>>>>>>>>>>>>>>>>>>>>> this is a lot of work for those who have hundreds of vm. >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> vm.start uses virsh . Is there a way to tell him >>>>>>>>>>>>>>>>>>>>>>>> not use virsh? >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> Thanks >>>>>>>>>>>>>>>>>>>>>>>> Paras. >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 5:19 PM, emmanuel segura < >>>>>>>>>>>>>>>>>>>>>>>> emi2fast at gmail.com> wrote: >>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> if you are using virsh for manage your vms, change >>>>>>>>>>>>>>>>>>>>>>>>> this in your cluster.conf >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> from >>>>>>>>>>>>>>>>>>>>>>>>> use_virsh="0" >>>>>>>>>>>>>>>>>>>>>>>>> to >>>>>>>>>>>>>>>>>>>>>>>>> use_virsh="1" >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>> 2013/11/22 Paras pradhan >>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> I think i found the problem. >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> virsh list --all does not show my vm . This is >>>>>>>>>>>>>>>>>>>>>>>>>> because it was created on another node. and another node has it. Now I want >>>>>>>>>>>>>>>>>>>>>>>>>> to start the service on a different node in which it was not created or >>>>>>>>>>>>>>>>>>>>>>>>>> where virsh list --all does not have an entry. Is it possible to create >>>>>>>>>>>>>>>>>>>>>>>>>> this entry using a xen config file?Looks like this is now a Xen issue >>>>>>>>>>>>>>>>>>>>>>>>>> rather than a linux-cluster issue . :) >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> Paras. 
>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:58 PM, emmanuel segura >>>>>>>>>>>>>>>>>>>>>>>>>> wrote: >>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> 1:did you verify your xen livemigration >>>>>>>>>>>>>>>>>>>>>>>>>>> configuration? >>>>>>>>>>>>>>>>>>>>>>>>>>> 2: where you vm disk reside? >>>>>>>>>>>>>>>>>>>>>>>>>>> 3: can you see your vm defined on every cluster >>>>>>>>>>>>>>>>>>>>>>>>>>> node with xm list? >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan >>>>>>>>>>>>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> This is what I get >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> Hypervisor: xen >>>>>>>>>>>>>>>>>>>>>>>>>>>> Management tool: virsh >>>>>>>>>>>>>>>>>>>>>>>>>>>> Hypervisor URI: xen:/// >>>>>>>>>>>>>>>>>>>>>>>>>>>> Migration URI format: xenmigr://target_host/ >>>>>>>>>>>>>>>>>>>>>>>>>>>> Virtual machine guest1 is error: failed to get >>>>>>>>>>>>>>>>>>>>>>>>>>>> domain 'guest1' >>>>>>>>>>>>>>>>>>>>>>>>>>>> error: Domain not found: >>>>>>>>>>>>>>>>>>>>>>>>>>>> xenUnifiedDomainLookupByName >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> virsh -c xen:/// start guest1 >>>>>>>>>>>>>>>>>>>>>>>>>>>> error: failed to get domain 'guest1' >>>>>>>>>>>>>>>>>>>>>>>>>>>> error: Domain not found: >>>>>>>>>>>>>>>>>>>>>>>>>>>> xenUnifiedDomainLookupByName >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> ++ printf '\033]0;%s@%s:%s\007' root vtst3 '~' >>>>>>>>>>>>>>>>>>>>>>>>>>>> [root at cvtst3 ~]# set +x >>>>>>>>>>>>>>>>>>>>>>>>>>>> + set +x >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> I am wondering why it failed to get domain . >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> -Paras. >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:43 PM, emmanuel >>>>>>>>>>>>>>>>>>>>>>>>>>>> segura wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> yes >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan < >>>>>>>>>>>>>>>>>>>>>>>>>>>>> pradhanparas at gmail.com> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Well it is guest1. Isn't it?. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>> exclusive="0" max_restarts="0" name="guest1" path="/vms_c" >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> recovery="restart" restart_expire_time="0" use_virsh="0"/> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> It is a vm service if it matters. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -Paras. 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:22 PM, emmanuel >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> segura wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> use the servicename you defined in your >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cluster.conf >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan < >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> pradhanparas at gmail.com> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Says: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Running in test mode. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> No resource guest1 of type service found >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> -Paras. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Thu, Nov 21, 2013 at 4:07 PM, emmanuel >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> segura wrote: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> rg_test test /etc/cluster/cluster.conf >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> start service guest1 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 2013/11/21 Paras pradhan < >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> pradhanparas at gmail.com> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> My failover domain looks like this: >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> nofailback="1" ordered="1" restricted="0"> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I have vm service that uses this failover >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> domain. If my node vtst1 is offline, the service doesnot start on vtst3 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> which is 2nd in the priority. >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I tried to start it with: clusvcadm -e >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> vm:guest1 and even with -F and -m option. 
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> All i see is this error:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: start on vm "guest1" returned 1 (generic error)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: #68: Failed to start vm:guest1; return value: 1
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:29 vtst3 clurgmgrd[13911]: Stopping service vm:guest1
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service vm:guest1 is recovering
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: #71: Relocating failed service vm:guest1
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Nov 21 15:40:35 vtst3 clurgmgrd[13911]: Service vm:guest1 is stopped
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> How do I debug?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Paras.
Linux-cluster at redhat.com >>> https://www.redhat.com/mailman/listinfo/linux-cluster >>> >> >> >> >> -- >> esta es mi vida e me la vivo hasta que dios quiera >> > > > > -- > esta es mi vida e me la vivo hasta que dios quiera > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -- esta es mi vida e me la vivo hasta que dios quiera -------------- next part -------------- An HTML attachment was scrubbed... URL: From jplorier at gmail.com Thu Dec 19 15:08:37 2013 From: jplorier at gmail.com (jplorier at gmail.com) Date: Thu, 19 Dec 2013 13:08:37 -0200 Subject: [Linux-cluster] Can someone fix the digest Message-ID: Hi, The list has been sending several 1 message digest a day and it's getting annoying. As I'd like to stay in the list, I ask someone with the power to fix the config. Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at alteeve.ca Thu Dec 19 16:51:41 2013 From: lists at alteeve.ca (Digimer) Date: Thu, 19 Dec 2013 11:51:41 -0500 Subject: [Linux-cluster] failover domain and service start In-Reply-To: References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com> Message-ID: <52B3241D.5040700@alteeve.ca> On 19/12/13 08:52 AM, emmanuel segura wrote: > ? Sounds like classic spam to me. -- Digimer Papers and Projects: https://alteeve.ca/w/ What if the cure for cancer is trapped in the mind of a person without access to education? From emi2fast at gmail.com Thu Dec 19 17:18:47 2013 From: emi2fast at gmail.com (emmanuel segura) Date: Thu, 19 Dec 2013 18:18:47 +0100 Subject: [Linux-cluster] failover domain and service start In-Reply-To: <52B3241D.5040700@alteeve.ca> References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com> <52B3241D.5040700@alteeve.ca> Message-ID: yes i think that too 2013/12/19 Digimer > On 19/12/13 08:52 AM, emmanuel segura wrote: > >> ? >> > > Sounds like classic spam to me. > > -- > Digimer > Papers and Projects: https://alteeve.ca/w/ > What if the cure for cancer is trapped in the mind of a person without > access to education? > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -- esta es mi vida e me la vivo hasta que dios quiera -------------- next part -------------- An HTML attachment was scrubbed... URL: From pradhanparas at gmail.com Thu Dec 19 20:58:30 2013 From: pradhanparas at gmail.com (Paras pradhan) Date: Thu, 19 Dec 2013 14:58:30 -0600 Subject: [Linux-cluster] failover domain and service start In-Reply-To: References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com> <52B3241D.5040700@alteeve.ca> Message-ID: Even If i set the log level to 0 , I can't see more logs than what I've pasted earlier. May be something wrong with my setup or what. /proc/mounts is fine . -Paras. On Thu, Dec 19, 2013 at 11:18 AM, emmanuel segura wrote: > yes i think that too > > > 2013/12/19 Digimer > >> On 19/12/13 08:52 AM, emmanuel segura wrote: >> >>> ? >>> >> >> Sounds like classic spam to me. >> >> -- >> Digimer >> Papers and Projects: https://alteeve.ca/w/ >> What if the cure for cancer is trapped in the mind of a person without >> access to education? 
>> >> >> -- >> Linux-cluster mailing list >> Linux-cluster at redhat.com >> https://www.redhat.com/mailman/listinfo/linux-cluster >> > > > > -- > esta es mi vida e me la vivo hasta que dios quiera > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emi2fast at gmail.com Thu Dec 19 21:26:10 2013 From: emi2fast at gmail.com (emmanuel segura) Date: Thu, 19 Dec 2013 22:26:10 +0100 Subject: [Linux-cluster] failover domain and service start In-Reply-To: References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com> <52B3241D.5040700@alteeve.ca> Message-ID: set the log level to 7 and after incriment the config version and ccs_tool update /etc/cluster/cluster.conf 2013/12/19 Paras pradhan > Even If i set the log level to 0 , I can't see more logs than what I've > pasted earlier. May be something wrong with my setup or what. > > /proc/mounts is fine . > > -Paras. > > > On Thu, Dec 19, 2013 at 11:18 AM, emmanuel segura wrote: > >> yes i think that too >> >> >> 2013/12/19 Digimer >> >>> On 19/12/13 08:52 AM, emmanuel segura wrote: >>> >>>> ? >>>> >>> >>> Sounds like classic spam to me. >>> >>> -- >>> Digimer >>> Papers and Projects: https://alteeve.ca/w/ >>> What if the cure for cancer is trapped in the mind of a person without >>> access to education? >>> >>> >>> -- >>> Linux-cluster mailing list >>> Linux-cluster at redhat.com >>> https://www.redhat.com/mailman/listinfo/linux-cluster >>> >> >> >> >> -- >> esta es mi vida e me la vivo hasta que dios quiera >> >> -- >> Linux-cluster mailing list >> Linux-cluster at redhat.com >> https://www.redhat.com/mailman/listinfo/linux-cluster >> > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -- esta es mi vida e me la vivo hasta que dios quiera -------------- next part -------------- An HTML attachment was scrubbed... URL: From pradhanparas at gmail.com Thu Dec 19 21:35:39 2013 From: pradhanparas at gmail.com (Paras pradhan) Date: Thu, 19 Dec 2013 15:35:39 -0600 Subject: [Linux-cluster] failover domain and service start In-Reply-To: References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com> <52B3241D.5040700@alteeve.ca> Message-ID: emmanuel that is what I've done. I don;t see anything other than start domian failure. On Thu, Dec 19, 2013 at 3:26 PM, emmanuel segura wrote: > set the log level to 7 and after incriment the config version and ccs_tool > update /etc/cluster/cluster.conf > > > > > 2013/12/19 Paras pradhan > >> Even If i set the log level to 0 , I can't see more logs than what I've >> pasted earlier. May be something wrong with my setup or what. >> >> /proc/mounts is fine . >> >> -Paras. >> >> >> On Thu, Dec 19, 2013 at 11:18 AM, emmanuel segura wrote: >> >>> yes i think that too >>> >>> >>> 2013/12/19 Digimer >>> >>>> On 19/12/13 08:52 AM, emmanuel segura wrote: >>>> >>>>> ? >>>>> >>>> >>>> Sounds like classic spam to me. >>>> >>>> -- >>>> Digimer >>>> Papers and Projects: https://alteeve.ca/w/ >>>> What if the cure for cancer is trapped in the mind of a person without >>>> access to education? 
>>>> >>>> >>>> -- >>>> Linux-cluster mailing list >>>> Linux-cluster at redhat.com >>>> https://www.redhat.com/mailman/listinfo/linux-cluster >>>> >>> >>> >>> >>> -- >>> esta es mi vida e me la vivo hasta que dios quiera >>> >>> -- >>> Linux-cluster mailing list >>> Linux-cluster at redhat.com >>> https://www.redhat.com/mailman/listinfo/linux-cluster >>> >> >> >> -- >> Linux-cluster mailing list >> Linux-cluster at redhat.com >> https://www.redhat.com/mailman/listinfo/linux-cluster >> > > > > -- > esta es mi vida e me la vivo hasta que dios quiera > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emi2fast at gmail.com Thu Dec 19 21:59:09 2013 From: emi2fast at gmail.com (emmanuel segura) Date: Thu, 19 Dec 2013 22:59:09 +0100 Subject: [Linux-cluster] failover domain and service start In-Reply-To: References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com> <52B3241D.5040700@alteeve.ca> Message-ID: *rg_test test /etc/cluster/cluster.conf start vm guest1* 2013/12/19 Paras pradhan > emmanuel that is what I've done. I don;t see anything other than start > domian failure. > > > > > On Thu, Dec 19, 2013 at 3:26 PM, emmanuel segura wrote: > >> set the log level to 7 and after incriment the config version and >> ccs_tool update /etc/cluster/cluster.conf >> >> >> >> >> 2013/12/19 Paras pradhan >> >>> Even If i set the log level to 0 , I can't see more logs than what I've >>> pasted earlier. May be something wrong with my setup or what. >>> >>> /proc/mounts is fine . >>> >>> -Paras. >>> >>> >>> On Thu, Dec 19, 2013 at 11:18 AM, emmanuel segura wrote: >>> >>>> yes i think that too >>>> >>>> >>>> 2013/12/19 Digimer >>>> >>>>> On 19/12/13 08:52 AM, emmanuel segura wrote: >>>>> >>>>>> ? >>>>>> >>>>> >>>>> Sounds like classic spam to me. >>>>> >>>>> -- >>>>> Digimer >>>>> Papers and Projects: https://alteeve.ca/w/ >>>>> What if the cure for cancer is trapped in the mind of a person without >>>>> access to education? >>>>> >>>>> >>>>> -- >>>>> Linux-cluster mailing list >>>>> Linux-cluster at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/linux-cluster >>>>> >>>> >>>> >>>> >>>> -- >>>> esta es mi vida e me la vivo hasta que dios quiera >>>> >>>> -- >>>> Linux-cluster mailing list >>>> Linux-cluster at redhat.com >>>> https://www.redhat.com/mailman/listinfo/linux-cluster >>>> >>> >>> >>> -- >>> Linux-cluster mailing list >>> Linux-cluster at redhat.com >>> https://www.redhat.com/mailman/listinfo/linux-cluster >>> >> >> >> >> -- >> esta es mi vida e me la vivo hasta que dios quiera >> >> -- >> Linux-cluster mailing list >> Linux-cluster at redhat.com >> https://www.redhat.com/mailman/listinfo/linux-cluster >> > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -- esta es mi vida e me la vivo hasta que dios quiera -------------- next part -------------- An HTML attachment was scrubbed... URL: From pradhanparas at gmail.com Thu Dec 19 22:19:09 2013 From: pradhanparas at gmail.com (Paras pradhan) Date: Thu, 19 Dec 2013 16:19:09 -0600 Subject: [Linux-cluster] failover domain and service start In-Reply-To: References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com> <52B3241D.5040700@alteeve.ca> Message-ID: Thats a success. -- [root at cvtst3 log]# rg_test test /etc/cluster/cluster.conf start vm guest1 Running in test mode. 
From emi2fast at gmail.com  Thu Dec 19 22:28:20 2013
From: emi2fast at gmail.com (emmanuel segura)
Date: Thu, 19 Dec 2013 23:28:20 +0100
Subject: [Linux-cluster] failover domain and service start
In-Reply-To:
References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com>
	<52B3241D.5040700@alteeve.ca>
Message-ID:

clusvcadm -e -M vm:guest1

2013/12/19 Paras pradhan
> That's a success:
> [...]
> Start of guest1 complete

-- 
esta es mi vida e me la vivo hasta que dios quiera
From pradhanparas at gmail.com  Thu Dec 19 22:39:37 2013
From: pradhanparas at gmail.com (Paras pradhan)
Date: Thu, 19 Dec 2013 16:39:37 -0600
Subject: [Linux-cluster] failover domain and service start
In-Reply-To:
References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com>
	<52B3241D.5040700@alteeve.ca>
Message-ID:

-M is for migrate. I think you meant this:

clusvcadm -M vm:guest1 -m cvtst3.uark.edu

but that failed:

Trying to migrate vm:guest1 to cvtst3.uark.edu...Temporary failure; try again

On Thu, Dec 19, 2013 at 4:28 PM, emmanuel segura wrote:
> clusvcadm -e -M vm:guest1
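A quick reference for the clusvcadm flags being mixed up here (node names
below are placeholders). Note that -M only applies to vm resources and
live-migrates a guest that is already running, which is one plausible
reason for the "Temporary failure" above:

    clusvcadm -e vm:guest1             # enable (start) the service on the best available node
    clusvcadm -e vm:guest1 -m node3    # enable it on a specific member
    clusvcadm -r vm:guest1 -m node2    # relocate: stop, then restart on another member
    clusvcadm -M vm:guest1 -m node2    # live-migrate a running vm service
    clusvcadm -d vm:guest1             # disable (stop) the service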
From ghostsarespooky at gmail.com  Thu Dec 19 23:21:47 2013
From: ghostsarespooky at gmail.com (Tim)
Date: Thu, 19 Dec 2013 16:21:47 -0700
Subject: [Linux-cluster] VMware Licencing for SOAP fence agent
Message-ID:

Hello,

I am hoping there is someone on this mailing list who can confirm that
the "vSphere API" licence feature is what the fence_vmware_soap agent
uses in order to fence VMs in ESX 5.1. (RHEL 6.4 w/Cluster Suite)

Thanks!

Timothy Graham
RHCE 111-195-376
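Whatever the licence SKU turns out to be, one way to verify that the
vSphere SOAP API is usable with a given licence and account is to run the
agent by hand from a cluster node. The host, credentials and VM name below
are placeholders:

    fence_vmware_soap -z -a vcenter.example.com -l fenceuser -p secret -o list
    fence_vmware_soap -z -a vcenter.example.com -l fenceuser -p secret -n guest-vm -o status

Here -z enables SSL, -o list enumerates the VMs the account can see, and
-o status with -n queries one VM's power state. If list and status work,
the off/on/reboot actions the cluster needs should go through the same API
session.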
From emi2fast at gmail.com  Thu Dec 19 23:23:41 2013
From: emi2fast at gmail.com (emmanuel segura)
Date: Fri, 20 Dec 2013 00:23:41 +0100
Subject: [Linux-cluster] failover domain and service start
In-Reply-To:
References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com>
	<52B3241D.5040700@alteeve.ca>
Message-ID:

OK, now it's clear: the problem is that you can't start the vm through
the cluster.

In /etc/sysconfig/cluster (create it if it doesn't exist) set
RGMGR_OPTS="-d" and restart rgmanager. After that, run the test again.

2013/12/19 Paras pradhan
> -M is for migrate. I think you meant this:
> [...]
> Trying to migrate vm:guest1 to cvtst3.uark.edu...Temporary failure; try again

-- 
esta es mi vida e me la vivo hasta que dios quiera
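Spelled out, the debug setup suggested above amounts to this (the
rgmanager init script sources /etc/sysconfig/cluster at startup):

    # /etc/sysconfig/cluster
    RGMGR_OPTS="-d"            # pass -d to clurgmgrd: debug-level logging

    service rgmanager restart  # repeat on every node so all of them log at debug level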
From pradhanparas at gmail.com  Fri Dec 20 17:50:15 2013
From: pradhanparas at gmail.com (Paras pradhan)
Date: Fri, 20 Dec 2013 11:50:15 -0600
Subject: [Linux-cluster] failover domain and service start
In-Reply-To:
References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com>
	<52B3241D.5040700@alteeve.ca>
Message-ID:

I have it in place. I found out this issue is only on one node. This is
what I've tried; all three cluster nodes are in a single failover domain.

case1: stop all cluster services on node1 and node2; start the vm service
on node3. Fails.

case2: stop all services on node2 and node3; start the vm service on
node1. Service starts!

case3: stop all services on node1 and node3; start the service on node2.
Service starts.

All the nodes are running the same kernel and the same cluster packages.
I changed the failover domain for this service, but it's the same.

Maybe node3 has some unknown issue; I am not sure if I should rebuild it.

-Paras.

On Thu, Dec 19, 2013 at 5:23 PM, emmanuel segura wrote:
> In /etc/sysconfig/cluster (create it if it doesn't exist) set
> RGMGR_OPTS="-d" and restart rgmanager. After that, run the test again.
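For context, a single ordered failover domain over three nodes looks
roughly like this in cluster.conf. Only cvtst3.uark.edu is named in the
thread; the other node names are invented, and the priorities follow the
later statement that node3 is priority 1 (a lower number means a more
preferred node):

    <rm>
        <failoverdomains>
            <failoverdomain name="vmdomain" ordered="1" restricted="1" nofailback="0">
                <failoverdomainnode name="cvtst3.uark.edu" priority="1"/>
                <failoverdomainnode name="cvtst1.uark.edu" priority="2"/>
                <failoverdomainnode name="cvtst2.uark.edu" priority="3"/>
            </failoverdomain>
        </failoverdomains>
        <vm name="guest1" domain="vmdomain" path="/vms_c" recovery="restart"/>
    </rm>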
From emi2fast at gmail.com  Fri Dec 20 18:02:41 2013
From: emi2fast at gmail.com (emmanuel segura)
Date: Fri, 20 Dec 2013 19:02:41 +0100
Subject: [Linux-cluster] failover domain and service start
In-Reply-To:
References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com>
	<52B3241D.5040700@alteeve.ca>
Message-ID:

Hello Paras,

What kind of filesystem are you using for this /vms_c? Sorry if I missed
it earlier.

Thanks

2013/12/20 Paras pradhan
> I have it in place. I found out this issue is only on one node.
> [...]
> Maybe node3 has some unknown issue; I am not sure if I should rebuild it.

-- 
esta es mi vida e me la vivo hasta que dios quiera

From pradhanparas at gmail.com  Fri Dec 20 18:10:33 2013
From: pradhanparas at gmail.com (Paras pradhan)
Date: Fri, 20 Dec 2013 12:10:33 -0600
Subject: [Linux-cluster] failover domain and service start
In-Reply-To:
References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com>
	<52B3241D.5040700@alteeve.ca>
Message-ID:

That is gfs2.

On Fri, Dec 20, 2013 at 12:02 PM, emmanuel segura wrote:
> What kind of filesystem are you using for this /vms_c?
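Since the guest's config and disk live on a shared gfs2 mount, a quick
per-node sanity check (especially on the failing node3) might look like
this; the paths come from the rg_test output earlier in the thread:

    grep vms_c /proc/mounts    # /vms_c must be mounted (type gfs2) on every node
    cman_tool services         # fence, dlm and gfs groups should all be in state "run"
    ls -l /vms_c/guest1        # the Xen config file that vm.sh hands to xm create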
From emi2fast at gmail.com  Fri Dec 20 18:23:36 2013
From: emi2fast at gmail.com (emmanuel segura)
Date: Fri, 20 Dec 2013 19:23:36 +0100
Subject: [Linux-cluster] failover domain and service start
In-Reply-To:
References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com>
	<52B3241D.5040700@alteeve.ca>
Message-ID:

Did you set RGMGR_OPTS="-d" and restart rgmanager on every cluster node?

2013/12/20 Paras pradhan
> That is gfs2.

-- 
esta es mi vida e me la vivo hasta que dios quiera

From pradhanparas at gmail.com  Fri Dec 20 18:38:30 2013
From: pradhanparas at gmail.com (Paras pradhan)
Date: Fri, 20 Dec 2013 12:38:30 -0600
Subject: [Linux-cluster] failover domain and service start
In-Reply-To:
References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com>
	<52B3241D.5040700@alteeve.ca>
Message-ID:

Yes.

On Fri, Dec 20, 2013 at 12:23 PM, emmanuel segura wrote:
> Did you set RGMGR_OPTS="-d" and restart rgmanager on every cluster node?
From emi2fast at gmail.com  Fri Dec 20 18:54:16 2013
From: emi2fast at gmail.com (emmanuel segura)
Date: Fri, 20 Dec 2013 19:54:16 +0100
Subject: [Linux-cluster] failover domain and service start
In-Reply-To:
References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com>
	<52B3241D.5040700@alteeve.ca>
Message-ID:

What do you see in your log?

2013/12/20 Paras pradhan
> Yes.

-- 
esta es mi vida e me la vivo hasta que dios quiera
mi vida e me la vivo hasta que dios quiera >>>>>> >>>>>> -- >>>>>> Linux-cluster mailing list >>>>>> Linux-cluster at redhat.com >>>>>> https://www.redhat.com/mailman/listinfo/linux-cluster >>>>>> >>>>> >>>>> >>>>> -- >>>>> Linux-cluster mailing list >>>>> Linux-cluster at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/linux-cluster >>>>> >>>> >>>> >>>> >>>> -- >>>> esta es mi vida e me la vivo hasta que dios quiera >>>> >>>> -- >>>> Linux-cluster mailing list >>>> Linux-cluster at redhat.com >>>> https://www.redhat.com/mailman/listinfo/linux-cluster >>>> >>> >>> >>> -- >>> Linux-cluster mailing list >>> Linux-cluster at redhat.com >>> https://www.redhat.com/mailman/listinfo/linux-cluster >>> >> >> >> >> -- >> esta es mi vida e me la vivo hasta que dios quiera >> >> -- >> Linux-cluster mailing list >> Linux-cluster at redhat.com >> https://www.redhat.com/mailman/listinfo/linux-cluster >> > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -- esta es mi vida e me la vivo hasta que dios quiera -------------- next part -------------- An HTML attachment was scrubbed... URL: From pradhanparas at gmail.com Fri Dec 20 20:16:32 2013 From: pradhanparas at gmail.com (Paras pradhan) Date: Fri, 20 Dec 2013 14:16:32 -0600 Subject: [Linux-cluster] failover domain and service start In-Reply-To: References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com> <52B3241D.5040700@alteeve.ca> Message-ID: I get the following error. Looks like my node3 can't manage services. When all nodes are up and I try to start a service on node 3, the service will start on other nodes even the priority on fd is number 1. -- Dec 20 14:13:38 cvtst3 clurgmgrd[19883]: Starting stopped service vm:guest1 Dec 20 14:13:38 cvtst3 clurgmgrd[19883]: start on vm "guest1" returned 1 (generic error) Dec 20 14:13:38 cvtst3 clurgmgrd[19883]: #68: Failed to start vm:guest1; return value: 1 Dec 20 14:13:38 cvtst3 clurgmgrd[19883]: Stopping service vm:guest1 Dec 20 14:13:45 cvtst3 clurgmgrd[19883]: Service vm:guest1 is recovering Dec 20 14:13:45 cvtst3 clurgmgrd[19883]: #71: Relocating failed service vm:guest1 Dec 20 14:13:45 cvtst3 clurgmgrd[19883]: Service vm:guest1 is stopped - On Fri, Dec 20, 2013 at 12:54 PM, emmanuel segura wrote: > What do you see in your log? > > > 2013/12/20 Paras pradhan > >> Yes >> >> >> On Fri, Dec 20, 2013 at 12:23 PM, emmanuel segura wrote: >> >>> did you setting RGMGR_OPTS="-d" and restarted rgmanager in every cluster >>> node? >>> >>> >>> 2013/12/20 Paras pradhan >>> >>>> that is gfs2 >>>> >>>> >>>> On Fri, Dec 20, 2013 at 12:02 PM, emmanuel segura wrote: >>>> >>>>> Hello Paras, >>>>> >>>>> what kind of filesystem are you using for this /vms_c? sorry maybe i >>>>> missing >>>>> >>>>> Thanks >>>>> >>>>> >>>>> 2013/12/20 Paras pradhan >>>>> >>>>>> I have it in place. I found out this issue is only on one node. This >>>>>> is what I've tried. All three cluster nodes are in a single failover domain >>>>>> >>>>>> case1: >>>>>> stop all cluster services on node1 and node2. start vm service on >>>>>> node3. Fails >>>>>> >>>>>> case2: >>>>>> stop all services on node2 and node 3. start vm service on node 1. >>>>>> Service starts ! >>>>>> >>>>>> case3: >>>>>> stop all services on node1 and node3: start service on node2. Service >>>>>> starts >>>>>> >>>>>> All the nodes are running the same kernel and same cluster packages. >>>>>> I changed the failover domain to this service but its the same . 
From emi2fast at gmail.com Fri Dec 20 21:03:08 2013
From: emi2fast at gmail.com (emmanuel segura)
Date: Fri, 20 Dec 2013 22:03:08 +0100
Subject: [Linux-cluster] failover domain and service start
In-Reply-To: 
References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com> <52B3241D.5040700@alteeve.ca>
Message-ID: 

Paras :)

1: run rg_test on node3 to see what the error is
2: does *xm* migrate --live guest1 work?

If the vm works on two nodes and not on the third, I think this is a xen configuration error

2013/12/20 Paras pradhan
> I get the following error. It looks like my node3 can't manage services.

-- 
esta es mi vida e me la vivo hasta que dios quiera
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
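emmanuel's two checks as concrete commands, run as root; the migration target host is taken from the clusvcadm attempt earlier in the thread, so treat it as illustrative:

    # 1) Drive the vm resource agent directly, bypassing rgmanager,
    #    so the agent's real error output is printed:
    rg_test test /etc/cluster/cluster.conf start vm guest1

    # 2) Exercise plain xen live migration toward node3
    #    (run on the node currently hosting guest1):
    xm migrate --live guest1 cvtst3.uark.edu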
From pradhanparas at gmail.com Fri Dec 20 21:34:50 2013
From: pradhanparas at gmail.com (Paras pradhan)
Date: Fri, 20 Dec 2013 15:34:50 -0600
Subject: [Linux-cluster] failover domain and service start
In-Reply-To: 
References: <1387458801.43078.YahooMailIosMobile@web163403.mail.gq1.yahoo.com> <52B3241D.5040700@alteeve.ca>
Message-ID: 

Both rg_test and xm migrate work, with no issue at all. I don't think it's a xen error; I think it is rgmanager or the cluster tools.

-Paras.

On Fri, Dec 20, 2013 at 3:03 PM, emmanuel segura wrote:
> 1: run rg_test on node3 to see what the error is
> 2: does *xm* migrate --live guest1 work?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
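If the agent and plain xen both work standalone, the remaining suspect is rgmanager itself. The debug setting emmanuel suggested earlier in the thread can be captured roughly as follows; this is a sketch, and restarting rgmanager briefly takes the node out of service management:

    # Turn on clurgmgrd debug logging (create the file if it doesn't exist):
    echo 'RGMGR_OPTS="-d"' >> /etc/sysconfig/cluster
    service rgmanager restart

    # Retry the start on node3 and watch the daemon's output:
    clusvcadm -e vm:guest1 -m cvtst3.uark.edu
    tail -f /var/log/messages | grep clurgmgrd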
From neale at sinenomine.net Mon Dec 23 17:42:37 2013
From: neale at sinenomine.net (Neale Ferguson)
Date: Mon, 23 Dec 2013 17:42:37 +0000
Subject: [Linux-cluster] cluster.conf not being propagated to all nodes
Message-ID: <92605D8BD73E5B47AF6C2ADC2BD9554D0D6C60CA@ORD2MBX03C.mex05.mlsrvr.com>

I created a simple two-node cluster (cnode1, cnode2). I also created a linux instance from where I run luci (cmanager). I was able to create the two nodes successfully. On both nodes /etc/cluster/cluster.conf is created. However, any updates to the configuration are only reflected on one of the nodes (cnode1). The other node (cnode2) is aware that the configuration has been updated, but the file is never updated, so corosync complains:

Dec 23 11:29:08 corosync [CMAN ] Unable to load new config in corosync: New configuration version has to be newer than current running configuration
Dec 23 11:29:08 corosync [CMAN ] Can't get updated config version 2: New configuration version has to be newer than current running configuration
.
Dec 23 11:29:08 corosync [CMAN ] Activity suspended on this node
Dec 23 11:29:08 corosync [CMAN ] Error reloading the configuration, will retry every second

I can't see anything in luci.log on cmanager or any of the other log files on cnode2 that gives me any clue as to what's happening.

I have run tcpdump on cmanager and can see traffic flowing to cnode1 but not to cnode2.

How are the changes to cluster.conf propagated to the nodes within a cluster? What type of things should I look at to help diagnose this problem?

Neale

From lists at alteeve.ca Mon Dec 23 17:46:19 2013
From: lists at alteeve.ca (Digimer)
Date: Mon, 23 Dec 2013 12:46:19 -0500
Subject: [Linux-cluster] cluster.conf not being propagated to all nodes
In-Reply-To: <92605D8BD73E5B47AF6C2ADC2BD9554D0D6C60CA@ORD2MBX03C.mex05.mlsrvr.com>
Message-ID: <52B876EB.1020603@alteeve.ca>

On 23/12/13 12:42 PM, Neale Ferguson wrote:
> I created a simple two-node cluster (cnode1, cnode2).
Couple of questions;

* What OS/cman version?
* Are the modclusterd and ricci daemons running?
* Did you set the local 'ricci' user password on your nodes?
* Is ccs installed?

digimer

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without access to education?

From neale at sinenomine.net Mon Dec 23 18:01:34 2013
From: neale at sinenomine.net (Neale Ferguson)
Date: Mon, 23 Dec 2013 18:01:34 +0000
Subject: [Linux-cluster] cluster.conf not being propagated to all nodes
In-Reply-To: <52B876EB.1020603@alteeve.ca>
Message-ID: <92605D8BD73E5B47AF6C2ADC2BD9554D0D6C612E@ORD2MBX03C.mex05.mlsrvr.com>

* What OS/cman version?
Linux / cman-3.0.12.1-5

* Are the modclusterd and ricci daemons running?
ricci     1439     1  0 11:18 ?        00:00:16 ricci -u ricci
root      1486     1  2 11:18 ?        00:02:03 modclusterd

* Did you set the local 'ricci' user password on your nodes?
yes

* Is ccs installed?
No

Thanks... Neale

From lists at alteeve.ca Mon Dec 23 18:40:40 2013
From: lists at alteeve.ca (Digimer)
Date: Mon, 23 Dec 2013 13:40:40 -0500
Subject: [Linux-cluster] cluster.conf not being propagated to all nodes
In-Reply-To: <92605D8BD73E5B47AF6C2ADC2BD9554D0D6C612E@ORD2MBX03C.mex05.mlsrvr.com>
Message-ID: <52B883A8.30507@alteeve.ca>

On 23/12/13 01:01 PM, Neale Ferguson wrote:
> * What OS/cman version?
> Linux / cman-3.0.12.1-5

Sorry, what *distro* and version? Sounds like RHEL / CentOS 6.something though.

> * Is ccs installed?
> No

Try installing ccs. I'm not sure it will help here, but it's worth a try. I don't use luci myself, so I push out updates using 'cman_tool version -r'... Do you know if that works if you try from the node you edited cluster.conf on?

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without access to education?
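Digimer's checklist, written out as commands to run on each node; these are standard CentOS 6 tools, nothing here is specific to Neale's machines:

    # ricci and modclusterd must be running for luci to manage the node:
    service ricci status
    service modclusterd status
    chkconfig ricci on
    chkconfig modclusterd on

    # luci authenticates to each node as the local 'ricci' user:
    passwd ricci

    # ccs, the command-line config tool, is packaged separately:
    yum install ccs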
From neale at sinenomine.net Mon Dec 23 18:42:33 2013
From: neale at sinenomine.net (Neale Ferguson)
Date: Mon, 23 Dec 2013 18:42:33 +0000
Subject: [Linux-cluster] cluster.conf not being propagated to all nodes
In-Reply-To: <92605D8BD73E5B47AF6C2ADC2BD9554D0D6C612E@ORD2MBX03C.mex05.mlsrvr.com>
Message-ID: <92605D8BD73E5B47AF6C2ADC2BD9554D0D6C61B6@ORD2MBX03C.mex05.mlsrvr.com>

Also, an strace shows cmanager connecting to cnode1 but never to cnode2.

From neale at sinenomine.net Mon Dec 23 18:57:19 2013
From: neale at sinenomine.net (Neale Ferguson)
Date: Mon, 23 Dec 2013 18:57:19 +0000
Subject: [Linux-cluster] cluster.conf not being propagated to all nodes
In-Reply-To: <52B883A8.30507@alteeve.ca>
Message-ID: <92605D8BD73E5B47AF6C2ADC2BD9554D0D6C61FA@ORD2MBX03C.mex05.mlsrvr.com>

> Sorry, what *distro* and version? Sounds like RHEL / CentOS 6.something though.

CentOS 6.4

> Try installing ccs. I'm not sure it will help here, but it's worth a try. I don't use luci myself, so I push out updates using 'cman_tool version -r'... Do you know if that works if you try from the node you edited cluster.conf on?

luci is the one which updates cluster.conf and attempts to propagate it. It's that propagation to cnode2 that doesn't happen (it doesn't fail - it just doesn't attempt it).

From neale at sinenomine.net Tue Dec 24 18:21:34 2013
From: neale at sinenomine.net (Neale Ferguson)
Date: Tue, 24 Dec 2013 18:21:34 +0000
Subject: [Linux-cluster] cluster.conf not being propagated to all nodes
In-Reply-To: <92605D8BD73E5B47AF6C2ADC2BD9554D0D6C61FA@ORD2MBX03C.mex05.mlsrvr.com>
Message-ID: <92605D8BD73E5B47AF6C2ADC2BD9554D0D6C6E48@ORD2MBX03C.mex05.mlsrvr.com>

What I'm really after is confirmation that it's luci which is propagating the cluster.conf via the ricci service running on the nodes. If that's the case, I want to determine why luci only opts to send to one node and not the other.
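One way to see on the wire whether luci even tries to reach cnode2: watch for connections from the luci host to ricci's default TCP port 11111. The interface name here is an assumption; adjust for the actual NIC:

    # Run on cmanager while saving a change in luci;
    # only cnode1 showing up here would match Neale's strace observation:
    tcpdump -nn -i eth0 'tcp port 11111 and (host cnode1 or host cnode2)'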
From lists at alteeve.ca Tue Dec 24 18:31:04 2013
From: lists at alteeve.ca (Digimer)
Date: Tue, 24 Dec 2013 13:31:04 -0500
Subject: [Linux-cluster] cluster.conf not being propagated to all nodes
In-Reply-To: <92605D8BD73E5B47AF6C2ADC2BD9554D0D6C6E48@ORD2MBX03C.mex05.mlsrvr.com>
Message-ID: <52B9D2E8.8020601@alteeve.ca>

On 24/12/13 01:21 PM, Neale Ferguson wrote:
> What I'm really after is confirmation that it's luci which is propagating the cluster.conf via the ricci service running on the nodes.

Log into a node, manually edit cluster.conf to increase config_version="x", save and exit, run 'ccs_config_validate' and, if there are no errors, run 'cman_tool version -r'. Enter the 'ricci' password(s) if prompted. Then on both nodes run 'cman_tool version'. If the returned version is the new one you set, then the config pushed out successfully, and that would indicate that luci is the problem.

Please run 'tail -f -n 0 /var/log/messages' on both nodes prior to 'cman_tool version -r'. If the config fails to push out, please copy the log output from both nodes to here.

Cheers

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without access to education?

From neale at sinenomine.net Tue Dec 24 19:09:17 2013
From: neale at sinenomine.net (Neale Ferguson)
Date: Tue, 24 Dec 2013 19:09:17 +0000
Subject: [Linux-cluster] cluster.conf not being propagated to all nodes
In-Reply-To: <52B9D2E8.8020601@alteeve.ca>
Message-ID: <92605D8BD73E5B47AF6C2ADC2BD9554D0D6C6E98@ORD2MBX03C.mex05.mlsrvr.com>

It pushed out correctly from cnode2.
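Digimer's manual propagation test, spelled out step by step; the version number is illustrative:

    # On one node, bump config_version in /etc/cluster/cluster.conf, e.g.
    #   <cluster name="..." config_version="3">   (was "2"), then:

    ccs_config_validate      # sanity-check the edited file
    cman_tool version -r     # push to all members via ricci (may prompt for the ricci password)

    # On every node, confirm the running version matches the new number:
    cman_tool version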
From giulio.dippolito at gmail.com Tue Dec 24 19:20:25 2013
From: giulio.dippolito at gmail.com (Giulio D'Ippolito)
Date: Tue, 24 Dec 2013 20:20:25 +0100
Subject: [Linux-cluster] cluster.conf not being propagated to all nodes
In-Reply-To: <92605D8BD73E5B47AF6C2ADC2BD9554D0D6C6E98@ORD2MBX03C.mex05.mlsrvr.com>
Message-ID: 

Hi,
I got the same error when I modified the configuration with luci while one node was down. In a two-node cluster you can copy the correct /etc/cluster/cluster.conf file to the failed node, then start the cluster services, and it will work normally.

Cheers
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lists at alteeve.ca Tue Dec 24 19:55:05 2013
From: lists at alteeve.ca (Digimer)
Date: Tue, 24 Dec 2013 14:55:05 -0500
Subject: [Linux-cluster] cluster.conf not being propagated to all nodes
In-Reply-To: <92605D8BD73E5B47AF6C2ADC2BD9554D0D6C6E98@ORD2MBX03C.mex05.mlsrvr.com>
Message-ID: <52B9E699.6030803@alteeve.ca>

That does seem to indicate an issue with luci then. Have you opened a rhbz bug?

Digimer

https://bugzilla.redhat.com/enter_bug.cgi?product=Red%20Hat%20Enterprise%20Linux%206

On 24/12/13 02:09 PM, Neale Ferguson wrote:
> It pushed out correctly from cnode2.

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without access to education?
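Giulio's copy-the-file workaround above, as a rough sketch; it assumes cnode2 holds the stale copy and can briefly leave the cluster:

    # From the node with the good config:
    scp /etc/cluster/cluster.conf cnode2:/etc/cluster/cluster.conf

    # On cnode2, restart the stack so the new version is loaded;
    # if rgmanager is running, it must stop before cman:
    service rgmanager stop
    service cman restart
    service rgmanager start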
From neale at sinenomine.net Tue Dec 24 21:32:21 2013
From: neale at sinenomine.net (Neale Ferguson)
Date: Tue, 24 Dec 2013 21:32:21 +0000
Subject: [Linux-cluster] cluster.conf not being propagated to all nodes
In-Reply-To: <52B9E699.6030803@alteeve.ca>
Message-ID: <92605D8BD73E5B47AF6C2ADC2BD9554D0D6C6FEC@ORD2MBX03C.mex05.mlsrvr.com>

Not yet, I'm trying to gather information and verify it's not a configuration problem on my side.

> That does seem to indicate an issue with luci then. Have you opened a rhbz bug?

From akinoztopuz at yahoo.com Thu Dec 26 07:35:09 2013
From: akinoztopuz at yahoo.com (AKIN ÖZTOPUZ)
Date: Wed, 25 Dec 2013 23:35:09 -0800 (PST)
Subject: [Linux-cluster] (no subject)
Message-ID: <1388043309.17705.YahooMailNeo@web162106.mail.bf1.yahoo.com>

Hello,

I need some information about how I can change the status of a failed service to the started state in a cluster. I mean that the clustered service is actually running normally, but the cluster shows it as failed when I check it with clustat. I don't want to stop/start the service because it is already operational, but the cluster is not aware of this because of a failed relocation. How can I make the cluster aware of the service's real status?

Could you advise me?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
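The archive ends here without a reply. For context, the answer usually given for this rgmanager situation (not from this thread) is that a failed service cannot be flipped to started in place; the failed state has to be cleared with a disable before the service can be enabled again. The service name below is a placeholder:

    # Clears the 'failed' state; note this runs the service's stop path,
    # so the already-running service WILL be stopped:
    clusvcadm -d <service>

    # Start it again under cluster control:
    clusvcadm -e <service>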