[Linux-cluster] clusvcadm -U returns "Temporary failure" on vm service

Aaron Benner tfrumbacher at gmail.com
Tue Nov 3 20:03:47 UTC 2009


All,

I have a problem that I can't find documentation on, and it has me baffled.

I have a 3-node cluster running Xen with multiple domUs enabled as
cluster services.  The individual services (resource groups) are given
node affinity via per-node failover domains (see cluster.conf below),
and live migration is enabled.

I had migrated two domUs off of one of the cluster nodes in
anticipation of a power-cycle and network reconfig.  Before bringing
up the reconfigured node, I froze (clusvcadm -Z ...) the domUs in
question so that when the newly reconfigured node came back up they
would not migrate back to their preferred host, or at least that's
what I *THOUGHT* -Z would do.
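
For reference, the freeze was just the stock clusvcadm freeze, run once
per service.  vm:SaturnE (named in the error below) was one of the two,
so for example:

         clusvcadm -Z vm:SaturnE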

I booted up the reconfigured node and, ignoring their frozen state,
rgmanager on the rebooted node initiated a migration of the domUs.
The migration finished and the virtuals resumed operation on the
reconfigured host.  The problem is that rgmanager is now showing those
resource groups as having state "migrating" (even though there are no
migration processes still active), and clusvcadm -U ... returns the
following:

"Local machine unfreezing vm:SaturnE...Temporary failure; try again"

I get this message on all of the cluster nodes.  I'm not sure whether it
is coming from clusvcadm, vm.sh, or some other piece of the cluster
puzzle.  Is there any way to get rgmanager to realize that these
resource groups are no longer migrating and can therefore be unfrozen?
Is that even my problem?  Can I fix this with anything short of a
complete power-down of the cluster (which would be a disaster)?

--AB
<?xml version="1.0"?>
<cluster alias="plieadies" config_version="66" name="plieadies">
    <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="180"/>
    <clusternodes>
        <clusternode name="plieadies3.atmexpress.com" nodeid="1" votes="1">
            <fence>
                <method name="1">
                    <device name="switchedpdu2" port="6"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="plieadies2.atmexpress.com" nodeid="2" votes="1">
            <fence>
                <method name="1">
                    <device name="switchedpdu1" port="13"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="plieadies1.atmexpress.com" nodeid="3" votes="1">
            <fence>
                <method name="1">
                    <device name="switchedpdu2" port="12"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <cman/>
    <fencedevices>
        <fencedevice agent="fence_apc" [snip]/>
        <fencedevice agent="fence_apc" [snip]/>
    </fencedevices>
    <rm>
        <failoverdomains>
            <failoverdomain name="bias-plieadies2" nofailback="0" ordered="0" restricted="0">
                <failoverdomainnode name="plieadies2.atmexpress.com" priority="1"/>
            </failoverdomain>
            <failoverdomain name="bias-plieadies1" nofailback="0" ordered="0" restricted="0">
                <failoverdomainnode name="plieadies1.atmexpress.com" priority="1"/>
            </failoverdomain>
            <failoverdomain name="bias-plieadies3" nofailback="0" ordered="0" restricted="0">
                <failoverdomainnode name="plieadies3.atmexpress.com" priority="1"/>
            </failoverdomain>
        </failoverdomains>
        <resources/>
        <vm autostart="0" domain="bias-plieadies3" exclusive="0" max_restarts="0" migrate="live" name="SaturnX" path="/etc/xen" recovery="restart" restart_expire_time="0"/>
        <vm autostart="1" domain="bias-plieadies2" exclusive="0" max_restarts="0" migrate="live" name="SaturnC" path="/etc/xen" recovery="restart" restart_expire_time="0"/>
        <vm autostart="1" domain="bias-plieadies3" exclusive="0" max_restarts="0" migrate="live" name="SaturnE" path="/etc/xen" recovery="restart" restart_expire_time="0"/>
        <vm autostart="1" domain="bias-plieadies3" exclusive="0" max_restarts="0" migrate="live" name="SaturnF" path="/etc/xen" recovery="restart" restart_expire_time="0"/>
        <vm autostart="1" domain="bias-plieadies2" exclusive="0" max_restarts="0" migrate="live" name="SaturnD" path="/etc/xen" recovery="restart" restart_expire_time="0"/>
        <vm autostart="1" domain="bias-plieadies1" exclusive="0" max_restarts="0" migrate="live" name="SaturnA" path="/etc/xen" recovery="restart" restart_expire_time="0"/>
        <vm autostart="1" domain="bias-plieadies1" exclusive="0" max_restarts="0" migrate="live" name="Orion1" path="/etc/xen" recovery="restart" restart_expire_time="0"/>
        <vm autostart="1" domain="bias-plieadies2" exclusive="0" max_restarts="0" migrate="live" name="Orion2" path="/etc/xen" recovery="restart" restart_expire_time="0"/>
        <vm autostart="1" domain="bias-plieadies3" exclusive="0" max_restarts="0" migrate="live" name="Orion3" path="/etc/xen" recovery="restart" restart_expire_time="0"/>
        <vm autostart="1" domain="bias-plieadies1" exclusive="0" max_restarts="0" migrate="live" name="SaturnB" path="/etc/xen" recovery="restart" restart_expire_time="0"/>
        <vm autostart="1" domain="bias-plieadies1" exclusive="0" max_restarts="0" migrate="live" name="Pluto" path="/etc/xen" recovery="restart" restart_expire_time="0"/>
    </rm>
    <fence_xvmd/>
</cluster>



