[Linux-cluster] fence_cisco_ucs issue within cluster.conf

Jayesh Shinde jayesh.shinde at netcore.co.in
Mon Nov 2 14:39:50 UTC 2015


Hi,

I am trying to configure a 2-node cluster with fence_cisco_ucs. Fence 
testing works properly from the command line, but it is not working 
from within cluster.conf.

problem / scenario :--
----------------------------
When I manually shut down the Ethernet card of the mailbox1 server, the 
mailbox2 server detects the network failure and tries to fence mailbox1, 
but the fencing fails with a "plug"-related error (refer to the log 
below), i.e.:
Failed: Unable to obtain correct plug status or plug is not available

I have referred to the Red Hat KB, Google, and older mail threads. As 
suggested there, I upgraded from "fence-agents-3.1.5-35.el6" to 
"fence-agents-4.0.15-8.el6.x86_64".
I also checked by making a few other changes in cluster.conf, but that 
did not work either. Kindly guide me on where I am going wrong with the 
"plug"?
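
For reference, fenced does not call the agent with command-line flags; 
it passes key=value options on the agent's stdin. So the closest 
command-line reproduction of the in-cluster call is this (a sketch, 
using the exact attribute values from my cluster.conf below):

[root@mailbox2 ~]# fence_cisco_ucs << EOF
ipaddr=172.17.1.30
ipport=443
login=KVM
passwd=myPassword
ssl=on
suborg=/org-root/ls-mailbox
port=mailbox1
action=status
EOF

If this stdin form fails while the flag-based test below succeeds, the 
difference must lie in one of the cluster.conf attributes (note that in 
the flag-based test, suborg=... is passed as a bare word, which the 
agent presumably ignores rather than treats as an option).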

The OS I am using is RHEL 6.5
-------------------------------------
cman-3.0.12.1-59.el6.x86_64
rgmanager-3.0.12.1-19.el6.x86_64
fence-virt-0.2.3-15.el6.x86_64
fence-agents-4.0.15-8.el6.x86_64

command line fencing  :--
-----------------------------
[root@mailbox2 ~]# /usr/sbin/fence_cisco_ucs -a 172.17.1.30 -l KVM -p 'myPassword' -o status -v -z --plug=mailbox1 --ipport=443 suborg="/org-root/ls-mailbox" ; echo $?
<aaaLogin inName="KVM" inPassword="myPassword" />
  <aaaLogin cookie="" response="yes" outCookie="1446038319/bdd0ea5b-386f-4374-8eca-9ff7595c33f3" outRefreshPeriod="600" outPriv="pn-equipment,pn-maintenance,read-only" outDomains="" outChannel="noencssl" outEvtChannel="noencssl" outSessionId="web_29402_B" outVersion="2.2(3d)" outName="KVM"> </aaaLogin>
<configResolveDn cookie="1446038319/bdd0ea5b-386f-4374-8eca-9ff7595c33f3" inHierarchical="false" dn="org-root/ls-mailbox1/power"/>
  <configResolveDn dn="org-root/ls-mailbox1/power" cookie="1446038319/bdd0ea5b-386f-4374-8eca-9ff7595c33f3" response="yes"> <outConfig> <lsPower dn="org-root/ls-mailbox1/power" state="up"/> </outConfig> </configResolveDn>
Status: ON
<aaaLogout inCookie="1446038319/bdd0ea5b-386f-4374-8eca-9ff7595c33f3" />
  <aaaLogout cookie="" response="yes" outStatus="success"> </aaaLogout>

0
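
Note on the dn: in the verbose output above, the agent resolves 
dn="org-root/ls-mailbox1/power", i.e. it appends "/ls-<plug>/power" to 
"org-root". My understanding (an assumption about the agent internals, 
not verified against its source) is that a configured suborg gets 
inserted between the two, roughly:

# illustration only -- assumed dn construction inside fence_cisco_ucs:
#   dn = "org-root" + suborg + "/ls-" + plug + "/power"
SUBORG="/org-root/ls-mailbox"    # value from my cluster.conf below
PLUG="mailbox1"
echo "org-root${SUBORG}/ls-${PLUG}/power"
# -> org-root/org-root/ls-mailbox/ls-mailbox1/power

If that is right, the suborg value from cluster.conf would double 
"org-root" and point at a dn that does not exist, which would explain 
the "plug is not available" error.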

  /etc/hosts on both mailbox1 & mailbox2 servers :--
------------------------------------

127.0.0.1                localhost localhost.localdomain
192.168.51.91        mailbox1.mydomain.com
192.168.51.92        mailbox2.mydomain.com

   /etc/cluster/cluster.conf   :--
------------------------------------

<?xml version="1.0"?>
<cluster config_version="69" name="cluster1">
     <clusternodes>
         <clusternode name="mailbox1.mydomain.com" nodeid="1">
             <fence>
                 <method name="CiscoFence">
                     <device name="CiscoFence" port="mailbox1"/>
                 </method>
             </fence>
         </clusternode>
         <clusternode name="mailbox2.mydomain.com" nodeid="2">
             <fence>
                 <method name="CiscoFence">
                     <device name="CiscoFence" port="mailbox2"/>
                 </method>
             </fence>
         </clusternode>
     </clusternodes>
     <cman expected_votes="1" two_node="1"/>
     <rm>
         <failoverdomains>
             <failoverdomain name="failover1" ordered="1" restricted="1">
                 <failoverdomainnode name="mailbox1.mydomain.com" priority="2"/>
                 <failoverdomainnode name="mailbox2.mydomain.com" priority="1"/>
             </failoverdomain>
             <failoverdomain name="failover2" ordered="1" restricted="1">
                 <failoverdomainnode name="mailbox1.mydomain.com" priority="2"/>
                 <failoverdomainnode name="mailbox2.mydomain.com" priority="1"/>
             </failoverdomain>
         </failoverdomains>
         <resources>
             <ip address="192.168.51.93/24" sleeptime="10"/>
             <fs device="/dev/mapper/mail_1-mailbox1" force_unmount="1" fsid="28418" fstype="ext4" mountpoint="/mailbox1" name="imap1_fs" self_fence="1"/>
             <script file="/etc/init.d/cyrus-imapd1" name="cyrus1"/>
             <ip address="192.168.51.94/24" sleeptime="10"/>
             <fs device="/dev/mapper/mail_2-mailbox2" force_unmount="1" fsid="49388" fstype="ext4" mountpoint="/mailbox2" name="imap2_fs" self_fence="1"/>
             <script file="/etc/init.d/cyrus-imapd2" name="cyrus2"/>
         </resources>
         <service domain="failover1" name="mailbox1" recovery="restart">
             <fs ref="imap1_fs"/>
             <ip ref="192.168.51.93/24"/>
             <script ref="cyrus1"/>
         </service>
         <service domain="failover2" name="mailbox2" recovery="restart">
             <ip ref="192.168.51.94/24"/>
             <fs ref="imap2_fs"/>
             <script ref="cyrus2"/>
         </service>
     </rm>
     <fencedevices>
         <fencedevice agent="fence_cisco_ucs" ipaddr="172.17.1.30" ipport="443" login="KVM" name="CiscoFence" passwd="myPassword" ssl="on" suborg="/org-root/ls-mailbox"/>
     </fencedevices>
</cluster>
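
For completeness, the standard way on RHEL 6 to validate and propagate 
a cluster.conf change like this (so both nodes pick up config_version 
69) is:

[root@mailbox2 ~]# ccs_config_validate     # validate the new cluster.conf against the schema
[root@mailbox2 ~]# cman_tool version -r    # propagate the updated config to all nodes
[root@mailbox2 ~]# cman_tool version       # confirm both nodes report the same config version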



tail -f /var/log/messages  :--
------------------------------

Oct 28 15:42:13 mailbox2 corosync[2376]:   [CPG   ] chosen downlist: sender r(0) ip(192.168.51.92) ; members(old:2 left:1)
Oct 28 15:42:13 mailbox2 corosync[2376]:   [MAIN  ] Completed service synchronization, ready to provide service.
Oct 28 15:42:13 mailbox2 fenced[2435]: fencing node mailbox1
Oct 28 15:42:13 mailbox2 rgmanager[2849]: State change: mailbox1 DOWN
Oct 28 15:42:14 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:14 mailbox2 fenced[2435]: fence mailbox1 dev 0.0 agent fence_cisco_ucs result: error from agent
Oct 28 15:42:14 mailbox2 fenced[2435]: fence mailbox1 failed
Oct 28 15:42:17 mailbox2 fenced[2435]: fencing node mailbox1
Oct 28 15:42:17 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:17 mailbox2 fenced[2435]: fence mailbox1 dev 0.0 agent fence_cisco_ucs result: error from agent
Oct 28 15:42:17 mailbox2 fenced[2435]: fence mailbox1 failed
Oct 28 15:42:20 mailbox2 fenced[2435]: fencing node mailbox1
Oct 28 15:42:21 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:21 mailbox2 fenced[2435]: fence mailbox1 dev 0.0 agent fence_cisco_ucs result: error from agent
Oct 28 15:42:21 mailbox2 fenced[2435]: fence mailbox1 failed
Oct 28 15:42:25 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:28 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:31 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:35 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:38 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:42 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:45 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:49 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:49 mailbox2 corosync[2376]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 28 15:42:49 mailbox2 corosync[2376]:   [CPG   ] chosen downlist: sender r(0) ip(192.168.51.92) ; members(old:1 left:0)
Oct 28 15:42:49 mailbox2 corosync[2376]:   [MAIN  ] Completed service synchronization, ready to provide service.
Oct 28 15:42:52 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
Oct 28 15:42:56 mailbox2 python: Failed: Unable to obtain correct plug status or plug is not available#012
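
For what it is worth, the same in-cluster fence path can be exercised 
without pulling the network cable: fence_node reads the node's fence 
configuration from cluster.conf and invokes the agent the same way 
fenced does, so it should reproduce the error above:

[root@mailbox2 ~]# fence_node mailbox1.mydomain.com    # then check /var/log/messages for the result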


Regards
Jayesh Shinde

