[Linux-cluster] cman Unfencing self... fence_node: cannot connect to cman [FAILED]

Orlando Rodríguez Muñoz orlando.rodriguez107 at alu.ulpgc.es
Thu Dec 5 16:30:40 UTC 2013


Thanks
________________________________________
From: linux-cluster-bounces at redhat.com [linux-cluster-bounces at redhat.com] on behalf of Digimer [lists at alteeve.ca]
Sent: Wednesday, December 4, 2013 14:26
To: linux clustering
Subject: Re: [Linux-cluster] cman Unfencing self... fence_node: cannot connect to cman [FAILED]

If your nodes are VMs, then use fence_virsh.
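
A rough sketch of what that looks like in cluster.conf; the hypervisor
address, login, and the "port" values (the libvirt names of the guests)
below are placeholders, so adjust them to your setup:

  <clusternode name="node1" nodeid="1">
    <fence>
      <method name="virsh">
        <!-- example only: "port" is the guest's name as libvirt knows it -->
        <device name="kvm_host" port="node1"/>
      </method>
    </fence>
  </clusternode>
  ...
  <fencedevices>
    <!-- fence_virsh logs into the hypervisor over ssh and calls virsh -->
    <fencedevice name="kvm_host" agent="fence_virsh"
                 ipaddr="192.168.122.1" login="root" passwd="secret"/>
  </fencedevices>

Repeat the <fence> block on each <clusternode>, changing "port" to match
that VM's name on its host.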

On 04/12/13 04:51, Orlando Rodríguez Muñoz wrote:
> Hi,
>
> I'm using KVM Qemu.
>
> Shared disks: iSCSI
> File System: GFS2
>
> ________________________________________
> From: linux-cluster-bounces at redhat.com [linux-cluster-bounces at redhat.com] on behalf of Digimer [lists at alteeve.ca]
> Sent: Monday, December 2, 2013 19:47
> To: linux clustering
> Subject: Re: [Linux-cluster] cman Unfencing self... fence_node: cannot connect to cman [FAILED]
>
> You haven't configured fencing at all. Do your nodes have
> IPMI/iLO/iDRAC/etc?
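>
> If they do, each node gets a <fence>/<method>/<device> entry tied to
> its own management interface, plus a matching <fencedevice>. A rough
> sketch with fence_ipmilan (the address and credentials here are made
> up, not from your environment):
>
>   <!-- example only: point ipaddr at node1's IPMI/iLO/iDRAC address -->
>   <fencedevice name="ipmi_node1" agent="fence_ipmilan"
>                ipaddr="10.0.0.11" login="admin" passwd="secret"
>                lanplus="1"/>
>
> Without a fence device defined, the cluster has no safe way to recover
> a failed node.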
>
> On 02/12/13 14:40, Orlando Rodríguez Muñoz wrote:
>> hi,
>>
>> I have 4 nodes. On the node where I run luci, starting cman gives:
>>
>> # service cman start
>> Starting cluster:
>>    Checking if cluster has been disabled at boot...        [  OK  ]
>>    Checking Network Manager...                             [  OK  ]
>>    Global setup...                                         [  OK  ]
>>    Loading kernel modules...                               [  OK  ]
>>    Mounting configfs...                                    [  OK  ]
>>    Starting cman...                                        [  OK  ]
>>    Waiting for quorum...                                   [  OK  ]
>>    Starting fenced...                                      [  OK  ]
>>    Starting dlm_controld...                                [  OK  ]
>>    Starting gfs_controld...                                [  OK  ]
>>    Unfencing self... fence_node: cannot connect to cman
>>                                                            [FAILED]
>> Stopping cluster:
>>    Leaving fence domain...                                 [  OK  ]
>>    Stopping gfs_controld...                                [  OK  ]
>>    Stopping dlm_controld...                                [  OK  ]
>>    Stopping fenced...                                      [  OK  ]
>>    Stopping cman...                                        [  OK  ]
>>    Unloading kernel modules...                             [  OK  ]
>>    Unmounting configfs...                                  [  OK  ]
>>
>> And my cluster.conf is:
>>
>> <?xml version="1.0"?>
>> <cluster config_version="55" name="Cluster_prueba">
>>   <clusternodes>
>>         <clusternode name="node1" nodeid="1"/>
>>         <clusternode name="node2" nodeid="2"/>
>>         <clusternode name="node3" nodeid="3"/><!-- Storage -->
>>         <clusternode name="node4" nodeid="4"/><!-- Luci -->
>>   </clusternodes>
>>   <cman transport="udpu"/>
>>   <rm>
>>     <failoverdomains>
>>      <failoverdomain name="FDomain_servicio_web" ordered="1">
>>          <failoverdomainnode name="servidor-1" priority="1"/>
>>          <failoverdomainnode name="servidor-2" priority="2"/>
>>      </failoverdomain>
>>      <failoverdomain name="FDomain-servidor-1" restricted="1">
>>          <failoverdomainnode name="servidor-1"/>
>>      </failoverdomain>
>>      <failoverdomain name="FDomain-servidor-2" restricted="1">
>>          <failoverdomainnode name="servidor-2"/>
>>      </failoverdomain>
>>      <failoverdomain name="FDomain-servidor-discos" restricted="1">
>>          <failoverdomainnode name="servidor-discos"/>
>>      </failoverdomain>
>>      <failoverdomain name="FDomain-servidor-cluster" restricted="1">
>>          <failoverdomainnode name="servidor-cluster"/>
>>      </failoverdomain>
>>   </failoverdomains>
>>  <resources>
>>      <ip address="192.168.122.99/24" sleeptime="10"/>
>>      <script file="/etc/init.d/clvmd" name="Cluster LVM"/>
>>      <script file="/etc/init.d/gfs2" name="Global File System"/>
>>      <script file="/etc/init.d/tgtd" name="Iscsi target"/>
>>      <script file="/etc/init.d/iscsi" name="Iscsi initiator"/>
>>      <ip address="192.168.122.98/24" sleeptime="10"/>
>>      <apache config_file="/lv_cluster_1/servicio-A/etc/httpd/conf/httpd.conf" name="Apache-servicio-A"
>> server_root="/etc/httpd" shutdown_wait="5"/>
>>      <clusterfs device="UUID=e0e3d8be-5348-87e1-56ae-297ed00f690f" fsid="19900" fstype="gfs2" mountpoint="/lv_cluster_1"
>> name="Lv_cluster_1"/>
>>  </resources>
>>
>>  <service domain="FDomain-servidor-discos" name="SGroup-servidor-discos" recovery="restart">
>>      <script ref="Iscsi target"/>
>>  </service>
>>
>>  <service autostart="0" domain="FDomain_servicio_web" name="SGroup_servicio_web" recovery="relocate">
>>      <ip ref="192.168.122.99/24">
>>          <script file="/etc/init.d/httpd" name="Httpd"/>
>>      </ip>
>>  </service>
>>  <service autostart="0" domain="FDomain_servicio_web" name="SGroup-apache-servicio-A" recovery="relocate">
>>      <ip ref="192.168.122.98/24">
>>        <apache ref="Apache-servicio-A"/>
>>      </ip>
>>  </service>
>>
>>  <service domain="FDomain-servidor-cluster" name="SGroup-servidor-cluster-srv" recovery="restart">
>>    <script ref="Iscsi initiator">
>>         <script ref="Cluster LVM">
>>             <clusterfs ref="Lv_cluster_1"/>
>>         </script>
>>    </script>
>>  </service>
>>
>>  <service domain="FDomain-servidor-1" name="SGroup-servidor-1" recovery="restart">
>>     <script ref="Iscsi initiator">
>>          <script ref="Cluster LVM">
>>              <clusterfs ref="Lv_cluster_1"/>
>>          </script>
>>      </script>
>>  </service>
>>
>>  <service domain="FDomain-servidor-2" name="SGroup-servidor-2" recovery="restart">
>>      <script ref="Iscsi initiator">
>>          <script ref="Cluster LVM">
>>             <clusterfs ref="Lv_cluster_1"/>
>>          </script>
>>      </script>
>> </service>
>>
>> </rm>
>>
>> </cluster>
>>
>> I'm new to this; I'd like to understand the "Unfencing self...
>> fence_node: cannot connect to cman" error.
>>
>> # service cman status
>>
>> Found stale pid file
>>
>> Any ideas?
>>
>> Thanks.
>>
>>
>
>
> --
> Digimer
> Papers and Projects: https://alteeve.ca/w/
> What if the cure for cancer is trapped in the mind of a person without
> access to education?
>


--
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?

--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster



