<div dir="ltr"><div>Hello Delphine,<br><br></div>As you know, your problem is here:<br>===============================================================<br><fs device="LABEL=postfix"
      mountpoint="/var/spool/postfix" force_unmount="1" fstype="ext3"
      name="mgmtha5" options=""/><br>===============================================================<br><br><div>I don't know whether you are using LVM or a plain partition, but you should look for the device corresponding to that LABEL. If you are using LVM, run vgs and lvs to check that your volumes are activated.<br>
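For example (a sketch of the checks I mean — it assumes the missing filesystem should carry the ext3 label "postfix"; VGNAME/LVNAME are placeholders for your real names):<br>

```shell
# Resolve the block device behind the label rgmanager expects;
# either command prints the device path (e.g. /dev/sdb1) if it exists.
# "|| true" keeps the script going even when the label is not found.
blkid -L postfix || true
findfs LABEL=postfix || true

# On LVM, list volume groups and logical volumes; in the lv_attr
# column an active LV shows 'a' in the fifth position, '-' if inactive.
vgs || true
lvs -o vg_name,lv_name,lv_attr || true

# Reactivate the logical volumes of a group that came up inactive
# after the power cycle (VGNAME is a placeholder):
#   vgchange -ay VGNAME

# If the device exists but its ext3 label was lost, it can be restored:
#   e2label /dev/VGNAME/LVNAME postfix
```

If blkid/findfs print nothing, the label really is not visible to the system and rgmanager cannot mount the filesystem; if an LV shows up inactive, activating it should make the label appear again.<br>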
<br></div><div>Thanks<br></div><br></div><div class="gmail_extra"><br><br><div class="gmail_quote">2013/5/13 Delphine Ramalingom <span dir="ltr"><<a href="mailto:delphine.ramalingom@univ-reunion.fr" target="_blank">delphine.ramalingom@univ-reunion.fr</a>></span><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF" text="#000000">
    <div>Hi,<br>
      <br>
      I used it:<div class="im"><br>
      rg_test test /etc/cluster/cluster.conf start service HA_MGMT<br></div>
      Running in test mode.<br>
      Starting HA_MGMT...<div class="im"><br>
      <err>    startFilesystem: Could not match LABEL=postfix with
      a real device<br></div>
      Failed to start HA_MGMT<br>
      <br>
      But it gives me the same message.<br>
      <br>
      Regards <br>
      Delphine<br>
      <br>
      On 13/05/13 11:47, emmanuel segura wrote:<br>
    </div><div><div class="h5">
    <blockquote type="cite">
      <div dir="ltr">
        <div>Hello<br>
          <br>
        </div>
        If you would like to see why your service doesn't start, you should
        use "rg_test test /etc/cluster/cluster.conf start service
        HA_MGMT"<br>
        <br>
        <br>
      </div>
      <div class="gmail_extra">
        <br>
        <br>
        <div class="gmail_quote">2013/5/13 Delphine Ramalingom <span dir="ltr"><<a href="mailto:delphine.ramalingom@univ-reunion.fr" target="_blank">delphine.ramalingom@univ-reunion.fr</a>></span><br>
          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div bgcolor="#FFFFFF" text="#000000">
              <div>Hi,<br>
                <br>
                This is the cluster.conf :<br>
                <br>
                [root@titan0 11:29:14 ~]# cat /etc/cluster/cluster.conf<br>
                <?xml version="1.0" ?><br>
                <cluster config_version="7" name="HA_MGMT"><br>
                        <fence_daemon clean_start="1"
                post_fail_delay="0" post_join_delay="60"/><br>
                        <clusternodes><br>
                                <clusternode name="titan0" 
                nodeid="1" votes="1"><br>
                                        <fence><br>
                                                <method name="1"><br>
                                                        <device
                name="titan0fence" option="reboot"/><br>
                                                </method><br>
                                        </fence><br>
                                </clusternode><br>
                                <clusternode name="titan1" nodeid="2"
                votes="1"><br>
                                        <fence><br>
                                                <method name="1"><br>
                                                        <device
                name="titan1fence" option="reboot"/><br>
                                                </method><br>
                                        </fence><br>
                                </clusternode><br>
                        </clusternodes><br>
                        <cman  cluster_id="0" expected_votes="1"
                two_node="1"/><br>
                        <fencedevices><br>
                                <fencedevice agent="fence_ipmilan"
                ipaddr="172.17.0.101" login="administrator"
                name="titan0fence" passwd="administrator"/><br>
                                <fencedevice agent="fence_ipmilan"
                ipaddr="172.17.0.102" login="administrator"
                name="titan1fence" passwd="administrator"/><br>
                        </fencedevices><br>
                        <rm><br>
                                <failoverdomains><br>
                                        <failoverdomain
                name="titan0_heuristic" ordered="0" restricted="1"> <br>
                                                <failoverdomainnode
                name="titan0" priority="1"/> <br>
                                        </failoverdomain> <br>
                                        <failoverdomain
                name="titan1_heuristic" ordered="0" restricted="1"> <br>
                                                <failoverdomainnode
                name="titan1" priority="1"/> <br>
                                        </failoverdomain><br>
                                        <failoverdomain
                name="MgmtNodes" ordered="0" restricted="0"><br>
                                                <failoverdomainnode
                name="titan0" priority="1"/><br>
                                                <failoverdomainnode
                name="titan1" priority="2"/><br>
                                        </failoverdomain><br>
                            <failoverdomain name="NFSHA" ordered="0"
                restricted="0"><br>
                                <failoverdomainnode name="titan0"
                priority="2"/><br>
                                <failoverdomainnode name="titan1"
                priority="1"/><br>
                            </failoverdomain><br>
                                </failoverdomains><br>
                            <service domain="titan0_heuristic"
                name="ha_titan0_check" autostart="1"
                checkinterval="10"> <br>
                                    <script
                file="/usr/sbin/ha_titan0_check"
                name="ha_titan0_check"/> <br>
                            </service> <br>
                            <service domain="titan1_heuristic"
                name="ha_titan1_check" autostart="1"
                checkinterval="10"> <br>
                                    <script
                file="/usr/sbin/ha_titan1_check"
                name="ha_titan1_check"/> <br>
                            </service><br>
                                <service domain="MgmtNodes"
                name="HA_MGMT" autostart="0" recovery="relocate"><br>
                            <!-- ip addresses lines mgmt --><br>
                                                <ip address="<a href="http://172.17.0.99/16" target="_blank">172.17.0.99/16</a>"
                monitor_link="1"/><br>
                                                <ip address="<a href="http://10.90.0.99/24" target="_blank">10.90.0.99/24</a>"
                monitor_link="1"/><br>
                            <!-- devices lines mgmt --><br>
                                       <fs device="LABEL=postfix"
                mountpoint="/var/spool/postfix" force_unmount="1"
                fstype="ext3" name="mgmtha5" options=""/><br>
                                       <fs device="LABEL=bigimage"
                mountpoint="/var/lib/systemimager" force_unmount="1"
                fstype="ext3" name="mgmtha4" options=""/><br>
                                       <clusterfs
                device="LABEL=HA_MGMT:conman"
                mountpoint="/var/log/conman" force_unmount="0"
                fstype="gfs2" name="mgmtha3" options=""/><br>
                                       <clusterfs
                device="LABEL=HA_MGMT:ganglia"
                mountpoint="/var/lib/ganglia/rrds" force_unmount="0"
                fstype="gfs2" name="mgmtha2" options=""/><br>
                                       <clusterfs
                device="LABEL=HA_MGMT:syslog"
                mountpoint="/var/log/HOSTS" force_unmount="0"
                fstype="gfs2" name="mgmtha1" options=""/><br>
                                       <clusterfs
                device="LABEL=HA_MGMT:cdb"
                mountpoint="/var/lib/pgsql/data" force_unmount="0"
                fstype="gfs2" name="mgmtha0" options=""/><br>
                                        <script
                file="/usr/sbin/haservices" name="haservices"/><br>
                                </service><br>
                        <service domain="NFSHA" name="HA_NFS"
                autostart="0" checkinterval="60"><br>
                            <!-- ip addresses lines nfs --><br>
                                                <ip address="<a href="http://10.31.0.99/16" target="_blank">10.31.0.99/16</a>"
                monitor_link="1"/><br>
                                                <ip address="<a href="http://10.90.0.88/24" target="_blank">10.90.0.88/24</a>"
                monitor_link="1"/><br>
                                                <ip address="<a href="http://172.17.0.88/16" target="_blank">172.17.0.88/16</a>"
                monitor_link="1"/><br>
                            <!-- devices lines nfs --><br>
                                       <fs device="LABEL=PROGS"
                mountpoint="/programs" force_unmount="1" fstype="ext3"
                name="nfsha4" options=""/><br>
                                       <fs device="LABEL=WRKTMP"
                mountpoint="/worktmp" force_unmount="1" fstype="ext3"
                name="nfsha3" options=""/><br>
                                       <fs device="LABEL=LABOS"
                mountpoint="/labos" force_unmount="1" fstype="xfs"
                name="nfsha2" options="ikeep"/><br>
                                       <fs device="LABEL=OPTINTEL"
                mountpoint="/opt/intel" force_unmount="1" fstype="ext3"
                name="nfsha1" options=""/><br>
                                       <fs device="LABEL=HOMENFS"
                mountpoint="/home_nfs" force_unmount="1" fstype="ext3"
                name="nfsha0" options=""/><br>
                            <script file="/etc/init.d/nfs"
                name="nfs_service"/><br>
                        </service><br>
                        </rm><br>
                    <totem token="21000" /><br>
                </cluster><br>
                <!-- !!!!! DON'T REMOVE OR CHANGE ANYTHING IN
                PARAMETERS SECTION BELOW <br>
                node_name=titan0<br>
                node_ipmi_ipaddr=172.17.0.101<br>
                node_hwmanager_login=administrator<br>
                node_hwmanager_passwd=administrator<br>
                ipaddr1_for_heuristics=172.17.0.200<br>
                node_ha_name=titan1<br>
                node_ha_ipmi_ipaddr=172.17.0.102<br>
                node_ha_hwmanager_login=administrator<br>
                node_ha_hwmanager_passwd=administrator<br>
                ipaddr2_for_heuristics=172.17.0.200<br>
                mngt_virt_ipaddr_for_heuristics=not used on this type of
                node<br>
                END OF SECTION !!!!! --> <br>
                <br>
                <br>
                The /var/log/messages file is too long and has some
                repeated messages:<br>
                May 13 11:30:33 s_sys@titan0 snmpd[4584]: Connection
                from UDP: [10.40.20.30]:39198<br>
                May 13 11:30:33 s_sys@titan0 snmpd[4584]: Connection
                from UDP: [10.40.20.30]:39198<br>
                May 13 11:30:33 s_sys@titan0 snmpd[4584]: Connection
                from UDP: [10.40.20.30]:39198<br>
                May 13 11:30:33 s_sys@titan0 snmpd[4584]: Connection
                from UDP: [10.40.20.30]:39198<br>
                May 13 11:30:33 s_sys@titan0 snmpd[4584]: Connection
                from UDP: [10.40.20.30]:39198<br>
                May 13 11:30:33 s_sys@titan0 snmpd[4584]: Connection
                from UDP: [10.40.20.30]:39198<br>
                May 13 11:30:33 s_sys@titan0 snmpd[4584]: Connection
                from UDP: [10.40.20.30]:39198<br>
                May 13 11:30:34 s_sys@titan0 snmpd[4584]: Connection
                from UDP: [10.40.20.30]:53030<br>
                May 13 11:30:34 s_sys@titan0 snmpd[4584]: Received SNMP
                packet(s) from UDP: [10.40.20.30]:53030<br>
                May 13 11:30:34 s_sys@titan0 snmpd[4584]: Connection
                from UDP: [10.40.20.30]:41083<br>
                May 13 11:30:34 s_sys@titan0 snmpd[4584]: Received SNMP
                packet(s) from UDP: [10.40.20.30]:41083<br>
                <br>
                Regards<br>
                Delphine<br>
                <br>
                <br>
                <br>
                Le 13/05/13 10:37, Rajveer Singh wrote:<br>
              </div>
              <blockquote type="cite">
                <div dir="ltr">
                  <div>Hi Delphine,<br>
                    It seems there is some filesystem crash. Please
                    share your /var/log/messages and
                    /etc/cluster/cluster.conf files so we can help you further.<br>
                    <br>
                  </div>
                  Regards,<br>
                  Rajveer Singh<br>
                </div>
                <div class="gmail_extra"> <br>
                  <br>
                  <div class="gmail_quote">On Mon, May 13, 2013 at 11:58
                    AM, Delphine Ramalingom <span dir="ltr"><<a href="mailto:delphine.ramalingom@univ-reunion.fr" target="_blank">delphine.ramalingom@univ-reunion.fr</a>></span>
                    wrote:<br>
                    <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello,<br>
                      <br>
                      I have a problem and I need some help.<br>
                      <br>
                      Our Linux cluster was stopped for maintenance in
                      the server room, but an error occurred during the
                      shutdown procedure:<br>
                      Local machine disabling service:HA_MGMT...Failure<br>
                      <br>
                      The cluster was powered off. But since the
                      restart, I have not succeeded in restarting the
                      services with the clusvcadm command.<br>
                      I have this message :<br>
                      <br>
                      clusvcadm -e HA_MGMT<br>
                      Local machine trying to enable
                      service:HA_MGMT...Aborted; service failed<br>
                      and<br>
                      <err>    startFilesystem: Could not match
                      LABEL=postfix with a real device<br>
                      <br>
                      Do you have a solution for me?<br>
                      <br>
                      Thanks a lot in advance.<br>
                      <br>
                      Regards<span><font color="#888888"><br>
                          Delphine<span><font color="#888888"><br>
                              <br>
                              -- <br>
                              Linux-cluster mailing list<br>
                              <a href="mailto:Linux-cluster@redhat.com" target="_blank">Linux-cluster@redhat.com</a><br>
                              <a href="https://www.redhat.com/mailman/listinfo/linux-cluster" target="_blank">https://www.redhat.com/mailman/listinfo/linux-cluster</a><br>
                            </font></span></font></span></blockquote>
                    <span><font color="#888888"> </font></span></div>
                  <span><font color="#888888"> <br>
                    </font></span></div>
                <span><font color="#888888"> <br>
                    <fieldset></fieldset>
                    <br>
                  </font></span></blockquote>
              <br>
            </div>
            <br>
            --<br>
            Linux-cluster mailing list<br>
            <a href="mailto:Linux-cluster@redhat.com" target="_blank">Linux-cluster@redhat.com</a><br>
            <a href="https://www.redhat.com/mailman/listinfo/linux-cluster" target="_blank">https://www.redhat.com/mailman/listinfo/linux-cluster</a><br>
          </blockquote>
        </div>
        <br>
        <br clear="all">
        <br>
        -- <br>
        this is my life and I live it as long as God wills
      </div>
      <br>
      <fieldset></fieldset>
      <br>
    </blockquote>
    <br>
  </div></div></div>

<br>--<br>
Linux-cluster mailing list<br>
<a href="mailto:Linux-cluster@redhat.com">Linux-cluster@redhat.com</a><br>
<a href="https://www.redhat.com/mailman/listinfo/linux-cluster" target="_blank">https://www.redhat.com/mailman/listinfo/linux-cluster</a><br></blockquote></div><br><br clear="all"><br>-- <br>this is my life and I live it as long as God wills
</div>