[Linux-cluster] RHEL 5.1 qdisk warning

Poon Suk Kit ondinap at smg.gov.mo
Wed Dec 5 09:28:11 UTC 2007


Hi, 

We have set up two nodes using GFS, with a quorum disk and a fence device, but the following message keeps appearing:
 qdiskd[3029]<warning> qdisk cycle took more than 1 second to complete (1.000000) 

What does it mean, and does it cause any problem on my running system? 

Regards
SMG


<?xml version="1.0"?>
<cluster alias="lvs001" config_version="27" name="lvs001">
        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="has001.smg.net" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="pdu2" port="1"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="has002.smg.net" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="pdu2" port="3"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman two_node="0"/>
        <fencedevices>
                <fencedevice agent="fence_apc" ipaddr="192.168.7.30" login="apc" name="pdu2" passwd="apc"/>
        </fencedevices>
        <rm>
                <failoverdomains>
                        <failoverdomain name="failover1" ordered="1" restricted="1">
                                <failoverdomainnode name="has001.smg.net" priority="1"/>
                                <failoverdomainnode name="has002.smg.net" priority="2"/>
                        </failoverdomain>
                </failoverdomains>
                <resources>
                        <ip address="172.16.3.2" monitor_link="1"/>
                        <clusterfs device="/dev/vg_gfs1/lv_gfs_dc" force_unmount="1" fsid="58386" fstype="gfs" mountpoint="/datacenter" name="gfs_dc"/>
                        <nfsexport name="DC"/>
                        <smb name="dc_samba" workgroup="smg"/>
                        <nfsclient allow_recover="1" name="nfs_client_dc" options="rw,insecure,async,no_root_squash" target="172.16.0.0/16"/>
                </resources>
                <service autostart="1" domain="failover1" exclusive="0" name="share_dc" recovery="relocate">
                        <ip ref="172.16.3.2"/>
                        <clusterfs ref="gfs_dc">
                                <nfsexport ref="DC">
                                        <nfsclient ref="nfs_client_dc"/>
                                </nfsexport>
                        </clusterfs>
                </service>
        </rm>
        <quorumd interval="1" label="quorum" votes="1" tko="10">
                <heuristic interval="2" program="/bin/ping has001 -c1 -t1" score="1"/>
                <heuristic interval="2" program="/bin/ping has002 -c1 -t1" score="1"/>
        </quorumd>
</cluster>
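For reference, a hedged reading of the warning: it usually means one qdiskd read/write cycle took at least as long as the configured interval="1" second, often because of I/O latency on the shared quorum device. Assuming the hardware is otherwise healthy, a common mitigation is to lengthen interval and scale tko down so the total eviction window (interval * tko) stays in the same range. The fragment below is a sketch, not a tested configuration; it also swaps ping's -t1 for -w1, since on Linux iputils ping -t sets the TTL while -w sets a deadline in seconds, which is more likely what a heuristic timeout wants:

```xml
<!-- Sketch only: interval="2" gives each qdiskd cycle more headroom;
     tko="5" keeps the eviction window near interval * tko = 10 s. -->
<quorumd interval="2" label="quorum" votes="1" tko="5">
        <!-- -w1 = 1-second deadline (the original -t1 set TTL=1,
             which is probably not what was intended) -->
        <heuristic interval="2" program="/bin/ping has001 -c1 -w1" score="1"/>
        <heuristic interval="2" program="/bin/ping has002 -c1 -w1" score="1"/>
</quorumd>
```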


[root at has001 ~]# clustat
Member Status: Quorate

  Member Name                        ID   Status
  ------ ----                        ---- ------
  has001.smg.net                        1 Online, Local, rgmanager
  has002.smg.net                        2 Online, rgmanager
  /dev/sdb                              0 Online, Quorum Disk

  Service Name         Owner (Last)                   State         
  ------- ----         ----- ------                   -----         
  service:share_dc     has001.smg.net                 started         