[Linux-cluster] lock_gulmd: failed to start ltpx

Shirai@SystemCreateINC shirai at sc-i.co.jp
Wed May 9 13:27:52 UTC 2007


Hi!

I am setting up GFS on RHEL4U5 (kernel 2.6.9-42.ELsmp).
However, lock_gulmd does not start correctly on even a single node.
My cluster.conf is as follows.

<?xml version="1.0" ?>
<cluster alias="alpha_cluster" config_version="2" name="alpha_cluster">
 <fence_daemon post_fail_delay="0" post_join_delay="3"/>
 <clusternodes>
  <clusternode name="lock02-e1" votes="1">
   <fence>
    <method name="1">
     <device name="human" nodename="lock02-e1"/>
    </method>
   </fence>
  </clusternode>
 </clusternodes>
 <gulm>
  <lockserver name="lock02-e1"/>
 </gulm>
 <fencedevices>
  <fencedevice agent="fence_manual" name="human"/>
 </fencedevices>
 <rm>
  <failoverdomains/>
  <resources/>
 </rm>
</cluster>
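One thing I am not sure about is name resolution: the log below is tagged with the hostname "localhost", and the EOF-on-xdr lines mention ::1, so I wonder whether the lockserver name has to match what the node reports as its own hostname and must not resolve to a loopback address. This is just my guess; here is the self-contained check I sketched (the path and the sed extraction are mine, not from any tool):

```shell
# Guess at a sanity check, not an official procedure: extract the
# <lockserver name=...> value from a copy of the config and compare it
# by hand against the node's hostname and its resolved address.
# /tmp/cluster.conf.test is an illustrative path.
cat > /tmp/cluster.conf.test <<'EOF'
<gulm>
 <lockserver name="lock02-e1"/>
</gulm>
EOF

# Pull out the configured lock server name.
lockserver=$(sed -n 's/.*<lockserver name="\([^"]*\)".*/\1/p' /tmp/cluster.conf.test)
echo "configured lockserver: $lockserver"

# On the real node I would then compare against:
#   uname -n               (should print lock02-e1)
#   getent hosts lock02-e1 (should NOT print 127.0.0.1 or ::1)
```

If the name resolved to a loopback address I could imagine the daemons binding to the wrong interface, but I have not confirmed this.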


According to /var/log/messages, lock_gulmd seems to start successfully:

May  9 21:11:54 localhost ccsd:  succeeded
May  9 21:12:23 localhost ccsd[4474]: Unable to connect to cluster infrastructure after 30 seconds.
May  9 21:12:24 localhost ccsd[4474]: cluster.conf (cluster name = alpha_cluster, version = 2) found.
May  9 21:12:35 localhost lock_gulmd_main[4551]: Forked lock_gulmd_core.
May  9 21:12:36 localhost lock_gulmd_main[4551]: Forked lock_gulmd_LT.
May  9 21:12:37 localhost lock_gulmd_main[4551]: Forked lock_gulmd_LTPX.
May  9 21:12:45 localhost lock_gulmd_core[4598]: Starting lock_gulmd_core 1.0.8. (built Sep 20 2006 10:51:58) Copyright (C) 2004 Red Hat, Inc.  All rights reserved.
May  9 21:12:45 localhost lock_gulmd_core[4598]: I am running in Standard mode.
May  9 21:12:45 localhost lock_gulmd_core[4598]: I am (lock02-e1) with ip (::ffff:192.168.102.15)
May  9 21:12:45 localhost lock_gulmd_core[4598]: This is cluster alpha_cluster
May  9 21:12:45 localhost lock_gulmd_core[4598]: I see no Masters, So I am becoming the Master.
May  9 21:12:45 localhost lock_gulmd_core[4598]: Could not send quorum update to slave lock02-e1
May  9 21:12:45 localhost lock_gulmd_core[4598]: New generation of server state. (1178712765578307)
May  9 21:12:45 localhost lock_gulmd_core[4598]: EOF on xdr (Magma::4475 ::1 idx:1 fd:6)
May  9 21:12:46 localhost lock_gulmd_LT[4602]: Starting lock_gulmd_LT 1.0.8. (built Sep 20 2006 10:51:58) Copyright (C) 2004 Red Hat, Inc.  All rights reserved.
May  9 21:12:46 localhost lock_gulmd_LT[4602]: I am running in Standard mode.
May  9 21:12:46 localhost lock_gulmd_LT[4602]: I am (lock02-e1) with ip (::ffff:192.168.102.15)
May  9 21:12:46 localhost lock_gulmd_LT[4602]: This is cluster alpha_cluster
May  9 21:12:46 localhost lock_gulmd_core[4598]: EOF on xdr (Magma::4475 ::1 idx:2 fd:7)
May  9 21:12:47 localhost lock_gulmd_LTPX[4606]: Starting lock_gulmd_LTPX 1.0.8. (built Sep 20 2006 10:51:58) Copyright (C) 2004 Red Hat, Inc.  All rights reserved.
May  9 21:12:47 localhost lock_gulmd_LTPX[4606]: I am running in Standard mode.
May  9 21:12:47 localhost lock_gulmd_LTPX[4606]: I am (lock02-e1) with ip (::ffff:192.168.102.15)
May  9 21:12:47 localhost lock_gulmd_LTPX[4606]: This is cluster alpha_cluster
May  9 21:12:47 localhost lock_gulmd_LTPX[4606]: New Master at lock02-e1 ::ffff:192.168.102.15
May  9 21:12:47 localhost lock_gulmd_LT000[4602]: New Client: idx 2 fd 7 from lock02-e1 ::ffff:192.168.102.15
May  9 21:12:47 localhost lock_gulmd_LTPX[4606]: Logged into LT000 at lock02-e1 ::ffff:192.168.102.15
May  9 21:12:47 localhost lock_gulmd_LTPX[4606]: Finished resending to LT000
May  9 21:12:47 localhost ccsd[4474]: Connected to cluster infrastruture via: GuLM Plugin v1.0.5
May  9 21:12:47 localhost ccsd[4474]: Initial status:: Quorate
May  9 21:14:37 localhost lock_gulmd: startup failed


However, it fails shortly afterwards ("lock_gulmd: startup failed").
What should I do?

Regards

------------------------------------------------------
Shirai Noriyuki
Chief Engineer Technical Div. System Create Inc
Kanda Toyo Bldg, 3-4-2 Kandakajicho
Chiyodaku Tokyo 101-0045 Japan
Tel81-3-5296-3775 Fax81-3-5296-3777
e-mail:shirai at sc-i.co.jp web:http://www.sc-i.co.jp
------------------------------------------------------




