[Linux-cluster] Re: Starting up two of three nodes that compose a cluster

carlopmart carlopmart at gmail.com
Fri Sep 21 15:29:22 UTC 2007


David Teigland wrote:
> On Fri, Sep 21, 2007 at 05:02:18PM +0200, carlopmart wrote:
>> David Teigland wrote:
>>> On Thu, Sep 20, 2007 at 11:40:55AM +0200, carlopmart wrote:
>>>> Please, any hints?
>>>>
>>>> -------- Original Message --------
>>>> Subject: Starting up two of three nodes that compose a cluster
>>>> Date: Wed, 19 Sep 2007 14:51:46 +0200
>>>> From: carlopmart <carlopmart at gmail.com>
>>>> To: linux clustering <linux-cluster at redhat.com>
>>>>
>>>> Hi all,
>>>>
>>>> I have set up a RHEL5-based cluster with three nodes. Sometimes I need
>>>> to start only two of these three nodes, but the cluster services that I
>>>> configured do not start (fenced fails). Is it not possible to start up
>>>> only two nodes of a three-node cluster? Maybe I need to adjust the votes
>>>> parameter to two instead of three?
>>> Could you be more specific about what you run, where, what happens, and
>>> what messages you see?
>>>
>>> Dave
>>>
>>>
>> Yes,
>>
>>  First, I attached my cluster.conf. /etc/init.d/cman starts and returns
>> OK, but when I try to mount my GFS partition it returns this error:
>>
>> [root@haldir cluster]# service mountgfs start
>> Mounting GFS filesystems:  /sbin/mount.gfs: lock_dlm_join: gfs_controld 
>> join error: -22
>> /sbin/mount.gfs: error mounting lockproto lock_dlm
> 
> So an error is coming back from gfs_controld on mount.  Please do the
> steps manually, without init scripts or other scripts, so we know exactly
> what steps fail.  And look in /var/log/messages for anything from
> gfs_controld.  If there are none, send the output of 'group_tool -v;
> group_tool dump gfs' after the failed mount.
> 
> Dave
> 
> 
Hi Dave,

  When I try to mount the GFS partition, it fails:

  [root@thranduil log]# mount -t gfs /dev/xvdc1 /data
/sbin/mount.gfs: lock_dlm_join: gfs_controld join error: -22
/sbin/mount.gfs: error mounting lockproto lock_dlm
[root@thranduil log]#
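
If it is useful, a quick way for me to rule out a dead daemon before
retrying the mount is something like this (my own extra check, not one of
the steps you listed):

[root@thranduil log]# ps -e | egrep 'groupd|fenced|dlm_controld|gfs_controld'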

Output of group_tool command:

[root@thranduil log]# group_tool -v; group_tool dump gfs
type             level name     id       state node id local_done
fence            0     default  00010001 JOIN_START_WAIT 1 100010001 0
[1]
1190386130 listen 1
1190386130 cpg 4
1190386130 groupd 6
1190386130 uevent 7
1190386130 plocks 10
1190386130 setup done
1190386167 client 6: join /data gfs lock_dlm XenDomUcluster:datavol01 rw /dev/xvdc1
1190386167 mount: /data gfs lock_dlm XenDomUcluster:datavol01 rw /dev/xvdc1
1190386167 datavol01 cluster name matches: XenDomUcluster
1190386167 mount: not in default fence domain
1190386167 datavol01 do_mount: rv -22
1190386167 client 6 fd 11 dead
1190386167 client 6 fd -1 dead
1190386228 client 6: join /data gfs lock_dlm XenDomUcluster:datavol01 rw /dev/xvdc1
1190386228 mount: /data gfs lock_dlm XenDomUcluster:datavol01 rw /dev/xvdc1
1190386228 datavol01 cluster name matches: XenDomUcluster
1190386228 mount: not in default fence domain
1190386228 datavol01 do_mount: rv -22
1190386228 client 6 fd 11 dead
1190386228 client 6 fd -1 dead
1190388485 client 6: join /data gfs lock_dlm XenDomUcluster:datavol01 rw /dev/xvdc1
1190388485 mount: /data gfs lock_dlm XenDomUcluster:datavol01 rw /dev/xvdc1
1190388485 datavol01 cluster name matches: XenDomUcluster
1190388485 mount: not in default fence domain
1190388485 datavol01 do_mount: rv -22
1190388485 client 6 fd 11 dead
1190388485 client 6 fd -1 dead
1190388530 client 6: dump
[root@thranduil log]#
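
The dump repeats "mount: not in default fence domain" before every failed
mount, so it looks like fenced on this node never finishes joining the
default fence domain, and gfs_controld then rejects the mount with -22.
If it helps, these are the checks I plan to run next (standard cman/fence
tool commands as far as I know; tell me if you want different output):

[root@thranduil log]# cman_tool status          # quorum, total votes, expected votes
[root@thranduil log]# cman_tool nodes           # which of the three nodes this node can see
[root@thranduil log]# group_tool ls             # state of the fence group
[root@thranduil log]# group_tool dump fence     # fenced's own debug log
[root@thranduil log]# fence_tool join           # ask fenced to join the default domain by hand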

  Thanks David.
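
PS: about the votes question from my first mail: if lowering the expected
votes is indeed the right way to let two of the three nodes run on their
own, my understanding is that it can be changed at runtime with cman_tool
(the value 2 below is only my guess, not something you have confirmed):

[root@thranduil ~]# cman_tool status | grep -i votes     # current total and expected votes
[root@thranduil ~]# cman_tool expected -e 2              # tell cman to expect only two votes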


-- 
CL Martinez
carlopmart {at} gmail {d0t} com



