[Linux-cluster] unable to mount gfs partition
Derek Anderson
danderso at redhat.com
Wed Jul 21 15:21:48 UTC 2004
On Wednesday 21 July 2004 10:14, Derek Anderson wrote:
> > hi,
> > i checked the dmesg, the error is :
> >
> > lock_gulm: fsid=cluster1:gfs1: Exiting gulm_mount with errors -111
> > GFS: can't mount proto = lock_gulm, table = cluster1:gfs1, hostdata =
> >
> > where as in /var/log/messages the error is :
> >
> > lock_gulm: ERROR Got a -111 trying to login to lock_gulmd. Is it running?
> > lock_gulm: ERROR cm_login failed. -111
> > lock_gulm: ERROR Got a -111 trying to start the threads.
> > lock_gulm: fsid=cluster1:gfs1: Exiting gulm_mount with errors -111
> > GFS: can't mount proto = lock_gulm, table = cluster1:gfs1, hostdata =
> >
> > i got 2 nodes in the gfs cluster. 1 is the lock_gulm server and the
> > other one is not.
> > The one that is not a lock_gulm server is giving me a mount error...
> >
> > Do I need to start the lock_gulm daemon on this server that is not the
> > lock_gulm server?
> >
> > When I start lock_gulmd on this server it gives me this error in
> > /var/log/messages:
> >
> > lock_gulmd[18399]: You are running in Standard mode.
> > lock_gulmd[18399]: I am (clu2.abc.com) with ip (192.168.11.212)
> > lock_gulmd[18399]: Forked core [18400].
> > lock_gulmd_core[18400]: ERROR [core_io.c:1029] Got error from reply:
> > (clu1:192.168.11.211) 1006:Not Allowed
> >
> > my cluster.ccs :
> >
> > cluster {
> >   name = "smsgateclu"
> >   lock_gulm {
> >     servers = ["clu1"]
> >     heartbeat_rate = 0.3
> >     allowed_misses = 1
> >   }
> > }
>
> Like tadpol said in the last post, you are most likely expired. Where are
> people getting these ridiculously low examples of heartbeat_rate and
> allowed_misses? No wonder you're fenced.
Doh! Right out of the GFS 6.0 manual, huh? I think we should change that
example (Table 6.1). Anyway, you should try something closer to the defaults
of heartbeat_rate=15 and allowed_misses=2 to keep your nodes from being
unnecessarily fenced. Depending on network traffic load you can move it
down. It's one of those things you kind of have to play with.
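For reference, a cluster.ccs using those values might look like the sketch below (keeping your own cluster name and server list; the name and servers here are just copied from your posted config):

```
cluster {
  name = "smsgateclu"
  lock_gulm {
    servers = ["clu1"]
    # Defaults: one heartbeat every 15 seconds, node expired
    # after 2 missed beats. Tune downward only once the cluster
    # is stable under your normal network load.
    heartbeat_rate = 15
    allowed_misses = 2
  }
}
```

With 0.3/1 a node is expired after missing a single heartbeat in under a second, which is why even a small network hiccup gets you fenced.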
>
> > nodes.ccs:
> >
> > nodes {
> >   clu1 {
> >     ip_interfaces {
> >       eth2 = "192.168.11.211"
> >     }
> >     fence {
> >       human {
> >         admin {
> >           ipaddr = "192.168.11.211"
> >         }
> >       }
> >     }
> >   }
> >   clu2 {
> >     ip_interfaces {
> >       eth2 = "192.168.11.212"
> >     }
> >     fence {
> >       human {
> >         admin {
> >           ipaddr = "192.168.11.212"
> >         }
> >       }
> >     }
> >   }
> > }
> >
> > fence.ccs:
> >
> > fence_devices {
> >   admin {
> >     agent = "fence_manual"
> >   }
> > }
> >
> > Please help!
> >
> > Thanks.
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> http://www.redhat.com/mailman/listinfo/linux-cluster