[Linux-cluster] gfs2, kvm setup
J. Bruce Fields
bfields at fieldses.org
Thu Jun 26 20:33:15 UTC 2008
On Thu, Jun 26, 2008 at 03:11:06PM -0400, bfields wrote:
> On Thu, Jun 26, 2008 at 02:35:29PM -0400, bfields wrote:
> > On Thu, Jun 26, 2008 at 10:27:33AM -0500, David Teigland wrote:
> > > This mount appears to have been successful. Usual things to collect for
> > > debugging the other problems:
> > > - any errors in /var/log/messages from all nodes
> > > - cman_tool nodes; cman_tool status from all nodes
> > > - group_tool -v from all nodes
> >
> > Thanks, I'll see what more information I can collect.
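(For reference, the three commands quoted above can be gathered from every node in one pass; this is only a sketch, assuming passwordless ssh to the four piglet nodes, and the ssh line is left commented so a dry run is harmless:)

```shell
# Hypothetical one-pass collection of the debugging output requested above.
# Node names and ssh access are assumptions; substitute your own.
nodes="piglet1 piglet2 piglet3 piglet4"
for node in $nodes; do
  for cmd in "cman_tool nodes" "cman_tool status" "group_tool -v"; do
    echo "=== $node: $cmd ==="
    # ssh "$node" $cmd          # uncomment to actually collect the output
  done
done
```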
>
> So, the first mount (on "piglet1") succeeds. The second (on "piglet2")
> returns immediately without mounting, and leaves this in the logs:
>
> gfs_controld[3035]: segfault at 0 ip 08051361 sp bfd88ae0 error 4 in gfs_controld[8048000+1d000]
Looking at the object file, that faulting address appears to fall in purge_plocks().
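(One way to do that lookup from the log line alone; the binary path is an assumption. The map entry "gfs_controld[8048000+1d000]" shows the binary loaded at 0x8048000, the usual i386 non-PIE base, so the raw ip can be passed straight to addr2line:)

```shell
# Sketch: resolve the segfault ip to a function name.
#   addr2line -f -e /usr/sbin/gfs_controld 0x08051361   # path is an assumption
# For a PIE binary or shared object you would instead pass the offset
# from the load base, which here is:
offset=$(printf '0x%x' $((0x08051361 - 0x08048000)))
echo "$offset"   # offset of the faulting instruction within the image
```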
After rerunning configure with --debug and rebuilding, the second mount
hangs instead of returning immediately without mounting.
--b.
> GFS2: fsid=: Trying to join cluster "lock_dlm", "piglet:test"
> lock_dlm: no mount options, (u)mount helpers not installed
> GFS2: fsid=: can't mount proto=lock_dlm, table=piglet:test, hostdata=
>
> At this point, cman_tool nodes, status, and group_tool -v output from
> piglet 1 are:
>
> Node Sts Inc Joined Name
> 1 M 128 2008-06-26 14:49:51 piglet1
> 2 M 132 2008-06-26 14:49:51 piglet2
> 3 M 136 2008-06-26 14:49:52 piglet3
> 4 M 132 2008-06-26 14:49:51 piglet4
> Version: 6.1.0
> Config Version: 1
> Cluster Name: piglet
> Cluster Id: 6838
> Cluster Member: Yes
> Cluster Generation: 136
> Membership state: Cluster-Member
> Nodes: 4
> Expected votes: 4
> Total votes: 4
> Quorum: 3
> Active subsystems: 6
> Flags: Dirty
> Ports Bound: 0
> Node name: piglet1
> Node ID: 1
> Multicast addresses: 239.192.26.208
> Node addresses: 192.168.122.129
> type level name id state node id local_done
> fence 0 default 00010004 none
> [1 2 3 4]
> dlm 1 test 00020001 none
> [1]
> gfs 2 test 00010001 none
> [1 2]
>
> From piglet2:
>
> Node Sts Inc Joined Name
> 1 M 132 2008-06-26 14:49:52 piglet1
> 2 M 124 2008-06-26 14:49:51 piglet2
> 3 M 136 2008-06-26 14:49:52 piglet3
> 4 M 128 2008-06-26 14:49:51 piglet4
> Version: 6.1.0
> Config Version: 1
> Cluster Name: piglet
> Cluster Id: 6838
> Cluster Member: Yes
> Cluster Generation: 136
> Membership state: Cluster-Member
> Nodes: 4
> Expected votes: 4
> Total votes: 4
> Quorum: 3
> Active subsystems: 6
> Flags: Dirty
> Ports Bound: 0
> Node name: piglet2
> Node ID: 2
> Multicast addresses: 239.192.26.208
> Node addresses: 192.168.122.130
> type level name id state node id local_done
> fence 0 default 00010004 none
> [1 2 3 4]
> gfs 2 test 00010001 none
> [1 2]
>
> From piglet3:
> Node Sts Inc Joined Name
> 1 M 136 2008-06-26 14:49:52 piglet1
> 2 M 136 2008-06-26 14:49:52 piglet2
> 3 M 124 2008-06-26 14:49:52 piglet3
> 4 M 136 2008-06-26 14:49:52 piglet4
> Version: 6.1.0
> Config Version: 1
> Cluster Name: piglet
> Cluster Id: 6838
> Cluster Member: Yes
> Cluster Generation: 136
> Membership state: Cluster-Member
> Nodes: 4
> Expected votes: 4
> Total votes: 4
> Quorum: 3
> Active subsystems: 7
> Flags: Dirty
> Ports Bound: 0
> Node name: piglet3
> Node ID: 3
> Multicast addresses: 239.192.26.208
> Node addresses: 192.168.122.131
> type level name id state node id local_done
> fence 0 default 00010004 none
> [1 2 3 4]
>
> From piglet4:
> Node Sts Inc Joined Name
> 1 M 132 2008-06-26 14:49:51 piglet1
> 2 M 128 2008-06-26 14:49:51 piglet2
> 3 M 136 2008-06-26 14:49:52 piglet3
> 4 M 124 2008-06-26 14:49:50 piglet4
> Version: 6.1.0
> Config Version: 1
> Cluster Name: piglet
> Cluster Id: 6838
> Cluster Member: Yes
> Cluster Generation: 136
> Membership state: Cluster-Member
> Nodes: 4
> Expected votes: 4
> Total votes: 4
> Quorum: 3
> Active subsystems: 7
> Flags: Dirty
> Ports Bound: 0
> Node name: piglet4
> Node ID: 4
> Multicast addresses: 239.192.26.208
> Node addresses: 192.168.122.132
> type level name id state node id local_done
> fence 0 default 00010004 none
> [1 2 3 4]
>
> --b.