[Cluster-devel] cluster/group/man dlm_controld.8 gfs_controld. ...

teigland at sourceware.org teigland at sourceware.org
Thu Aug 16 20:07:54 UTC 2007


CVSROOT:	/cvs/cluster
Module name:	cluster
Changes by:	teigland at sourceware.org	2007-08-16 20:07:53

Added files:
	group/man      : dlm_controld.8 gfs_controld.8 group_tool.8 
	                 groupd.8 

Log message:
	add man pages

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/man/dlm_controld.8.diff?cvsroot=cluster&r1=NONE&r2=1.1
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/man/gfs_controld.8.diff?cvsroot=cluster&r1=NONE&r2=1.1
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/man/group_tool.8.diff?cvsroot=cluster&r1=NONE&r2=1.1
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/man/groupd.8.diff?cvsroot=cluster&r1=NONE&r2=1.1

/cvs/cluster/cluster/group/man/dlm_controld.8,v  -->  standard output
revision 1.1
--- cluster/group/man/dlm_controld.8
+++ -	2007-08-16 20:07:54.157517000 +0000
@@ -0,0 +1,129 @@
+.\"  Copyright (C) 2007 Red Hat, Inc.  All rights reserved.
+.\"  
+.\"  This copyrighted material is made available to anyone wishing to use,
+.\"  modify, copy, or redistribute it subject to the terms and conditions
+.\"  of the GNU General Public License v.2.
+
+.TH dlm_controld 8
+
+.SH NAME
+dlm_controld - daemon that configures dlm according to cluster events
+
+.SH SYNOPSIS
+.B
+dlm_controld
+[\fIOPTION\fR]...
+
+.SH DESCRIPTION
+The dlm lives in the kernel, and the cluster infrastructure (cluster
+membership and group management) lives in user space.  The dlm in the
+kernel needs to adjust/recover for certain cluster events.  It's the job
+of dlm_controld to receive these events and reconfigure the kernel dlm as
+needed.  dlm_controld controls and configures the dlm through sysfs and
+configfs files that are considered dlm-internal interfaces, not a general
+API/ABI.
+
+The dlm also exports lock state through debugfs so that dlm_controld can
+implement deadlock detection in user space.
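+
+The exact file locations are kernel-internal and may change between
+versions; as an illustration, these interfaces typically appear under
+paths such as:
+
+  /sys/kernel/config/dlm/cluster/spaces/<lockspace>/
+  /sys/kernel/config/dlm/cluster/comms/<nodeid>/
+  /sys/kernel/dlm/<lockspace>/
+  /sys/kernel/debug/dlm/<lockspace>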
+
+.SH CONFIGURATION FILE
+
+Optional cluster.conf settings are placed in the <dlm> section.
+
+.SS Global settings
+The network
+.I protocol
+can be set to "tcp" or "sctp".  The default is tcp.
+
+  <dlm protocol="tcp"/>
+
+After waiting
+.I timewarn
+centiseconds, the dlm will emit a warning via netlink.  This only applies
+to lockspaces created with the DLM_LSFL_TIMEWARN flag, and is used for
+deadlock detection.  The default is 500 (5 seconds).
+
+  <dlm timewarn="500"/>
+
+DLM kernel debug messages can be enabled by setting
+.I log_debug
+to 1.  The default is 0.
+
+  <dlm log_debug="0"/>
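+
+These attributes can also be combined on a single element; for example
+(the values shown here are the defaults):
+
+  <dlm protocol="tcp" timewarn="500" log_debug="0"/>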
+
+.SS Disabling resource directory
+
+Lockspaces usually use a resource directory to keep track of which node is
+the master of each resource.  The dlm can operate without the resource
+directory, though, by statically assigning the master of a resource using
+a hash of the resource name.
+
+  <dlm>
+    <lockspace name="foo" nodir="1"/>
+  </dlm>
+
+.SS Lock-server configuration
+
+The nodir setting can be combined with node weights to create a
+configuration where one or more selected nodes master all resources/locks.
+These "master" nodes can be viewed as "lock servers" for the other nodes.
+
+  <dlm>
+    <lockspace name="foo" nodir="1">
+      <master name="node01"/>
+    </lockspace>
+  </dlm>
+
+or,
+
+  <dlm>
+    <lockspace name="foo" nodir="1">
+      <master name="node01"/>
+      <master name="node02"/>
+    </lockspace>
+  </dlm>
+
+Lock management will be partitioned among the available masters.  There
+can be any number of masters defined.  The designated master nodes will
+master all resources/locks (according to the resource name hash).  When no
+masters are members of the lockspace, the nodes revert to the common
+fully-distributed configuration.  Recovery is faster, with little
+disruption, when a non-master node joins/leaves.
+
+There is no special mode in the dlm for this lock server configuration;
+it is simply a natural consequence of combining the "nodir" option with
+node weights.  When a lockspace has master nodes defined, each master node
+has a default weight of 1 and all non-master nodes have a weight of 0.
+Explicit
+non-zero weights can also be assigned to master nodes, e.g.
+
+  <dlm>
+    <lockspace name="foo" nodir="1">
+      <master name="node01" weight="2"/>
+      <master name="node02" weight="1"/>
+    </lockspace>
+  </dlm>
+
+In this case, node01 will master 2/3 of the total resources and node02
+will master the other 1/3.
+
+
+.SH OPTIONS
+.TP
+\fB-d\fP <num>
+Enable (1) or disable (0) the deadlock detection code.
+.TP
+\fB-D\fP
+Run the daemon in the foreground and print debug statements to stdout.
+.TP
+\fB-K\fP
+Enable kernel dlm debugging messages.
+.TP
+\fB-V\fP
+Print the version information and exit.
+.TP
+\fB-h\fP 
+Print out a help message describing available options, then exit.
+
+.SH SEE ALSO
+groupd(8)
+
/cvs/cluster/cluster/group/man/gfs_controld.8,v  -->  standard output
revision 1.1
--- cluster/group/man/gfs_controld.8
+++ -	2007-08-16 20:07:54.236851000 +0000
@@ -0,0 +1,70 @@
+.\"  Copyright (C) 2007 Red Hat, Inc.  All rights reserved.
+.\"  
+.\"  This copyrighted material is made available to anyone wishing to use,
+.\"  modify, copy, or redistribute it subject to the terms and conditions
+.\"  of the GNU General Public License v.2.
+
+.TH gfs_controld 8
+
+.SH NAME
+gfs_controld - daemon that manages mounting, unmounting, recovery and
+posix locks
+
+.SH SYNOPSIS
+.B
+gfs_controld
+[\fIOPTION\fR]...
+
+.SH DESCRIPTION
+GFS lives in the kernel, and the cluster infrastructure (cluster
+membership and group management) lives in user space.  GFS in the kernel
+needs to adjust/recover for certain cluster events.  It's the job of
+gfs_controld to receive these events and reconfigure gfs as needed.
+gfs_controld controls and configures gfs through sysfs files that are
+considered gfs-internal interfaces, not a general API/ABI.
+
+Mounting, unmounting and node failure are the main cluster events that
+gfs_controld controls.  It also manages the assignment of journals to
+different nodes.  The mount.gfs and umount.gfs programs communicate with
+gfs_controld to join/leave the mount group and receive the necessary
+options for the kernel mount.
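+
+For example, a mount command such as the following (the device and mount
+point names are only illustrative) causes mount.gfs to contact
+gfs_controld, which joins the mount group and supplies the options needed
+for the kernel mount:
+
+  mount -t gfs /dev/vg0/lv0 /mnt/gfs0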
+
+GFS also sends all posix lock operations to gfs_controld for processing.
+gfs_controld manages cluster-wide posix locks for gfs and passes results
+back to gfs in the kernel.
+
+.SH OPTIONS
+.TP
+\fB-l\fP <num>
+Limit the rate at which posix lock messages are sent to <num> messages per
+second.  A value of 0 disables the limit and allows maximum posix lock
+performance.  The default is 100.
+.TP
+\fB-w\fP
+Disable the "withdraw" feature.
+.TP
+\fB-p\fP
+Disable posix lock handling.
+.TP
+\fB-D\fP
+Run the daemon in the foreground and print debug statements to stdout.
+.TP
+\fB-P\fP
+Enable posix lock debugging messages.
+.TP
+\fB-V\fP
+Print the version information and exit.
+.TP
+\fB-h\fP 
+Print out a help message describing available options, then exit.
+
+.SH DEBUGGING 
+The gfs_controld daemon keeps a circular buffer of debug messages that can
+be dumped with the 'group_tool dump gfs' command.
+
+The state of all gfs posix locks can also be dumped from gfs_controld with
+the 'group_tool dump plocks <fsname>' command.
+
+.SH SEE ALSO
+groupd(8), group_tool(8)
+
/cvs/cluster/cluster/group/man/group_tool.8,v  -->  standard output
revision 1.1
--- cluster/group/man/group_tool.8
+++ -	2007-08-16 20:07:54.321303000 +0000
@@ -0,0 +1,67 @@
+.\"  Copyright (C) 2007 Red Hat, Inc.  All rights reserved.
+.\"  
+.\"  This copyrighted material is made available to anyone wishing to use,
+.\"  modify, copy, or redistribute it subject to the terms and conditions
+.\"  of the GNU General Public License v.2.
+
+.TH group_tool 8
+
+.SH NAME
+group_tool - display/dump information about fence, dlm and gfs groups
+
+.SH SYNOPSIS
+.B
+group_tool
+[\fISUBCOMMAND\fR] [\fIOPTION\fR]...
+
+.SH DESCRIPTION
+
+The group_tool program displays the status of fence, dlm and gfs groups.
+The information is read from the groupd daemon which controls the fenced,
+dlm_controld and gfs_controld daemons.  group_tool will also dump debug
+logs from various daemons.
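+
+The default output (the 'ls' subcommand) looks roughly like the following
+(the exact columns and values shown here are only illustrative); each
+group line is followed by the node ids of its members:
+
+  type   level name     id       state
+  fence  0     default  00010001 none
+  [1 2 3]
+  dlm    1     clvmd    00020001 none
+  [1 2 3]
+  gfs    2     gfs0     00020002 none
+  [1 2 3]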
+
+.SH SUBCOMMANDS
+
+.TP
+\fBls\fP
+displays the list of groups and their membership.  It is the default
+subcommand if none is specified.
+
+.TP
+\fBdump\fP
+dumps the debug log from groupd.
+
+.TP
+\fBdump fence\fP
+dumps the debug log from fenced.
+
+.TP
+\fBdump gfs\fP
+dumps the debug log from gfs_controld.
+
+.TP
+\fBdump plocks\fP <fsname>
+prints the posix locks on the named gfs filesystem from gfs_controld.
+
+.SH OPTIONS
+.TP
+\fB-v\fP
+Verbose output, used with the 'ls' subcommand.
+.TP
+\fB-D\fP
+Print debug statements to stdout.
+.TP
+\fB-V\fP
+Print the version information and exit.
+.TP
+\fB-h\fP 
+Print out a help message describing available options, then exit.
+
+.SH DEBUGGING
+The groupd daemon keeps a circular buffer of debug messages that can be
+dumped with the 'group_tool dump' command.
+
+.SH SEE ALSO
+groupd(8)
+
/cvs/cluster/cluster/group/man/groupd.8,v  -->  standard output
revision 1.1
--- cluster/group/man/groupd.8
+++ -	2007-08-16 20:07:54.405915000 +0000
@@ -0,0 +1,49 @@
+.\"  Copyright (C) 2007 Red Hat, Inc.  All rights reserved.
+.\"  
+.\"  This copyrighted material is made available to anyone wishing to use,
+.\"  modify, copy, or redistribute it subject to the terms and conditions
+.\"  of the GNU General Public License v.2.
+
+.TH groupd 8
+
+.SH NAME
+groupd - the group manager for fenced, dlm_controld and gfs_controld
+
+.SH SYNOPSIS
+.B
+groupd
+[\fIOPTION\fR]...
+
+.SH DESCRIPTION
+
+The group daemon, groupd, provides a compatibility layer between the
+openais closed process group (CPG) service and the fenced, dlm_controld
+and gfs_controld daemons.  groupd and its associated libgroup interface
+will go away in the future as the fencing, dlm and gfs daemons are ported
+to use the libcpg interfaces directly.  groupd translates and buffers CPG
+events between the openais CPG service and the fence/dlm/gfs systems that
+use it.  CPGs are used to represent the membership of the fence domain,
+dlm lockspaces and gfs mount groups.
+
+groupd is also a convenient place to query the status of the fence, dlm
+and gfs groups.  This is done by the group_tool program.
+
+
+.SH OPTIONS
+.TP
+\fB-D\fP
+Run the daemon in the foreground and print debug statements to stdout.
+.TP
+\fB-V\fP
+Print the version information and exit.
+.TP
+\fB-h\fP 
+Print out a help message describing available options, then exit.
+
+.SH DEBUGGING
+The groupd daemon keeps a circular buffer of debug messages that can be
+dumped with the 'group_tool dump' command.
+
+.SH SEE ALSO
+group_tool(8)
+