[Cluster-devel] conga/luci/docs user_manual.html

jparsons at sourceware.org jparsons at sourceware.org
Tue Sep 26 13:35:57 UTC 2006


CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	jparsons at sourceware.org	2006-09-26 13:35:57

Modified files:
	luci/docs      : user_manual.html 

Log message:
	additional text

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/docs/user_manual.html.diff?cvsroot=cluster&r1=1.3&r2=1.4

--- conga/luci/docs/user_manual.html	2006/09/26 12:36:12	1.3
+++ conga/luci/docs/user_manual.html	2006/09/26 13:35:57	1.4
@@ -8,7 +8,7 @@
  Conga is an agent/server architecture for remote administration of systems. The agent component is called 'ricci', and the server is called 'luci'. One luci server can communicate with many ricci agents installed on systems.
   When a system is added to a luci server to be administered, authentication is done once. No authentication is necessary from then on (unless the certificate used is revoked by a CA, but in fact, CA integration is not complete in version #1 of conga). Through the UI provided by luci, users can configure and administer storage and cluster behavior on remote systems. Communication between luci and ricci is done via XML.
    <h3>Luci Description</h3>
-    As stated above, systems to be administered are 'added' to a luci server. This is done by storing the hostname (FQDN) or IP address of the system in the luci database. When a luci server is first installed, the database is empty. It is possible, however, to import part or all of a systems database from an existing luci server when deploying a new luci server. This capability provides a means for replication of a luci server instance, as well as an easier testing path.
+    As stated above, systems to be administered are 'added' to a luci server (in the documentation that follows, the term 'registered' is also used to mean that a system has been added to a luci server to be administered remotely). This is done by storing the hostname (FQDN) or IP address of the system in the luci database. When a luci server is first installed, the database is empty. It is possible, however, to import part or all of the systems database from an existing luci server when deploying a new luci server. This capability provides a means for replication of a luci server instance, as well as an easier testing path.
   <p/>
   Every luci server instance has one user at initial installation time. This user is called 'admin'. Only the admin user may add systems to a luci server. The admin user can also create additional user accounts and determine which users are allowed to access which systems in the luci server database. It is possible to import users as a batch operation in a new luci server, just as it is possible to import systems.
     <h4>Installation of Luci</h4> 
@@ -72,6 +72,10 @@
   <img src="./clus1.png"/>
  NOTE: Until a specific cluster is selected, the cluster pages have no context associated with them. Once a cluster has been selected, however, an additional navigation table is displayed with links to nodes, services, fence devices, and failover domains.
   <img src="./clus2.png"/>
+  <p/>
+  <h4>Node List</h4>
+  Selecting 'Nodes' from the lower Navigation Table displays a list of nodes in the current cluster, along with some helpful links to services running on each node, fencing for the node, and even a link that displays recent log activity for the node in a new browser window. A dropdown menu gives administrators of the cluster a way to have a node join or leave the cluster. A node can also be fenced, rebooted, or deleted through the options in the dropdown menu.
+  <img src="./clus3.png"/>
   
   <h2>Storage Tab</h2>
  </body>