[Cluster-devel] conga/luci/docs user_manual.html ss_login1.png ...

jparsons at sourceware.org jparsons at sourceware.org
Fri Sep 15 03:57:42 UTC 2006


CVSROOT:	/cvs/cluster
Module name:	conga
Changes by:	jparsons at sourceware.org	2006-09-15 03:57:41

Added files:
	luci/docs      : user_manual.html ss_login1.png ss_homebase1.png 

Log message:
	beginning of serious user manual

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/docs/user_manual.html.diff?cvsroot=cluster&r1=NONE&r2=1.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/docs/ss_login1.png.diff?cvsroot=cluster&r1=NONE&r2=1.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/docs/ss_homebase1.png.diff?cvsroot=cluster&r1=NONE&r2=1.1

/cvs/cluster/conga/luci/docs/user_manual.html,v  -->  standard output
revision 1.1
--- conga/luci/docs/user_manual.html
+++ -	2006-09-15 03:57:42.679039000 +0000
@@ -0,0 +1,57 @@
+<html>
+ <head>
+  <title>Conga User Manual</title>
+ </head>
+ <body>
+  <h2>Introduction</h2>
+   <h3>Conga Architecture</h3>
+  Conga is an agent/server architecture for the remote administration of systems. The agent component is called 'ricci', and the server is called 'luci'. One luci server can communicate with many ricci agents installed on systems.
+  When a system is added to a luci server to be administered, authentication is done once. No authentication is necessary from then on (unless the certificate used is revoked by a CA; CA integration is not complete in version 1 of conga, however). Through the UI provided by luci, users can configure and administer storage and cluster behavior on remote systems. Communication between luci and ricci is done via XML.
+   <h3>Luci Description</h3>
+    As stated above, systems to be administered are 'added' to a luci server. This is done by storing the hostname (FQDN) or IP address of the system in the luci database. When a luci server is first installed, the database is empty. It is possible, however, to import part or all of the systems database from an existing luci server when deploying a new one. This capability provides a means for replicating a luci server instance, as well as an easier testing path.
+  <p/>
+  Every luci server instance has one user at initial installation time. This user is called 'admin'. Only the admin user may add systems to a luci server. The admin user can also create additional user accounts and determine which users are allowed to access which systems in the luci server database. It is possible to import users as a batch operation in a new luci server, just as it is possible to import systems.
+    <h4>Installation of Luci</h4> 
+    After the necessary luci RPMs are installed, the server can be started with the command "service luci start". The first time the server is started, a couple of events take place. First, the server is initialized by generating SSL certificates for https. An initial password for the admin user is generated as a random value. The admin password can be set at any time by running the /usr/sbin/luci_admin application and specifying 'password' on the command line. luci_admin can be run before luci is started for the first time to set up an initial password for the admin account. Other utilities available from luci_admin are:
+  <ul><li>backup:  This option backs the luci server up to a file.</li>
+      <li>restore: This restores a luci site from a backup file.</li>
+      <li>init: This option regenerates SSL certificates.</li>
+      <li>help: Shows usage information.</li>
+  </ul>
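+  <p/>
+  A typical first-time setup session might look like the following (the prompts and exact invocations are illustrative and may differ between versions):
+  <pre>
+  # luci_admin password      <i># set the admin password before first start</i>
+  # service luci start       <i># initialize and start the luci server</i>
+  # luci_admin backup        <i># back the luci site up to a file</i>
+  # luci_admin restore       <i># restore the site from a backup file</i>
+  </pre>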
+    <h4>Logging In</h4>
+    With the luci service running and an admin password set up, the next step is to log in to the server. Remember to specify https in the browser. Port 8084 is the default port for luci, but this value can be easily changed in /etc/sysconfig/luci.
+  <br/>
+  Typical URL: https://hostname.org:8084/luci
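+  <br/>
+  To change the port, edit /etc/sysconfig/luci and restart the service. A minimal sketch follows; the variable name shown is an assumption and may differ in your version:
+  <pre>
+  # /etc/sysconfig/luci  <i>(variable name is illustrative)</i>
+  LUCI_HTTPS_PORT=8084
+  </pre>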
+  <p/>
+  Here is a screenshot of the luci login page.<br/>
+  <img src="./ss_login1.png"/>
+  <p/>
+  Enter 'admin' as the user name, enter the admin password in the appropriate field, and then click 'log in'.
+    <h4>Organization</h4>
+    luci is currently set up with three tabs. They are:
+    <ul><li>Homebase: This is where admin tools for adding and deleting systems or users are located. Only admin is allowed access to this tab.</li>
+        <li>Cluster: If any clusters are set up with the luci server, they will show up in a list in this tab. If a user other than admin navigates to the cluster tab, only those clusters that the user has permission to manage show up in the cluster list. The cluster tab provides a means for creating and configuring clusters.</li>
+       <li>Storage: Remote administration of storage is available through this page in the luci site.</li>
+    </ul>
+  <h2>Homebase Tab</h2>
+  The following figure shows the initial view of the Homebase tab.<br/>
+  <img src="./ss_homebase1.png"/>
+  <p/>
+  With no systems registered with a luci server, the homebase page provides three initial utilities to the admin:
+  <ul><li>Add a system: Adding a single system to luci in this first release makes the system available for remote storage administration. In addition to storage administration, conga also provides remote package retrieval and installation, chkconfig functionality, full remote cluster administration, and module support for filtering and retrieving log entries. The storage and cluster UIs use some of this broad functionality, but at this time a UI has not been built for everything that conga can do remotely. <p/>
+  To add a system, click on the 'Add a System' link in the left-hand navigation table. This will load the following page:
+  <h1>SCREENSHOT</h1>
+  The fully qualified domain name OR IP address of the system is entered in the System Hostname field. The root password for the system is entered in the adjacent field. As a convenience for adding multiple systems at once, an 'Add Another Entry' button is provided. When this button is clicked and at least one additional entry row has been provided, a checkbox is also made available that can be selected if all systems specified for addition to the luci server share the same password.
+  <h1>SCREENSHOT MULTI ROW</h1>
+  <p/>
+  If the System Hostname is left blank for any row, it is disregarded when the list of systems is submitted for addition. If systems in the list of rows do NOT share the same password (and the checkbox is, of course, left unchecked) and one or more passwords are incorrect, an error message is generated for each system that has an incorrect password. Those systems listed with correct passwords are added to the luci server. In addition to incorrect password problems, an error message is also displayed if luci is unable to connect to the ricci agent on a system. Finally, if a system is entered on the form for addition and it is ALREADY being managed by the luci server, it is not added again - but the admin is informed via an error message.</li>
+  <li>Add a Cluster: This page looks much like the Add a System page, but only one system may be listed. Any node in the cluster may be used for this entry. Luci will contact the specified system and attempt to authenticate with the password provided. If successful, the complete list of cluster nodes will be returned, and a table will be populated with the node names and an adjacent password field for each node. The initial node that was entered appears in the list with its password field marked as 'authenticated'. There is a convenience checkbox if all nodes share the same password. NOTE: At this point, no cluster nodes have been added to luci - not even the initial node that successfully authenticated and was used to retrieve the cluster node list. The cluster and its nodes are only added after the entire list has been submitted with the submit button and all nodes authenticate.
+  <p/>
+If any nodes fail to authenticate, they appear in the list in a red font so that the password can be corrected and the node list submitted again. Luci has a strict policy about adding a cluster to be managed: a cluster cannot be added unless ALL nodes can be reached and authenticated.
+  <p/>When a cluster is added to a luci server, all nodes are also added as general systems so that storage may be managed on them. If this is not desired, the individual systems may be removed from luci while remote cluster management capability is maintained.</li></ul>
+  
+  <h2>Cluster Tab</h2>
+  <h2>Storage Tab</h2>
+ </body>
+</html>
+
/cvs/cluster/conga/luci/docs/ss_login1.png,v  -->  standard output
revision 1.1
Binary files /cvs/cluster/conga/luci/docs/ss_login1.png and - differ
/cvs/cluster/conga/luci/docs/ss_homebase1.png,v  -->  standard output
revision 1.1
Binary files /cvs/cluster/conga/luci/docs/ss_homebase1.png and - differ
