[Cluster-devel] conga/luci cluster/form-macros site/luci/Exten ...

rmccabe at sourceware.org
Thu May 3 20:16:57 UTC 2007


CVSROOT:	/cvs/cluster
Module name:	conga
Branch: 	EXPERIMENTAL
Changes by:	rmccabe at sourceware.org	2007-05-03 20:16:38

Modified files:
	luci/cluster   : form-macros 
	luci/site/luci/Extensions: FenceHandler.py HelperFunctions.py 
	                           StorageReport.py Variable.py 
	                           cluster_adapters.py 
	                           conga_constants.py 
	                           conga_storage_constants.py 
	                           homebase_adapters.py 
	                           ricci_communicator.py 
	                           ricci_defines.py storage_adapters.py 
	                           system_adapters.py 
Added files:
	luci/site/luci/Extensions: LuciDB.py ResourceHandler.py 
	                           RicciQueries.py 
	luci/site/luci/Extensions/ClusterModel: Apache.py 
	                                        BaseResource.py 
	                                        Cluster.py 
	                                        ClusterNode.py 
	                                        ClusterNodes.py 
	                                        Clusterfs.py Cman.py 
	                                        Device.py 
	                                        FailoverDomain.py 
	                                        FailoverDomainNode.py 
	                                        FailoverDomains.py 
	                                        Fence.py FenceDaemon.py 
	                                        FenceDevice.py 
	                                        FenceDevices.py 
	                                        FenceXVMd.py Fs.py 
	                                        GeneralError.py Gulm.py 
	                                        Heuristic.py Ip.py 
	                                        LVM.py Lockserver.py 
	                                        Method.py 
	                                        ModelBuilder.py 
	                                        Multicast.py MySQL.py 
	                                        NFSClient.py 
	                                        NFSExport.py Netfs.py 
	                                        OpenLDAP.py Postgres8.py 
	                                        QuorumD.py RefObject.py 
	                                        Resources.py Rm.py 
	                                        Samba.py Script.py 
	                                        Service.py TagObject.py 
	                                        Tomcat5.py Totem.py 
	                                        Vm.py __init__.py 
Removed files:
	luci/site/luci/Extensions: Apache.py BaseResource.py Cluster.py 
	                           ClusterNode.py ClusterNodes.py 
	                           Clusterfs.py Cman.py Device.py 
	                           FailoverDomain.py 
	                           FailoverDomainNode.py 
	                           FailoverDomains.py Fence.py 
	                           FenceDaemon.py FenceDevice.py 
	                           FenceDevices.py FenceXVMd.py Fs.py 
	                           GeneralError.py Gulm.py Heuristic.py 
	                           Ip.py LVM.py Lockserver.py Method.py 
	                           ModelBuilder.py Multicast.py MySQL.py 
	                           NFSClient.py NFSExport.py Netfs.py 
	                           OpenLDAP.py Postgres8.py QuorumD.py 
	                           README.txt RefObject.py Resources.py 
	                           Rm.py Samba.py Script.py Service.py 
	                           ServiceData.py TagObject.py 
	                           Tomcat5.py Totem.py Vm.py 
	                           clui_constants.py permission_check.py 
	                           ricci_bridge.py 

Log message:
	Big luci code refactor and cleanup, part 1.
	
	This is broken right now. Don't use this branch.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/cluster/form-macros.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.198&r2=1.198.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/LuciDB.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ResourceHandler.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/RicciQueries.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FenceHandler.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.18&r2=1.18.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/HelperFunctions.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.6&r2=1.6.4.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/StorageReport.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.23&r2=1.23.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Variable.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.4&r2=1.4.8.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/cluster_adapters.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.255&r2=1.255.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/conga_constants.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.39&r2=1.39.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/conga_storage_constants.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.8&r2=1.8.8.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/homebase_adapters.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.50&r2=1.50.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ricci_communicator.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.25&r2=1.25.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ricci_defines.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=1.1.8.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/storage_adapters.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.9&r2=1.9.4.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/system_adapters.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=1.2.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Apache.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/BaseResource.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Cluster.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.5&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterNode.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterNodes.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Clusterfs.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Cman.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Device.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FailoverDomain.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FailoverDomainNode.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FailoverDomains.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Fence.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FenceDaemon.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FenceDevice.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.3&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FenceDevices.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/FenceXVMd.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Fs.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/GeneralError.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Gulm.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Heuristic.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Ip.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/LVM.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Lockserver.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Method.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ModelBuilder.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.26&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Multicast.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/MySQL.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/NFSClient.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/NFSExport.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Netfs.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/OpenLDAP.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Postgres8.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/QuorumD.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/README.txt.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/RefObject.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Resources.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Rm.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Samba.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Script.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Service.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ServiceData.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/TagObject.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.3&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Tomcat5.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Totem.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/Vm.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.4&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/clui_constants.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.2&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/permission_check.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.1&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ricci_bridge.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=1.62&r2=NONE
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Apache.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/BaseResource.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Cluster.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/ClusterNode.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/ClusterNodes.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Clusterfs.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Cman.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Device.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/FailoverDomain.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/FailoverDomainNode.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/FailoverDomains.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Fence.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/FenceDaemon.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/FenceDevice.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/FenceDevices.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/FenceXVMd.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Fs.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/GeneralError.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Gulm.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Heuristic.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Ip.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/LVM.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Lockserver.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Method.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/ModelBuilder.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Multicast.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/MySQL.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/NFSClient.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/NFSExport.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Netfs.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/OpenLDAP.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Postgres8.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/QuorumD.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/RefObject.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Resources.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Rm.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Samba.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Script.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Service.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/TagObject.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Tomcat5.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Totem.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/Vm.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/conga/luci/site/luci/Extensions/ClusterModel/__init__.py.diff?cvsroot=cluster&only_with_tag=EXPERIMENTAL&r1=NONE&r2=1.1.2.1

--- conga/luci/cluster/form-macros	2007/03/15 16:41:11	1.198
+++ conga/luci/cluster/form-macros	2007/05/03 20:16:37	1.198.2.1
@@ -4062,9 +4062,76 @@
 	<div class="service_comp_list">
 	<table class="systemsTable">
 		<thead class="systemsTable">
-			<tr class="systemsTable"><td class="systemsTable">
-				<p class="reshdr">Properties for <tal:block tal:replace="vminfo/name | string:virtual machine service"/></p>
-			</td></tr>
+			<tr class="systemsTable">
+				<td class="systemsTable">
+					<p class="reshdr">Properties for <tal:block tal:replace="vminfo/name | string:virtual machine service"/></p>
+				</td>
+			</tr>
+
+			<tr class="systemsTable">
+				<td class="cluster service service_action"
+					tal:condition="python: sinfo and 'innermap' in sinfo">
+				<form method="post">
+					<input type="hidden" name="pagetype" tal:attributes="
+						value request/pagetype | request/form/pagetype | nothing" />
+					<select name="gourl"
+						tal:define="global innermap sinfo/innermap;
+						starturls innermap/links">
+
+						<option value="">Choose a Task...</option>
+						<tal:block tal:condition="running">
+							<option
+								tal:attributes="value innermap/restarturl">Restart this service</option>
+
+							<option
+								tal:attributes="value innermap/disableurl">Disable this service</option>
+
+							<option value="">----------</option>
+
+							<tal:block tal:repeat="starturl innermap/links">
+								<option
+									tal:condition="not:exists: starturl/migrate"
+									tal:attributes="value starturl/url">Relocate this service to <span tal:replace="starturl/nodename" />
+								</option>
+							</tal:block>
+
+							<tal:block tal:condition="svc/is_vm | nothing">
+								<option value="">----------</option>
+								<tal:block tal:repeat="starturl innermap/links">
+									<option
+										tal:condition="exists: starturl/migrate"
+										tal:attributes="value starturl/url">Migrate this service to <span tal:replace="starturl/nodename" /></option>
+								</tal:block>
+							</tal:block>
+						</tal:block>
+
+						<tal:block tal:condition="not: running">
+							<option
+								tal:attributes="value innermap/enableurl">Enable this service</option>
+							<option value="">----------</option>
+
+							<tal:block tal:repeat="starturl innermap/links">
+								<option
+									tal:condition="not:exists: starturl/migrate"
+									tal:attributes="value starturl/url">Start this service on <span tal:replace="starturl/nodename" />
+								</option>
+							</tal:block>
+
+							<option value="">----------</option>
+
+							<option
+								tal:attributes="value innermap/delurl | nothing"
+								tal:content="string:Delete this service" />
+						</tal:block>
+					</select>
+
+					<input type="button" value="Go"
+						onclick="if (this.form.gourl[this.form.gourl.selectedIndex].value && confirm(this.form.gourl[this.form.gourl.selectedIndex].text + '?')) return dropdown(this.form.gourl)" />
+				</form>
+				</td>
+			</tr>
+		</thead>
+
 		<tfoot class="systemsTable">
 			<tr class="systemsTable">
 				<td>Automatically start this service</td>
@@ -4382,7 +4449,7 @@
 	<table class="cluster service" width="100%">
 		<tr class="cluster service info_top">
 			<td class="cluster service service_name">
-				<strong class="service_name">Service Name:</strong>
+				<strong class="service_name">Service Name</strong>
 				<span
 					tal:content="sinfo/name | nothing"
 					tal:attributes="class python: running and 'running' or 'stopped'" />
@@ -4413,7 +4480,7 @@
 								</option>
 							</tal:block>
 
-							<tal:block tal:condition="svc/is_vm | nothing">
+							<tal:block tal:condition="innermap/is_vm | nothing">
 								<option value="">----------</option>
 								<tal:block tal:repeat="starturl innermap/links">
 									<option
@@ -4451,8 +4518,18 @@
 
 		<tr class="cluster service info_middle">
 			<td class="cluster service service_status">
-				<strong>Service Status:</strong>
-				<span tal:replace="python: running and 'Running' or 'Stopped'" />
+				<strong>Service Status</strong>
+
+				<tal:block tal:condition="running">
+					<span tal:condition="exists:innermap/current"
+						tal:replace="innermap/current | nothing" />
+					<span tal:condition="not:exists:innermap/current"
+						tal:replace="string:Running" />
+				</tal:block>
+
+				<tal:block tal:condition="not:running">
+					Stopped
+				</tal:block>
 			</td>
 		</tr>
 	</table>
--- conga/luci/site/luci/Extensions/FenceHandler.py	2007/02/12 23:26:54	1.18
+++ conga/luci/site/luci/Extensions/FenceHandler.py	2007/05/03 20:16:38	1.18.2.1
@@ -1,6 +1,7 @@
-import re
-from Device import Device
-from conga_constants import FD_VAL_SUCCESS, FD_VAL_FAIL
+from ClusterModel.Device import Device
+
+FD_VAL_FAIL = 1
+FD_VAL_SUCCESS = 0
 
 FD_NEW_SUCCESS = 'New %s successfully added to cluster'
 FD_UPDATE_SUCCESS = 'Fence device %s successfully updated'
@@ -144,10 +145,11 @@
 	'fence_manual': ['name']
 }
 
-ILLEGAL_CHARS = re.compile(':| ')
 
 def makeNCName(name):
 	### name must conform to relaxNG ID type ##
+	import re
+	ILLEGAL_CHARS = re.compile(':| ')
 	return ILLEGAL_CHARS.sub('_', name)
 
 def check_unique_fd_name(model, name):
@@ -158,7 +160,7 @@
 	return True
 
 def validateNewFenceDevice(form, model):
-	from FenceDevice import FenceDevice
+	from ClusterModel.FenceDevice import FenceDevice
 	fencedev = FenceDevice()
 
 	try:
@@ -174,7 +176,6 @@
 	return (FD_VAL_FAIL, ret)
 
 def validateFenceDevice(form, model):
-	from FenceDevice import FenceDevice
 	try:
 		old_fence_name = form['orig_name'].strip()
 		if not old_fence_name:
--- conga/luci/site/luci/Extensions/HelperFunctions.py	2006/12/06 22:34:09	1.6
+++ conga/luci/site/luci/Extensions/HelperFunctions.py	2007/05/03 20:16:38	1.6.4.1
@@ -1,27 +1,63 @@
-
-import AccessControl
-
+from AccessControl import getSecurityManager
+from ricci_communicator import RicciCommunicator, CERTS_DIR_PATH
+from conga_constants import PLONE_ROOT
+from LuciSyslog import LuciSyslog
 import threading
-from ricci_communicator import RicciCommunicator
 
+# luci_log is referenced by the helpers below; guard instantiation as
+# elsewhere in luci so debug_verbose() calls don't raise NameError
+try:
+	luci_log = LuciSyslog()
+except:
+	pass
+
+def siteIsSetup(self):
+	import os
+	try:
+		return os.path.isfile('%sprivkey.pem' % CERTS_DIR_PATH) and os.path.isfile('%scacert.pem' % CERTS_DIR_PATH)
+	except:
+		pass
+	return False
+
+def strFilter(regex, replaceChar, arg):
+	import re
+	return re.sub(regex, replaceChar, arg)
+
+def userAuthenticated(self):
+	try:
+		if (isAdmin(self) or getSecurityManager().getUser().has_role('Authenticated', self.restrictedTraverse(PLONE_ROOT))):
+			return True
+	except Exception, e:
+		luci_log.debug_verbose('UA0: %s' % str(e)) 
+	return False
+
+def isAdmin(self):
+	try:
+		return getSecurityManager().getUser().has_role('Owner', self.restrictedTraverse(PLONE_ROOT))
+	except Exception, e:
+		luci_log.debug_verbose('IA0: %s' % str(e)) 
+	return False
+
+def userIsAdmin(self, userId):
+	try:
+		return self.portal_membership.getMemberById(userId).has_role('Owner', self.restrictedTraverse(PLONE_ROOT))
+	except Exception, e:
+		luci_log.debug_verbose('UIA0: %s: %s' % (userId, str(e)))
+	return False
+
+def resolveOSType(os_str):
+	if not os_str or os_str.find('Tikanga') != (-1) or os_str.find('FC6') != (-1) or os_str.find('Zod') != (-1):
+		return 'rhel5'
+	else:
+		return 'rhel4'
 
 def add_commas(self, str1, str2):
-    return str1 + '; ' + str2
-
+  return '%s; %s' % (str1, str2)
 
 def allowed_systems(self, user, systems):
   allowed = []
+  sm = getSecurityManager()
+  user = sm.getUser()
   for system in systems:
     #Does this take too long?
-    sm = AccessControl.getSecurityManager()
-    user =  sm.getUser()
-    if user.has_permission("View",system[1]):
+    if user.has_permission('View', system[1]):
       allowed.append(system)
   return allowed
 
-
-def access_to_host_allowed(self, hostname, allowed_systems):
-  for system in allowed_systems:
+def access_to_host_allowed(self, hostname, allowed_systems_list):
+  for system in allowed_systems_list:
     if system[0] == hostname:
       if len(self.allowed_systems(None, [system])) == 1:
           return True
@@ -31,7 +67,6 @@
 
 
 
-
 class Worker(threading.Thread):
     def __init__(self,
                  mutex,
@@ -192,7 +227,7 @@
     elif units.lower() == 'tb':
         return 1024*1024*1024*1024.0
     else:
-        raise "invalid size unit"
+        raise Exception, 'invalid size unit'
 
 def convert_bytes(bytes, units):
     c = int(bytes) / get_units_multiplier(units)
--- conga/luci/site/luci/Extensions/StorageReport.py	2007/03/05 20:45:17	1.23
+++ conga/luci/site/luci/Extensions/StorageReport.py	2007/05/03 20:16:38	1.23.2.1
@@ -6,7 +6,6 @@
 
 from Variable import parse_variable, Variable, VariableList
 from ricci_defines import *
-from PropsObject import PropsObject
 from conga_storage_constants import *
 from HelperFunctions import *
 
@@ -14,7 +13,7 @@
 
 
 
-SESSION_STORAGE_XML_REPORT='storage_xml_report_dir'
+SESSION_STORAGE_XML_REPORT = 'storage_xml_report_dir'
 
 
 
@@ -36,7 +35,7 @@
             except:
                 pass
         if self.__mappers == None or self.__m_temps == None:
-            raise 'invalid storage_xml_report'
+            raise Exception, 'invalid storage_xml_report'
         
         self.__mapp_dir = {} # holds mapper lists by mapper_type
         for mapp_node in self.__mappers:
@@ -85,7 +84,7 @@
     
     def get_mapper(self, id):
         if id == '':
-            raise 'empty mapper_id!!!'
+            raise Exception, 'empty mapper_id!!!'
         for m in self.__mappers:
             if m.getAttribute('mapper_id') == id:
                 return m.cloneNode(True)
@@ -188,7 +187,7 @@
                 if node.nodeName == PROPS_TAG:
                     props = node.cloneNode(True)
         if props == None:
-            raise 'mapper missing properties tag'
+            raise Exception, 'mapper missing properties tag'
         return props
     
     
@@ -334,9 +333,9 @@
     if succ_v.get_value() != True:
         # error
         if err_code_v.get_value() == -1:
-            raise Exception, 'Generic error on host:\n\n' + err_desc_v.get_value()
+            raise Exception, 'Generic error on host:\n\n%s' % err_desc_v.get_value()
         else:
-            raise Exception, 'Host responded: ' + err_desc_v.get_value()
+            raise Exception, 'Host responded: %s' % err_desc_v.get_value()
     
     #xml_report = fr_r.toxml()
     xml_report = fr_r
@@ -444,9 +443,9 @@
     
     type = mapper.getAttribute('mapper_type')
     pretty_type, pretty_target_name, pretty_source_name = get_pretty_mapper_info(type)
-    pretty_name = mapper_id.replace(type + ':', '').replace('/dev/', '')
-    pretty_targets_name = pretty_target_name + 's'
-    pretty_sources_name = pretty_source_name + 's'
+    pretty_name = mapper_id.replace('%s:' % type, '').replace('/dev/', '')
+    pretty_targets_name = '%ss' % pretty_target_name
+    pretty_sources_name = '%ss' % pretty_source_name
     icon_name, dummy1, dummy2 = get_mapper_icons(type)
     color = 'black'
     
@@ -474,21 +473,25 @@
     actions = []
     if removable:
         action = {'name' : 'Remove',
-                  'msg'  : 'Are you sure you want to remove ' + pretty_type + ' \\\'' + pretty_name + '\\\'?',
+                  'msg'  : 'Are you sure you want to remove %s \\\'%s\\\'?' % (pretty_type, pretty_name),
                   'link' : ''}
         actions.append(action)
     if type == MAPPER_VG_TYPE or type == MAPPER_MDRAID_TYPE or type == MAPPER_ATARAID_TYPE or type == MAPPER_MULTIPATH_TYPE:
-        action = {'name' : 'Add ' + mapper_ret['pretty_sources_name'], 
+        action = {'name' : 'Add %s' % mapper_ret['pretty_sources_name'], 
                   'msg'  : '',
-                  'link' : './?' + PAGETYPE + '=' + ADD_SOURCES + '&' + PT_MAPPER_ID + '=' + mapper_ret['mapper_id'] + '&' + PT_MAPPER_TYPE + '=' + mapper_ret['mapper_type']}
+                  'link' : './?%s=%s&%s=%s&%s=%s' % (PAGETYPE, ADD_SOURCES, PT_MAPPER_ID, mapper_ret['mapper_id'], PT_MAPPER_TYPE, mapper_ret['mapper_type'])}
         actions.append(action)
     if type == MAPPER_VG_TYPE:
         for nt in mapper_ret['new_targets']:
             if nt['props']['snapshot']['value'] == 'false':
                 if nt['new']:
-                    action = {'name' : 'New ' + mapper_ret['pretty_target_name'], 
+                    action = {'name' : 'New %s' % mapper_ret['pretty_target_name'], 
                               'msg'  : '',
-                              'link' : './?' + PAGETYPE + '=' + VIEW_BD + '&' + PT_MAPPER_ID + '=' + mapper_ret['mapper_id'] + '&' + PT_MAPPER_TYPE + '=' + mapper_ret['mapper_type'] + '&' + PT_PATH + '=' + nt['path']}
+                              'link' : './?%s=%s&%s=%s&%s=%s&%s=%s' \
+                                 % (PAGETYPE, VIEW_BD,
+                                    PT_MAPPER_ID, mapper_ret['mapper_id'], \
+                                    PT_MAPPER_TYPE, mapper_ret['mapper_type'], \
+                                    PT_PATH, nt['path'])}
                     actions.append(action)
                     break
     mapper_ret['actions'] = actions
@@ -515,7 +518,8 @@
         if snap['props']['snapshot']['value'] != 'true':
             continue
         orig_name = snap['props']['snapshot_origin']['value']
-        snap['description'] += ', ' + orig_name + '\'s Snapshot'
+        snap['description'] = '%s, %s\'s Snapshot' \
+            % (snap['description'], orig_name)
         
         # find origin
         for t in mapper['targets']:
@@ -628,9 +632,9 @@
     
     type = mapper.getAttribute('mapper_type')
     pretty_type, pretty_target_name, pretty_source_name = get_pretty_mapper_info(type)
-    pretty_name = mapper_id.replace(type + ':', '').replace('/dev/', '')
-    pretty_targets_name = pretty_target_name + 's'
-    pretty_sources_name = pretty_source_name + 's'
+    pretty_name = mapper_id.replace('%s:' % type, '').replace('/dev/', '')
+    pretty_targets_name = '%ss' % pretty_target_name
+    pretty_sources_name = '%ss' % pretty_source_name
     icon_name, dummy1, dummy2 = get_mapper_icons(type)
     color = 'black'
     
@@ -713,7 +717,7 @@
                 if request[v] == 'on':
                     sources_num += 1
         if sources_num < int(data['min_sources']) or sources_num > int(data['max_sources']):
-            return 'BAD: Invalid number of ' + data['pretty_sources_name'] + ' selected'
+            return 'BAD: Invalid number of %s selected' % data['pretty_sources_name']
         props = data['props']
         pass
     elif object_type == 'add_sources':
@@ -725,18 +729,18 @@
                 if request[v] == 'on':
                     sources_num += 1
         if sources_num == 0 or sources_num > len(data['new_sources']):
-            return 'BAD: Invalid number of ' + data['pretty_sources_name'] + ' selected'
+            return 'BAD: Invalid number of %s selected' % data['pretty_sources_name']
         pass
     
     if props != None:
         res = check_props(self, props, request)
         if res[0] == False:
-            return res[1] + ' ' + res[2]
+            return '%s %s' % (res[1], res[2])
     
     if content_props != None:
         res = check_props(self, content_props, request)
         if res[0] == False:
-            return res[1] + ' ' + res[2]
+            return '%s %s' % (res[1], res[2])
     
     return 'OK'
 def check_props(self, props, request):
@@ -753,7 +757,7 @@
                     try:
                         req_value = int(req_value)
                     except:
-                        msg = prop['pretty_name'] + ' is missing an integer value'
+                        msg = '%s is missing an integer value' % prop['pretty_name']
                         var_name = prop_name
                         valid = False
                         break
@@ -762,7 +766,8 @@
                     step = int(prop['validation']['step'])
                     r_val = (req_value / step) * step
                     if r_val > max or r_val < min:
-                        msg = prop['pretty_name'] + ' has to be within range ' + str(min) + ' - ' + str(max) + ' ' + prop['units']
+                        msg = '%s has to be within range %d-%d %s' \
+                          % (prop['pretty_name'], min, max, prop['units'])
                         var_name = prop_name
                         valid = False
                         break
@@ -770,7 +775,7 @@
                     try:
                         req_value = float(req_value)
                     except:
-                        msg = prop['pretty_name'] + ' is missing a float value'
+                        msg = '%s is missing a float value' % prop['pretty_name']
                         var_name = prop_name
                         valid = False
                         break
@@ -782,30 +787,33 @@
                         step = 0.000001
                     r_val = (req_value / step) * step
                     if r_val > max or r_val < min:
-                        msg = prop['pretty_name'] + ' has to be within range ' + str(min) + ' - ' + str(max) + ' ' + units
+                        msg = '%s has to be within range %s-%s %s' \
+                          % (prop['pretty_name'], min, max, units)
                         var_name = prop_name
                         valid = False
                         break
             elif prop['type'] == 'text':
                 if len(req_value) < int(prop['validation']['min_length']):
-                    msg = prop['pretty_name'] + ' has to have minimum length of ' + prop['validation']['min_length']
+                    msg = '%s has to have minimum length of %s' \
+                      % (prop['pretty_name'], prop['validation']['min_length'])
                     var_name = prop_name
                     valid = False
                     break
                 elif len(req_value) > int(prop['validation']['max_length']):
-                    msg = prop['pretty_name'] + ' has to have maximum length of ' + prop['validation']['max_length']
+                    msg = '%s has to have maximum length of %s' \
+                      % (prop['pretty_name'], prop['validation']['max_length'])
                     var_name = prop_name
                     valid = False
                     break
                 elif req_value in prop['validation']['reserved_words'].split(';') and req_value != '':
-                    msg = prop['pretty_name'] + ' contains reserved keyword. \nReserved keywords are ' + prop['validation']['reserved_words'].replace(';', ', ')
+                    msg = '%s contains reserved keyword. \nReserved keywords are %s' % (prop['pretty_name'], prop['validation']['reserved_words'].replace(';', ', '))
                     var_name = prop_name
                     valid = False
                     break
                 # check illegal chars
                 for ch in prop['validation']['illegal_chars']:
                     if ch in req_value and ch != '':
-                        msg = prop['pretty_name'] + ' contains illegal character. \nIllegal characters are ' + prop['validation']['illegal_chars'].replace(';', ', ')
+                        msg = '%s contains illegal characters. \nIllegal characters are %s' % (prop['pretty_name'],  prop['validation']['illegal_chars'].replace(';', ', '))
                         var_name = prop_name
                         valid = False
                         break
@@ -816,7 +824,7 @@
 
 def apply(self, ricci, storage_report, request):
     if validate(self, storage_report, request) != 'OK':
-        raise 'Internal error: input not validated!!!'
+        raise Exception, 'Internal error: input not validated!!!'
     
     session = request.SESSION
     
@@ -915,7 +923,7 @@
                     if node.nodeName == VARIABLE_TAG:
                         if node.getAttribute('mutable') == 'true':
                             var_name = node.getAttribute('name')
-                            req_name = 'content_variable_' + selected_content_id + '_' + var_name
+                            req_name = 'content_variable_%s_%s' % (selected_content_id, var_name)
                             if req_name in request:
                                 if selected_content_data['props'][req_name]['type'] == 'int':
                                     if selected_content_data['props'][req_name]['units'] != 'bytes':
@@ -1045,7 +1053,7 @@
                         if node.nodeName == VARIABLE_TAG:
                             if node.getAttribute('mutable') == 'true':
                                 var_name = node.getAttribute('name')
-                                req_name = 'content_variable_' + selected_content_id + '_' + var_name
+                                req_name = 'content_variable_%s_%s' % (selected_content_id, var_name)
                                 if req_name in request:
                                     if selected_content_data['props'][req_name]['type'] == 'int':
                                         if selected_content_data['props'][req_name]['units'] != 'bytes':
@@ -1308,10 +1316,10 @@
     
     
     if batch_id == '':
-        raise 'unsupported function'
+        raise Exception, 'unsupported function'
     else:
         invalidate_storage_report(request.SESSION, storagename)
-        return batch_id;
+        return batch_id
 
 
 def get_storage_batch_result(self, 
@@ -1328,7 +1336,7 @@
         # ricci down
         error   = True
         url     = url
-        msg     = 'Unable to contact ' + storagename
+        msg     = 'Unable to contact %s' % storagename
     else:
         batch = 'no batch'
         try:
@@ -1338,12 +1346,13 @@
         if batch == 'no batch':
             error = True
             url   = url
-            msg   = 'Ricci on ' + storagename + ' responded with error. No detailed info available.'
+            msg   = 'Ricci on %s responded with error. No detailed info available.' % storagename
         elif batch == None:
             # no such batch
             error     = False
             completed = True
-            url      += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
+            url       = '%s?%s=%s&%s=%s' \
+                % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
             msg       = 'No such batch'
         else:
             DEFAULT_ERROR = 'extract_module_status() failed'
@@ -1354,8 +1363,9 @@
                 pass
             if code == DEFAULT_ERROR:
                 error = True
-                url  += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg   = 'Ricci on ' + storagename + ' sent malformed response'
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg   = 'Ricci on %s sent a malformed response' % storagename
             elif code == -101 or code == -102:
                 # in progress
                 error     = False
@@ -1364,23 +1374,27 @@
             elif code == -103:
                 # module removed from scheduler
                 error = True
-                url  += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg   = 'Ricci on ' + storagename + ' removed request from scheduler. File bug report against ricci.' 
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg   = 'Ricci on %s removed request from scheduler. File bug report against ricci.' % storagename
             elif code == -104:
                 # module failure
                 error = True
-                url  += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg   = 'Ricci on ' + storagename + ' failed to execute storage module; reinstall it.'
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg   = 'Ricci on %s failed to execute storage module; reinstall it.' % storagename
             elif code == -2:
                 # API error
                 error = True
-                url  += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg   = 'Luci server used invalid API to communicate with ' + storagename + '. File a bug report against luci.'
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg   = 'Luci server used invalid API to communicate with %s. File a bug report against luci.' % storagename
             elif code == -1:
                 # undefined error
                 error = True
-                url  += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg   = 'Reason for failure (as reported by ' + storagename + '): ' + err_msg
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg   = 'Reason for failure (as reported by %s): %s' % (storagename, err_msg)
             elif code == 0:
                 # no error
                 error     = False
@@ -1393,41 +1407,49 @@
             elif code == 1:
                 # mid-air
                 error  = True
-                url   += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg    = 'Mid-Air collision (storage on ' + storagename + ' has changed since last probe). '
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg    = 'Mid-Air collision (storage on %s has changed since last probe).' % storagename
             elif code == 2:
                 # validation error
                 error  = True
-                url   += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
                 msg    = 'Validation error. File bug report against Luci.'
             elif code == 3:
                 # unmount error
                 error  = True
-                url   += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg    = 'Unmount failure: ' + err_msg
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg    = 'Unmount failure: %s' % err_msg
             elif code == 4:
                 # clvmd error
                 error  = True
-                url   += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg    = 'clvmd (clustered LVM daemon) is not running on ' + storagename
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg    = 'clvmd (clustered LVM daemon) is not running on %s' % storagename
             elif code == 5:
                 # not quorate
                 error  = True
-                url   += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
                 msg    = 'Cluster quorum is required, and yet cluster is not quorate. Start cluster, and try again.'
             elif code == 6:
                 # LVM cluster locking not enabled
                 error  = True
-                url   += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg    = 'LVM cluster locking is not enabled on ' + storagename
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg    = 'LVM cluster locking is not enabled on %s' % storagename
             elif code == 7:
                 # cluster not running
                 error  = True
-                url   += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
-                msg    = 'Cluster infrastructure is not running on ' + storagename
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
+                msg    = 'Cluster infrastructure is not running on %s' % storagename
             elif code > 8:
                 error  = True
-                url   += '?' + STONAME + '=' + storagename + '&' + PAGETYPE + '=' + STORAGE
+                url   = '%s?%s=%s&%s=%s' \
+                    % (index_html_URL, STONAME, storagename, PAGETYPE, STORAGE)
                 msg    = err_msg
     
     return {'error'        : error,
@@ -1452,21 +1474,21 @@
             if node.nodeName == 'module':
                 module_r = node
     if module_r == None:
-        raise 'missing <module/> in <batch/>'
+        raise Exception, 'missing <module/> in <batch/>'
     resp_r = None
     for node in module_r.childNodes:
         if node.nodeType == xml.dom.Node.ELEMENT_NODE:
             if node.nodeName == RESPONSE_TAG:
                 resp_r = node
     if resp_r == None:
-        raise 'missing <response/> in <module/>'
+        raise Exception, 'missing <response/> in <module/>'
     fr_r = None
     for node in resp_r.childNodes:
         if node.nodeType == xml.dom.Node.ELEMENT_NODE:
             if node.nodeName == FUNC_RESP_TAG:
                 fr_r = node
     if fr_r == None:
-        raise 'missing <function_response/> in <response/>'
+        raise Exception, 'missing <function_response/> in <response/>'
     vars = {}
     for node in fr_r.childNodes:
         try:
@@ -1489,26 +1511,24 @@
         bd_path     = bd.getAttribute('path')
         mapper_type = bd.getAttribute('mapper_type')
         mapper_id   = bd.getAttribute('mapper_id')
-    
-    url  = main_url + '?'
-    url += STONAME + '=' + storagename
+
+    url_list = list()
+    url_list.append('%s?%s=%s' % (main_url, STONAME, storagename))
     if mapper_type != '':
-        url += '&' + PT_MAPPER_TYPE + '=' + mapper_type
+        url_list.append('&%s=%s' % (PT_MAPPER_TYPE, mapper_type))
     if mapper_id != '':
-        url += '&' + PT_MAPPER_ID + '=' + mapper_id
+        url_list.append('&%s=%s' % (PT_MAPPER_ID, mapper_id))
     if bd_path != '':
-        url += '&' + PT_PATH + '=' + bd_path
+        url_list.append('&%s=%s' % (PT_PATH, bd_path))
     
     if mapper_type == '':
-        url += '&' + PAGETYPE + '=' + STORAGE
+        url_list.append('&%s=%s' % (PAGETYPE, STORAGE))
     elif bd_path != '':
-        url += '&' + PAGETYPE + '=' + VIEW_BD
+        url_list.append('&%s=%s' % (PAGETYPE, VIEW_BD))
     else:
-        url += '&' + PAGETYPE + '=' + VIEW_MAPPER
-    
-    return url
-                        
-                        
+        url_list.append('&%s=%s' % (PAGETYPE, VIEW_MAPPER))
+
+    return ''.join(url_list)
 
 
 def get_bd_data_internal(session, bd_xml, mapper_xml):
@@ -1527,11 +1547,12 @@
     color = 'black'
     
     size_in_units, units = bytes_to_value_units(props['size']['value'])
-    description = str(size_in_units) + ' ' + units
-    
+
+    description = None
     if mapper_type == MAPPER_SYS_TYPE:
         if 'scsi_id' in props:
-            description += ', SCSI ID = ' + props['scsi_id']['value']
+            description = '%s %s, SCSI ID = %s' \
+                % (size_in_units, units, props['scsi_id']['value'])
             icon_name = 'icon_bd_scsi.png'
     elif mapper_type == MAPPER_VG_TYPE:
         pretty_name = props['lvname']['value']
@@ -1539,13 +1560,17 @@
         if props['snapshot']['value'] == 'true':
             icon_name = 'icon_bd_LV_snapshot.png'
             pretty_type = 'Snapshot'
+
+    if description is None:
+        description = '%s %s' % (size_in_units, units)
     
     if bd_xml.nodeName == BD_TEMPLATE:
-        path = 'unused_segment'
         if mapper_type == MAPPER_PT_TYPE:
-            path += '_' + props['partition_begin']['value']
-            path += '_' + props['partition_type']['value']
-        pretty_type = 'New ' + pretty_type
+            path = 'unused_segment_%s_%s' \
+                % (props['partition_begin']['value'], props['partition_type']['value'])
+        else:
+            path = 'unused_segment'
+        pretty_type = 'New %s' % pretty_type
         pretty_name = 'Unused Space'
         data['new'] = True
     else:
@@ -1574,7 +1599,8 @@
     actions = []
     if removable:
         action = {'name' : 'Remove',
-                  'msg'  : 'Are you sure you want to remove ' + pretty_type + ' \\\'' + pretty_name + '\\\'?',
+                  'msg'  : 'Are you sure you want to remove %s \\\'%s\\\'?' \
+                     % (pretty_type, pretty_name),
                   'link' : ''}
         actions.append(action)
     if data['mapper_type'] == MAPPER_VG_TYPE and not data['new']:
@@ -1594,7 +1620,11 @@
                 if pretty_name in origs:
                     action = {'name' : 'Take Snapshot',
                               'msg'  : '', 
-                              'link' : './?' + PAGETYPE + '=' + VIEW_BD + '&' + PT_MAPPER_ID + '=' + data['mapper_id'] + '&' + PT_MAPPER_TYPE + '=' + data['mapper_type'] + '&' + PT_PATH + '=' + snap_lv['path']}
+                              'link' : './?%s=%s&%s=%s&%s=%s&%s=%s' \
+                                % (PAGETYPE, VIEW_BD, \
+                                   PT_MAPPER_ID, data['mapper_id'], \
+                                   PT_MAPPER_TYPE, data['mapper_type'], \
+                                   PT_PATH, snap_lv['path'])}
                     actions.append(action)
     data['actions'] = actions
     
@@ -1675,10 +1705,13 @@
         elif type == VARIABLE_TYPE_LIST_INT or type == VARIABLE_TYPE_LIST_STR:
             d_type = 'label'
             d_value = ''
+            d_val_list = list()
             for node in var.childNodes:
                 if node.nodeType == xml.dom.Node.ELEMENT_NODE:
                     if node.nodeName == VARIABLE_TYPE_LISTENTRY:
-                        d_value += node.getAttribute('value') + ', '
+                        d_val_list.append(node.getAttribute('value'))
+                        d_val_list.append(', ')
+            d_value = ''.join(d_val_list)
             if d_value != '':
                 d_value = d_value[:len(d_value)-2]
         elif type == 'hidden':
@@ -1811,7 +1844,7 @@
         old_props = d['props']
         new_props = {}
         for name in old_props:
-            new_name = 'content_variable_' + d['id'] + '_' + name
+            new_name = 'content_variable_%s_%s' % (d['id'], name)
             new_props[new_name] = old_props[name]
             new_props[new_name]['name'] = new_name
         d['props'] = new_props
@@ -1852,14 +1885,14 @@
     id = c_xml.getAttribute('type')
     if id == CONTENT_FS_TYPE:
         fs_type = c_xml.getAttribute('fs_type')
-        id += '_' + fs_type
+        id = '%s_%s' % (id, fs_type)
         name = get_pretty_fs_name(fs_type)
     elif id == CONTENT_NONE_TYPE:
         name = 'Empty'
     elif id == CONTENT_MS_TYPE:
         mapper_type = c_xml.getAttribute('mapper_type')
         mapper_id = c_xml.getAttribute('mapper_id')
-        id += '_' + mapper_type + '_' + mapper_id.replace(':', '__colon__')
+        id = '%s_%s_%s' % (id, mapper_type, mapper_id.replace(':', '__colon__'))
         if mapper_type == MAPPER_SYS_TYPE:
             pass
         elif mapper_type == MAPPER_VG_TYPE:
@@ -1877,7 +1910,7 @@
         elif mapper_type == MAPPER_iSCSI_TYPE:
             pass
         else:
-            name = 'Source of ' + mapper_type
+            name = 'Source of %s' % mapper_type
     elif id == CONTENT_HIDDEN_TYPE:
         name = 'Extended Partition'
     else:
@@ -1933,7 +1966,7 @@
                  'color_css'  : '#0192db', 
                  'description': mapper_data['pretty_targets_name']}
     if mapper_data['mapper_type'] == MAPPER_PT_TYPE:
-        upper_cyl['description'] = 'Physical ' + upper_cyl['description']
+        upper_cyl['description'] = 'Physical %s' % upper_cyl['description']
     
     offset = 0
     for t in mapper_data['targets_all']:
@@ -1963,7 +1996,7 @@
     
     # build highlights
     for d in upper_cyl['cyls']:
-        h_id = d['id'] + '_selected'
+        h_id = '%s_selected' % d['id']
         beg = d['beg']
         end = d['end']
         upper_cyl['highs'].append({'beg'  : beg, 
@@ -1980,22 +2013,22 @@
         if bd['mapper_type'] == MAPPER_VG_TYPE and not bd['new']:
             if 'origin' in bd:
                 # snapshot
-                snap_id = bd['path'] + '_snapshot'
+                snap_id = '%s_snapshot' % bd['path']
                 upper_cyl['highs'].append({'beg'  : beg, 
                                            'end'  : end, 
                                            'id'   : snap_id,
                                            'type' : 'snapshot'})
                 orig = bd['origin']
-                high_list[d['id']].append(orig['path'] + '_origin')
+                high_list[d['id']].append('%s_origin' % orig['path'])
                 high_list[d['id']].append(snap_id)
             if 'snapshots' in bd:
                 # origin
                 upper_cyl['highs'].append({'beg'  : beg, 
                                            'end'  : end, 
-                                           'id'   : bd['path'] + '_origin',
+                                           'id'   : '%s_origin' % bd['path'],
                                            'type' : 'snapshot-origin'})
                 for snap in bd['snapshots']:
-                    high_list[d['id']].append(snap['path'] + '_snapshot')
+                    high_list[d['id']].append('%s_snapshot' % snap['path'])
                     
         
         
@@ -2025,7 +2058,7 @@
         offset = end
     
     if mapper_data['mapper_type'] == MAPPER_PT_TYPE:
-        lower_cyl['description'] = 'Logical ' + mapper_data['pretty_targets_name']
+        lower_cyl['description'] = 'Logical %s' % mapper_data['pretty_targets_name']
         lower_cyl['cyls']        = []
         lower_cyl['color']       = 'blue'
         lower_cyl['color_css']   = '#0192db'
@@ -2065,7 +2098,7 @@
     
     # build highlights
     for d in lower_cyl['cyls']:
-        h_id = d['id'] + '_selected'
+        h_id = '%s_selected' % d['id']
         beg = d['beg']
         end = d['end']
         lower_cyl['highs'].append({'beg'  : beg, 
--- conga/luci/site/luci/Extensions/Variable.py	2006/10/16 07:39:27	1.4
+++ conga/luci/site/luci/Extensions/Variable.py	2007/05/03 20:16:38	1.4.8.1
@@ -1,15 +1,12 @@
-
 import xml.dom
 
-from ricci_defines import *
-
-
+from ricci_defines import VARIABLE_TAG, VARIABLE_TYPE_BOOL, \
+    VARIABLE_TYPE_FLOAT, VARIABLE_TYPE_INT, VARIABLE_TYPE_INT_SEL, \
+    VARIABLE_TYPE_LISTENTRY, VARIABLE_TYPE_LIST_INT, VARIABLE_TYPE_LIST_STR, \
+    VARIABLE_TYPE_LIST_XML, VARIABLE_TYPE_STRING, VARIABLE_TYPE_STRING_SEL, \
+    VARIABLE_TYPE_XML
 
 def parse_variable(node):
     if node.nodeType != xml.dom.Node.ELEMENT_NODE:
-        raise 'not a variable'
+        raise Exception, 'not a variable'
     if node.nodeName != str(VARIABLE_TAG):
-        raise 'not a variable'
+        raise Exception, 'not a variable'
     
     attrs_dir = {}
     attrs = node.attributes
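
Replacing 'from ricci_defines import *' with an explicit import list makes
the module's dependencies auditable and avoids silent shadowing. A minimal
sketch of the failure mode star imports invite (module and variable names
are illustrative, not from the patch):

    # a.py
    DEBUG = True
    # b.py
    DEBUG = False
    # consumer.py
    from a import *
    from b import *   # silently rebinds DEBUG
    print DEBUG       # -> False, with no hint of where it came from
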
@@ -18,9 +15,9 @@
         attrValue = attrNode.nodeValue
         attrs_dir[attrName.strip()] = attrValue
     if ('name' not in attrs_dir) or ('type' not in attrs_dir):
-        raise 'incomplete variable'
+        raise Exception, 'incomplete variable'
     if (attrs_dir['type'] != VARIABLE_TYPE_LIST_INT and attrs_dir['type'] != VARIABLE_TYPE_LIST_STR and attrs_dir['type'] != VARIABLE_TYPE_LIST_XML and attrs_dir['type'] != VARIABLE_TYPE_XML) and ('value' not in attrs_dir):
-        raise 'incomplete variable'
+        raise Exception, 'incomplete variable'
     
     mods = {}
     for mod in attrs_dir:
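
The 'raise Exception, ...' conversions here matter for more than style: a
bare Python 2 string exception does not derive from Exception, so any
'except Exception' handler misses it. A small sketch (Python 2 only;
string exceptions were removed entirely in later versions):

    def old_style():
        raise 'not a variable'      # string exception

    try:
        old_style()
    except Exception, e:
        print 'caught:', e          # never reached for a string exception
    except:
        print 'only a bare except sees it'
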
@@ -42,7 +39,7 @@
             else:
                 continue
             if v == None:
-                raise 'invalid listentry'
+                raise Exception, 'invalid listentry'
             value.append(v)
         return VariableList(attrs_dir['name'], value, mods, VARIABLE_TYPE_LIST_STR)
     elif attrs_dir['type'] == VARIABLE_TYPE_LIST_XML:
@@ -61,7 +58,7 @@
     elif attrs_dir['type'] == VARIABLE_TYPE_INT_SEL:
         value = int(attrs_dir['value'])
         if 'valid_values' not in mods:
-            raise 'missing valid_values'
+            raise Exception, 'missing valid_values'
     elif attrs_dir['type'] == VARIABLE_TYPE_FLOAT:
         value = float(attrs_dir['value'])
     elif attrs_dir['type'] == VARIABLE_TYPE_STRING:
@@ -69,11 +66,11 @@
     elif attrs_dir['type'] == VARIABLE_TYPE_STRING_SEL:
         value = attrs_dir['value']
         if 'valid_values' not in mods:
-            raise 'missing valid_values'
+            raise Exception, 'missing valid_values'
     elif attrs_dir['type'] == VARIABLE_TYPE_BOOL:
         value = (attrs_dir['value'] == 'true')
     else:
-        raise 'invalid variable'
+        raise Exception, 'invalid variable'
     
     return Variable(attrs_dir['name'], value, mods)
 
@@ -85,7 +82,7 @@
         self.__name = str(name)
         self.__mods = mods
         self.set_value(value)
-    
+
     def get_name(self):
         return self.__name
     
@@ -105,7 +102,7 @@
             self.__value = float(value)
             
         elif self.__is_list(value):
-            raise "lists not implemented"
+            raise Exception, "lists not implemented"
             if self.__is_int(value[0]):
                 self.__type = VARIABLE_TYPE_LIST_INT
                 self.__value = value
@@ -113,7 +110,7 @@
                 self.__type = VARIABLE_TYPE_LIST_STR
                 self.__value = value
             else:
-                raise "not valid list type"
+                raise Exception, "not valid list type"
         elif self.__is_xml(value):
             self.__type = VARIABLE_TYPE_XML
             self.__value = value
@@ -151,7 +148,7 @@
             else:
                 elem.setAttribute('value', str(self.__value))
         else:
-            raise "lists not implemented"
+            raise Exception, "lists not implemented"
             l = self.__value
             for i in range(len(l)):
                 x = l[i]
@@ -176,7 +173,7 @@
             elif self.__is_string(value[0]):
                 return VARIABLE_TYPE_LIST_STR
             else:
-                raise "not valid list type"
+                raise Exception, "not valid list type"
         elif self.__is_xml(value):
             return VARIABLE_TYPE_XML
         else:
@@ -229,9 +226,9 @@
     
     def __init__(self, name, value, mods, list_type):
         if list_type != VARIABLE_TYPE_LIST_STR and list_type != VARIABLE_TYPE_LIST_XML:
-            raise 'invalid list type'
+            raise Exception, 'invalid list type'
         #if ! self.__is_list(value):
-        #    raise 'value not a list'
+        #    raise Exception, 'value not a list'
         self.__name = name
         self.__mods = mods
         self.__type = list_type
@@ -244,7 +241,7 @@
     def get_value(self):
         return self.__value
     def set_value(self, value):
-        raise 'VariableList.set_value() not implemented'
+        raise Exception, 'VariableList.set_value() not implemented'
     
     def type(self):
         return self.__type
--- conga/luci/site/luci/Extensions/cluster_adapters.py	2007/05/03 19:51:21	1.255
+++ conga/luci/site/luci/Extensions/cluster_adapters.py	2007/05/03 20:16:38	1.255.2.1
@@ -1,111 +1,37 @@
-import socket
-from ModelBuilder import ModelBuilder
 from xml.dom import minidom
 import AccessControl
 from conga_constants import *
-from ricci_bridge import *
+import RicciQueries as rq
 from ricci_communicator import RicciCommunicator, RicciError, batch_status, extract_module_status
-import time
-import Products.ManagedSystem
-from Products.Archetypes.utils import make_uuid
-from Ip import Ip
-from Clusterfs import Clusterfs
-from Fs import Fs
-from FailoverDomain import FailoverDomain
-from FailoverDomainNode import FailoverDomainNode
-from RefObject import RefObject
-from ClusterNode import ClusterNode
-from NFSClient import NFSClient
-from NFSExport import NFSExport
-from Service import Service
-from Lockserver import Lockserver
-from Netfs import Netfs
-from Apache import Apache
-from MySQL import MySQL
-from Postgres8 import Postgres8
-from Tomcat5 import Tomcat5
-from OpenLDAP import OpenLDAP
-from Vm import Vm
-from FenceXVMd import FenceXVMd
-from Script import Script
-from Samba import Samba
-from LVM import LVM
-from QuorumD import QuorumD
-from Heuristic import Heuristic
-from clusterOS import resolveOSType
-from Fence import Fence
-from Method import Method
-from Totem import Totem
-from Device import Device
-from FenceHandler import validateNewFenceDevice, FENCE_OPTS, validateFenceDevice, validate_fenceinstance
-from GeneralError import GeneralError
-from homebase_adapters import manageCluster, createClusterSystems, havePermCreateCluster, setNodeFlag, delNodeFlag, userAuthenticated, getStorageNode, getClusterNode, delCluster, parseHostForm
+
+from ClusterModel.ModelBuilder import ModelBuilder
+from ClusterModel.FailoverDomain import FailoverDomain
+from ClusterModel.FailoverDomainNode import FailoverDomainNode
+from ClusterModel.RefObject import RefObject
+from ClusterModel.ClusterNode import ClusterNode
+from ClusterModel.Service import Service
+from ClusterModel.Lockserver import Lockserver
+from ClusterModel.Vm import Vm
+from ClusterModel.FenceXVMd import FenceXVMd
+from ClusterModel.QuorumD import QuorumD
+from ClusterModel.Heuristic import Heuristic
+from ClusterModel.Fence import Fence
+from ClusterModel.Method import Method
+from ClusterModel.GeneralError import GeneralError
+
+from HelperFunctions import resolveOSType
 from LuciSyslog import LuciSyslog
-from system_adapters import validate_svc_update
+from ResourceHandler import create_resource
+from FenceHandler import validateNewFenceDevice, FENCE_OPTS, validateFenceDevice, validate_fenceinstance, FD_VAL_FAIL, FD_VAL_SUCCESS
 
-#Policy for showing the cluster chooser menu:
-#1) If there are no clusters in the ManagedClusterSystems
-#folder, then only the admin user may see this menu, and
-#the configure option should not be displayed.
-#2)If there are clusters in the ManagedClusterSystems,
-#then only display chooser if the current user has
-#permissions on at least one. If the user is admin, show ALL clusters
+from system_adapters import validate_svc_update
+from homebase_adapters import manageCluster, createClusterSystems, havePermCreateCluster, setNodeFlag, delNodeFlag, userAuthenticated, getStorageNode, getClusterNode, delCluster, parseHostForm
 
 try:
 	luci_log = LuciSyslog()
 except:
 	pass
 
-def get_fsid_list(model):
-	obj_list = model.searchObjectTree('fs')
-	obj_list.extend(model.searchObjectTree('clusterfs'))
-	return map(lambda x: x.getAttribute('fsid') and int(x.getAttribute('fsid')) or 0, obj_list)
-
-def fsid_is_unique(model, fsid):
-	fsid_list = get_fsid_list(model)
-	return fsid not in fsid_list
-
-def generate_fsid(model, name):
-	import binascii
-	from random import random
-	fsid_list = get_fsid_list(model)
-
-	fsid = binascii.crc32(name) & 0xffff
-	dupe = fsid in fsid_list
-	while dupe is True:
-		fsid = (fsid + random.randrange(1, 0xfffe)) & 0xffff
-		dupe = fsid in fsid_list
-	return fsid
-
-def buildClusterCreateFlags(self, batch_map, clusterName):
-	path = str(CLUSTER_FOLDER_PATH + clusterName)
-
-	try:
-		clusterfolder = self.restrictedTraverse(path)
-	except Exception, e:
-		luci_log.debug_verbose('buildCCF0: no cluster folder at %s' % path)
-		return None
-
-	for key in batch_map.keys():
-		try:
-			key = str(key)
-			batch_id = str(batch_map[key])
-			#This suffix needed to avoid name collision
-			objname = str(key + "____flag")
-
-			clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-			#now designate this new object properly
-			objpath = str(path + "/" + objname)
-			flag = self.restrictedTraverse(objpath)
-
-			flag.manage_addProperty(BATCH_ID, batch_id, "string")
-			flag.manage_addProperty(TASKTYPE, CLUSTER_ADD, "string")
-			flag.manage_addProperty(FLAG_DESC, "Creating node " + key + " for cluster " + clusterName, "string")
-			flag.manage_addProperty(LAST_STATUS, 0, "int")
-		except Exception, e:
-			luci_log.debug_verbose('buildCCF1: error creating flag for %s: %s' \
-				% (key, str(e)))
-
 def parseClusterNodes(self, request, cluster_os):
 	check_certs = False
 	try:
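
As an aside, the removed generate_fsid above carried a latent bug: it did
'from random import random' (binding the function) but then called
random.randrange(), which would raise AttributeError on the first fsid
collision. A corrected sketch, reusing the get_fsid_list helper removed
alongside it:

    import binascii, random

    def generate_fsid(model, name):
        fsid_list = get_fsid_list(model)
        fsid = binascii.crc32(name) & 0xffff
        while fsid in fsid_list:
            fsid = (fsid + random.randrange(1, 0xfffe)) & 0xffff
        return fsid
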
@@ -213,7 +139,7 @@
 				except Exception, e:
 					luci_log.debug_verbose('PCN3: %s: %s' % (cur_host, str(e)))
 
-				errors.append('%s reports it is a member of cluster \"%s\"' \
+				errors.append('%s reports it is a member of cluster "%s"' \
 					% (cur_host, cur_cluster_name))
 				luci_log.debug_verbose('PCN4: %s: already in %s cluster' \
 					% (cur_host, cur_cluster_name))
@@ -307,7 +233,7 @@
 		return (False, { 'errors': errors, 'messages': messages })
 
 	node_list = add_cluster['nodes'].keys()
-	batchNode = createClusterBatch(add_cluster['cluster_os'],
+	batchNode = rq.createClusterBatch(add_cluster['cluster_os'],
 					clusterName,
 					clusterName,
 					node_list,
@@ -350,7 +276,7 @@
 		except Exception, e:
 			luci_log.debug_verbose('validateCreateCluster0: %s: %s' \
 				% (i, str(e)))
-			errors.append('An error occurred while attempting to add cluster node \"%s\"' % i)
+			errors.append('An error occurred while attempting to add cluster node "%s"' % i)
 			if len(batch_id_map) == 0:
 				request.SESSION.set('create_cluster', add_cluster)
 				return (False, { 'errors': errors, 'messages': messages })
@@ -358,9 +284,11 @@
 
 	buildClusterCreateFlags(self, batch_id_map, clusterName)
 	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clusterName + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (request['URL'], CLUSTER_CONFIG, clusterName))
 
 def validateAddClusterNode(self, request):
+	import time
 	try:
 		request.SESSION.delete('add_node')
 	except:
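
The redirect rewrites throughout this patch all follow the same shape;
shown side by side as a sketch, with names taken from the surrounding
code:

    # before: concatenation; easy to drop a separator or a str() cast
    url = request['URL'] + "?pagetype=" + CLUSTER_CONFIG + \
          "&clustername=" + clusterName + '&busyfirst=true'
    # after: one format string, arguments visible at a glance
    url = '%s?pagetype=%s&clustername=%s&busyfirst=true' \
        % (request['URL'], CLUSTER_CONFIG, clusterName)
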
@@ -399,7 +327,7 @@
 	if cluster_os is None:
 		cluster_folder = None
 		try:
-			cluster_folder = self.restrictedTraverse(str(CLUSTER_FOLDER_PATH + clusterName))
+			cluster_folder = self.restrictedTraverse('%s%s' % (CLUSTER_FOLDER_PATH, clusterName))
 			if not cluster_folder:
 				raise Exception, 'cluster DB object is missing'
 		except Exception, e:
@@ -509,7 +437,7 @@
 				except Exception, e:
 					luci_log.debug_verbose('VACN6: %s: %s' % (cur_host, str(e)))
 
-				errors.append('%s reports it is already a member of cluster \"%s\"' % (cur_host, cur_cluster_name))
+				errors.append('%s reports it is already a member of cluster "%s"' % (cur_host, cur_cluster_name))
 				luci_log.debug_verbose('VACN7: %s: already in %s cluster' \
 					% (cur_host, cur_cluster_name))
 				continue
@@ -581,8 +509,7 @@
 			i = system_list[x]
 
 			try:
-				batch_node = addClusterNodeBatch(cluster_os,
-								clusterName,
+				batch_node = rq.addClusterNodeBatch(clusterName,
 								True,
 								True,
 								shared_storage,
@@ -603,7 +530,7 @@
 				except Exception, e:
 					luci_log.debug_verbose('VACN12: %s: %s' % (cur_host, str(e)))
 
-				errors.append('Unable to initiate cluster join for %s' % cur_host)
+				errors.append('Unable to initiate cluster join for node "%s"' % cur_host)
 				luci_log.debug_verbose('VACN13: %s: %s' % (cur_host, str(e)))
 				continue
 
@@ -625,7 +552,7 @@
 		if not conf_str:
 			raise Exception, 'Unable to save the new cluster model.'
 
-		batch_number, result = setClusterConf(cluster_ricci, conf_str)
+		batch_number, result = rq.setClusterConf(cluster_ricci, conf_str)
 		if not batch_number or not result:
 			raise Exception, 'batch or result is None'
 	except Exception, e:
@@ -638,7 +565,7 @@
 	# abort the whole process.
 	try:
 		while True:
-			batch_ret = checkBatch(cluster_ricci, batch_number)
+			batch_ret = rq.checkBatch(cluster_ricci, batch_number)
 			code = batch_ret[0]
 			if code == True:
 				break
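
The loop above polls the batch until ricci reports completion. Its rough
shape, condensed (checkBatch's failure values and the poll interval are
not shown in this hunk, so both are assumptions here):

    while True:
        code = rq.checkBatch(cluster_ricci, batch_number)[0]
        if code is True:
            break                                  # batch finished
        if code is False:
            raise Exception, 'batch failed'        # assumed failure path
        time.sleep(10)                             # assumed poll interval
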
@@ -696,7 +623,7 @@
 
 		if not success:
 			incomplete = True
-			errors.append('An error occurred while attempting to add cluster node \"%s\"' % cur_host)
+			errors.append('An error occurred while attempting to add cluster node "%s"' % cur_host)
 
 	if incomplete or len(errors) > 0:
 		request.SESSION.set('add_node', add_cluster)
@@ -705,7 +632,8 @@
 	buildClusterCreateFlags(self, batch_id_map, clusterName)
 
 	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clusterName + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (request['URL'], CLUSTER_CONFIG, clusterName))
 
 def validateServiceAdd(self, request):
 	errors = list()
@@ -770,12 +698,10 @@
 		try:
 			res_type = dummy_form['type'].strip()
 			if not res_type:
-				raise Exception, 'no resource type was given'
-			if not res_type in resourceAddHandler:
-				raise Exception, 'invalid resource type: %s' % res_type
+				raise Exception, 'no resource type'
 		except Exception, e:
 			luci_log.debug_verbose('vSA3: %s' % str(e))
-			return (False, {'errors': [ 'An invalid resource type was specified' ]})
+			return (False, {'errors': [ 'No resource type was specified' ]})
 
 		try:
 			if res_type == 'ip':
@@ -790,7 +716,7 @@
 				resObj = RefObject(newRes)
 				resObj.setRef(newRes.getName())
 			else:
-				resObj = resourceAddHandler[res_type](request, dummy_form)[0]
+				resObj = create_resource(res_type, dummy_form, model)
 		except Exception, e:
 			resObj = None
 			luci_log.debug_verbose('vSA4: type %s: %s' % (res_type, str(e)))
@@ -817,7 +743,7 @@
 			recovery = None
 		else:
 			if recovery != 'restart' and recovery != 'relocate' and recovery != 'disable':
-				errors.append('You entered an invalid recovery option: \"%s\" Valid options are \"restart\" \"relocate\" and \"disable\"')
+				errors.append('You entered an invalid recovery option: "%s". Valid options are "restart", "relocate" and "disable".' % recovery)
 	except:
 		recovery = None
 
@@ -919,7 +845,7 @@
 			luci_log.debug_verbose('vAS7: missing ricci hostname')
 			raise Exception, 'unknown ricci agent hostname'
 
-		batch_number, result = setClusterConf(rc, str(conf))
+		batch_number, result = rq.setClusterConf(rc, str(conf))
 		if batch_number is None or result is None:
 			luci_log.debug_verbose('vAS8: missing batch_number or result')
 			raise Exception, 'unable to save the new cluster configuration.'
@@ -929,14 +855,15 @@
 
 	try:
 		if request.form['action'] == 'edit':
-			set_node_flag(self, clustername, ragent, str(batch_number), SERVICE_CONFIG, "Configuring service \'%s\'" % service_name)
+			set_node_flag(self, clustername, ragent, str(batch_number), SERVICE_CONFIG, 'Configuring service "%s"' % service_name)
 		else:
-			set_node_flag(self, clustername, ragent, str(batch_number), SERVICE_ADD, "Adding new service \'%s\'" % service_name)
+			set_node_flag(self, clustername, ragent, str(batch_number), SERVICE_ADD, 'Creating service "%s"' % service_name)
 	except Exception, e:
 		luci_log.debug_verbose('vAS10: failed to set flags: %s' % str(e))
 
 	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + SERVICES + "&clustername=" + clustername + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (request['URL'], SERVICES, clustername))
 
 def validateResourceAdd(self, request):
 	try:
@@ -944,31 +871,41 @@
 		if not res_type:
 			raise KeyError, 'type is blank'
 	except Exception, e:
-		luci_log.debug_verbose('resourceAdd: type is blank')
+		luci_log.debug_verbose('VRA0: type is blank')
 		return (False, {'errors': ['No resource type was given.']})
 
+	try:
+		model = request.SESSION.get('model')
+	except Exception, e:
+		luci_log.debug_verbose('VRA1: no model: %s' % str(e))
+		return None
+
 	errors = list()
 	try:
-		res = resourceAddHandler[res_type](request)
-		if res is None or res[0] is None or res[1] is None:
-			if res and res[2]:
-				errors.extend(res[2])
-			raise Exception, 'An error occurred while adding this resource'
-		model = res[1]
-		newres = res[0]
-		addResource(self, request, model, newres, res_type)
+		res = create_resource(res_type, request.form, model)
 	except Exception, e:
-		if len(errors) < 1:
-			errors.append('An error occurred while adding this resource')
+		errors.extend(e)
+
+	if len(errors) < 1:
+		try:
+			addResource(self, request, model, res)
+		except Exception, e:
+			errors.append('An error occurred while adding resource "%s"' \
+				% res.getName())
+	if len(errors) > 0:
+		errors.append('An error occurred while adding this resource')
 		luci_log.debug_verbose('resource error: %s' % str(e))
 		return (False, {'errors': errors})
 
 	return (True, {'messages': ['Resource added successfully']})
 
 ## Cluster properties form validation routines
 
 # rhel5 cluster version
 def validateMCastConfig(model, form):
+	import socket
 	try:
 		gulm_ptr = model.getGULMPtr()
 		if gulm_ptr:
@@ -1128,7 +1065,7 @@
 			if hint < 1:
 				raise ValueError, 'Heuristic interval values must be greater than 0'
 		except KeyError, e:
-			errors.append('No interval was given for heuristic #%d' % i + 1)
+			errors.append('No interval was given for heuristic %d' % (i + 1))
 		except ValueError, e:
 			errors.append('An invalid interval was given for heuristic %d: %s' \
 				% (i + 1, str(e)))
@@ -1232,9 +1169,7 @@
 
 	totem = model.getTotemPtr()
 	if totem is None:
-		cp = model.getClusterPtr()
-		totem = Totem()
-		cp.addChild(totem)
+		totem = model.addTotemPtr()
 
 	try:
 		token = form['token'].strip()
@@ -1491,7 +1426,7 @@
       % clustername)
 
   if rc:
-    batch_id, result = setClusterConf(rc, str(conf_str))
+    batch_id, result = rq.setClusterConf(rc, str(conf_str))
     if batch_id is None or result is None:
       luci_log.debug_verbose('VCC7: setCluserConf: batchid or result is None')
       errors.append('Unable to propagate the new cluster configuration for %s' \
@@ -1508,7 +1443,8 @@
     return (retcode, {'errors': errors, 'messages': messages})
 
   response = request.RESPONSE
-  response.redirect(request['URL'] + "?pagetype=" + CLUSTER_CONFIG + "&clustername=" + clustername + '&busyfirst=true')
+  response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+	% (request['URL'], CLUSTER_CONFIG, clustername))
 
 def validateFenceAdd(self, request):
   errors = list()
@@ -1580,7 +1516,7 @@
         % clustername)
 
     if rc:
-      batch_id, result = setClusterConf(rc, str(conf_str))
+      batch_id, result = rq.setClusterConf(rc, str(conf_str))
       if batch_id is None or result is None:
         luci_log.debug_verbose('VFA: setCluserConf: batchid or result is None')
         errors.append('Unable to propagate the new cluster configuration for %s' \
@@ -1588,11 +1524,11 @@
       else:
         try:
           set_node_flag(self, clustername, rc.hostname(), batch_id,
-            CLUSTER_CONFIG, 'Adding new fence device \"%s\"' % retobj)
+            CLUSTER_CONFIG, 'Adding new fence device "%s"' % retobj)
         except:
           pass
 
-    response.redirect(request['URL'] + "?pagetype=" + FENCEDEV + "&clustername=" + clustername + "&fencename=" + retobj + '&busyfirst=true')
+    response.redirect('%s?pagetype=%s&clustername=%s&fencename=%s&busyfirst=true' % (request['URL'], FENCEDEV, clustername, retobj))
   else:
     errors.extend(retobj)
     return (False, {'errors': errors, 'messages': messages})
@@ -1672,7 +1608,7 @@
           % clustername)
 
     if rc:
-      batch_id, result = setClusterConf(rc, str(conf_str))
+      batch_id, result = rq.setClusterConf(rc, str(conf_str))
       if batch_id is None or result is None:
         luci_log.debug_verbose('VFA: setClusterConf: batchid or result is None')
         errors.append('Unable to propagate the new cluster configuration for %s' \
@@ -1680,11 +1616,11 @@
       else:
         try:
           set_node_flag(self, clustername, rc.hostname(), batch_id,
-            CLUSTER_CONFIG, 'Updating fence device \"%s\"' % retobj)
+            CLUSTER_CONFIG, 'Updating fence device "%s"' % retobj)
         except:
           pass
 
-    response.redirect(request['URL'] + "?pagetype=" + FENCEDEV + "&clustername=" + clustername + "&fencename=" + retobj + '&busyfirst=true')
+    response.redirect('%s?pagetype=%s&clustername=%s&fencename=%s&busyfirst=true' % (request['URL'], FENCEDEV, clustername, retobj))
   else:
     errors.extend(retobj)
     return (False, {'errors': errors, 'messages': messages})
@@ -1878,7 +1814,7 @@
 
 					# Add back the tags under the method block
 					# for the fence instance
-					if fence_type == 'fence_manual':
+					if type == 'fence_manual':
 						instance_list.append({'name': fencedev_name, 'nodename': nodename })
 					else:
 						instance_list.append({'name': fencedev_name })
@@ -1895,7 +1831,7 @@
 			# so the appropriate XML goes into the <method> block inside
 			# <node><fence>. All we need for that is the device name.
 			if not 'sharable' in fence_form:
-				if fence_type == 'fence_manual':
+				if type == 'fence_manual':
 					instance_list.append({'name': fencedev_name, 'nodename': nodename })
 				else:
 					instance_list.append({'name': fencedev_name })
@@ -1938,7 +1874,7 @@
 		conf = str(model.exportModelAsString())
 		if not conf:
 			raise Exception, 'model string is blank'
-		luci_log.debug_verbose('vNFC16: exported \"%s\"' % conf)
+		luci_log.debug_verbose('vNFC16: exported "%s"' % conf)
 	except Exception, e:
 		luci_log.debug_verbose('vNFC17: exportModelAsString failed: %s' \
 			% str(e))
@@ -1950,7 +1886,7 @@
 		return (False, {'errors': ['Unable to find a ricci agent for the %s cluster' % clustername ]})
 	ragent = rc.hostname()
 
-	batch_number, result = setClusterConf(rc, conf)
+	batch_number, result = rq.setClusterConf(rc, conf)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('vNFC19: missing batch and/or result')
 		return (False, {'errors': [ 'An error occurred while constructing the new cluster configuration.' ]})
@@ -1961,7 +1897,7 @@
 		luci_log.debug_verbose('vNFC20: failed to set flags: %s' % str(e))
 
 	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + NODE + "&clustername=" + clustername + '&nodename=' + nodename + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&nodename=%s&busyfirst=true' % (request['URL'], NODE, clustername, nodename))
 
 def deleteFenceDevice(self, request):
   errors = list()
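
Nearly every validator in this file now ends with the same
propagate-and-redirect sequence; condensed below, with the error handling
that the real callers wrap around each step omitted:

    conf = str(model.exportModelAsString())
    rc = getRicciAgent(self, clustername)
    batch_number, result = rq.setClusterConf(rc, conf)
    set_node_flag(self, clustername, rc.hostname(), str(batch_number),
                  CLUSTER_CONFIG, 'Updating cluster configuration')
    request.RESPONSE.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
        % (request['URL'], CLUSTER_CONFIG, clustername))
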
@@ -2069,7 +2005,7 @@
         % clustername)
 
     if rc:
-      batch_id, result = setClusterConf(rc, str(conf_str))
+      batch_id, result = rq.setClusterConf(rc, str(conf_str))
       if batch_id is None or result is None:
         luci_log.debug_verbose('VFA: setCluserConf: batchid or result is None')
         errors.append('Unable to propagate the new cluster configuration for %s' \
@@ -2077,11 +2013,12 @@
       else:
         try:
           set_node_flag(self, clustername, rc.hostname(), batch_id,
-            CLUSTER_CONFIG, 'Removing fence device \"%s\"' % fencedev_name)
+            CLUSTER_CONFIG, 'Removing fence device "%s"' % fencedev_name)
         except:
           pass
 
-    response.redirect(request['URL'] + "?pagetype=" + FENCEDEVS + "&clustername=" + clustername + '&busyfirst=true')
+    response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (request['URL'], FENCEDEVS, clustername))
     return (True, {'errors': errors, 'messages': messages})
   else:
     errors.append(error_string)
@@ -2137,7 +2074,8 @@
 
 	if len(enable_list) < 1 and len(disable_list) < 1:
 		luci_log.debug_verbose('VDP4: no changes made')
-		response.redirect(request['URL'] + "?pagetype=" + NODE + "&clustername=" + clustername + '&nodename=' + nodename)
+		response.redirect('%s?pagetype=%s&clustername=%s&nodename=%s' \
+			% (request['URL'], NODE, clustername, nodename))
 
 	nodename_resolved = resolve_nodename(self, clustername, nodename)
 	try:
@@ -2149,18 +2087,19 @@
 		errors.append('Unable to connect to the ricci agent on %s to update cluster daemon properties' % nodename_resolved)
 		return (False, {'errors': errors})
 
-	batch_id, result = updateServices(rc, enable_list, disable_list)
+	batch_id, result = rq.updateServices(rc, enable_list, disable_list)
 	if batch_id is None or result is None:
 		luci_log.debug_verbose('VDP6: setCluserConf: batchid or result is None')
 		errors.append('Unable to update the cluster daemon properties on node %s' % nodename_resolved)
 		return (False, {'errors': errors})
 
 	try:
-		status_msg = 'Updating %s daemon properties:' % nodename_resolved
 		if len(enable_list) > 0:
-			status_msg += ' enabling %s' % str(enable_list)[1:-1]
+			status_msg = 'Updating node "%s" daemon properties: enabling "%s"' \
+				% (nodename_resolved, str(enable_list)[1:-1])
+		else:
+			status_msg = 'Updating node "%s" daemon properties:' % nodename_resolved
 		if len(disable_list) > 0:
-			status_msg += ' disabling %s' % str(disable_list)[1:-1]
+			status_msg = '%s disabling "%s"' % (status_msg, str(disable_list)[1:-1])
 		set_node_flag(self, clustername, rc.hostname(), batch_id, CLUSTER_DAEMON, status_msg)
 	except:
 		pass
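
For reference, the str(list)[1:-1] idiom used in the status messages above
strips the brackets from a list's repr, yielding a readable, quoted
enumeration:

    daemons = ['ccsd', 'cman']
    print str(daemons)[1:-1]    # -> 'ccsd', 'cman'
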
@@ -2168,7 +2107,7 @@
 	if len(errors) > 0:
 		return (False, {'errors': errors})
 
-	response.redirect(request['URL'] + "?pagetype=" + NODE + "&clustername=" + clustername + '&nodename=' + nodename + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&nodename=%s&busyfirst=true' % (request['URL'], NODE, clustername, nodename))
 
 def validateFdom(self, request):
 	errors = list()
@@ -2227,14 +2166,14 @@
 
 	if oldname is None or oldname != name:
 		if model.getFailoverDomainByName(name) is not None:
-			errors.append('A failover domain named \"%s\" already exists.' % name)
+			errors.append('A failover domain named "%s" already exists.' % name)
 
 	fdom = None
 	if oldname is not None:
 		fdom = model.getFailoverDomainByName(oldname)
 		if fdom is None:
 			luci_log.debug_verbose('validateFdom1: No fdom named %s exists' % oldname)
-			errors.append('No failover domain named \"%s" exists.' % oldname)
+			errors.append('No failover domain named "%s" exists.' % oldname)
 		else:
 			fdom.addAttribute('name', name)
 			fdom.children = list()
@@ -2264,7 +2203,7 @@
 			if prioritized:
 				priority = 1
 				try:
-					priority = int(request.form['__PRIORITY__' + i].strip())
+					priority = int(request.form['__PRIORITY__%s' % i].strip())
 					if priority < 1:
 						priority = 1
 				except Exception, e:
@@ -2291,21 +2230,22 @@
 		return (False, {'errors': ['Unable to find a ricci agent for the %s cluster' % clustername ]})
 	ragent = rc.hostname()
 
-	batch_number, result = setClusterConf(rc, conf)
+	batch_number, result = rq.setClusterConf(rc, conf)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('validateFdom4: missing batch and/or result')
 		return (False, {'errors': [ 'An error occurred while constructing the new cluster configuration.' ]})
 
 	try:
 		if oldname:
-			set_node_flag(self, clustername, ragent, str(batch_number), FDOM, 'Updating failover domain \"%s\"' % oldname)
+			set_node_flag(self, clustername, ragent, str(batch_number), FDOM, 'Updating failover domain "%s"' % oldname)
 		else:
-			set_node_flag(self, clustername, ragent, str(batch_number), FDOM_ADD, 'Creating failover domain \"%s\"' % name)
+			set_node_flag(self, clustername, ragent, str(batch_number), FDOM_ADD, 'Creating failover domain "%s"' % name)
 	except Exception, e:
 		luci_log.debug_verbose('validateFdom5: failed to set flags: %s' % str(e))
 
 	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + FDOM + "&clustername=" + clustername + '&fdomname=' + name + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&fdomname=%s&busyfirst=true' \
+		% (request['URL'], FDOM, clustername, name))
 
 def validateVM(self, request):
 	errors = list()
@@ -2353,7 +2293,7 @@
 			recovery = None
 		else:
 			if recovery != 'restart' and recovery != 'relocate' and recovery != 'disable':
-				errors.append('You entered an invalid recovery option: \"%s\" Valid options are \"restart\" \"relocate\" and \"disable\"')
+				errors.append('You entered an invalid recovery option: "%s". Valid options are "restart", "relocate" and "disable".' % recovery)
 	except:
 		recovery = None
 
@@ -2386,7 +2326,7 @@
 			rmptr.removeChild(xvm)
 			delete_vm = True
 		except:
-			return (False, {'errors': ['No virtual machine service named \"%s\" exists.' % old_name ]})
+			return (False, {'errors': ['No virtual machine service named "%s" exists.' % old_name ]})
 	else:
 		if isNew is True:
 			xvm = Vm()
@@ -2400,7 +2340,7 @@
 				if not xvm:
 					raise Exception, 'not found'
 			except:
-				return (False, {'errors': ['No virtual machine service named \"%s\" exists.' % old_name ]})
+				return (False, {'errors': ['No virtual machine service named "%s" exists.' % old_name ]})
 			xvm.addAttribute('name', vm_name)
 			xvm.addAttribute('path', vm_path)
 
@@ -2447,7 +2387,7 @@
 		luci_log.debug_verbose('validateVM4: no ricci for %s' % clustername)
 		return (False, {'errors': ['Unable to contact a ricci agent for this cluster.']})
 
-	batch_number, result = setClusterConf(rc, stringbuf)
+	batch_number, result = rq.setClusterConf(rc, stringbuf)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('validateVM5: missing batch and/or result')
 		return (False, {'errors': [ 'Error creating virtual machine %s.' % vm_name ]})
@@ -2463,7 +2403,8 @@
 		luci_log.debug_verbose('validateVM6: failed to set flags: %s' % str(e))
 
 	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + SERVICES + "&clustername=" + clustername + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (request['URL'], SERVICES, clustername))
 
 formValidators = {
 	6: validateCreateCluster,
@@ -2499,6 +2440,14 @@
 		return formValidators[pagetype](self, request)
 
 
+# Policy for showing the cluster chooser menu:
+# 1) If there are no clusters in the ManagedClusterSystems
+# folder, then only the admin user may see this menu, and
+# the configure option should not be displayed.
+# 2) If there are clusters in the ManagedClusterSystems folder,
+# then only display the chooser if the current user has
+# permissions on at least one. If the user is admin, show ALL clusters.
+
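
Read literally, the policy reduces to a predicate like the following
sketch; is_admin() is a hypothetical helper standing in for however the
admin check is actually made, and has_permission is used as in
check_clusters below:

    def show_cluster_chooser(user, clusters):
        if len(clusters) < 1:
            return is_admin(user)   # assumed helper: only admin sees menu
        if is_admin(user):
            return True             # admin sees ALL clusters
        # otherwise require permission on at least one cluster
        return len(filter(lambda c: user.has_permission('View', c),
                          clusters)) > 0
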
 def createCluChooser(self, request, systems):
   dummynode = {}
 
@@ -2514,8 +2463,8 @@
     except:
       pass
 
-  #First, see if a cluster is chosen, then
-  #check that the current user can access that system
+  # First, see if a cluster is chosen, then
+  # check that the current user can access that system
   cname = None
   try:
     cname = request[CLUNAME]
@@ -2532,11 +2481,10 @@
   except:
     pagetype = '3'
 
-
   cldata = {}
   cldata['Title'] = "Cluster List"
   cldata['cfg_type'] = "clusters"
-  cldata['absolute_url'] = url + "?pagetype=" + CLUSTERLIST
+  cldata['absolute_url'] = '%s?pagetype=%s' % (url, CLUSTERLIST)
   cldata['Description'] = "Clusters available for configuration"
   if pagetype == CLUSTERLIST:
     cldata['currentItem'] = True
@@ -2548,7 +2496,7 @@
     cladd = {}
     cladd['Title'] = "Create a New Cluster"
     cladd['cfg_type'] = "clusteradd"
-    cladd['absolute_url'] = url + "?pagetype=" + CLUSTER_ADD
+    cladd['absolute_url'] = '%s?pagetype=%s' % (url, CLUSTER_ADD)
     cladd['Description'] = "Create a Cluster"
     if pagetype == CLUSTER_ADD:
       cladd['currentItem'] = True
@@ -2558,7 +2506,7 @@
   clcfg = {}
   clcfg['Title'] = "Configure"
   clcfg['cfg_type'] = "clustercfg"
-  clcfg['absolute_url'] = url + "?pagetype=" + CLUSTERS
+  clcfg['absolute_url'] = '%s?pagetype=%s' % (url, CLUSTERS)
   clcfg['Description'] = "Configure a cluster"
   if pagetype == CLUSTERS:
     clcfg['currentItem'] = True
@@ -2579,7 +2527,7 @@
     clsys = {}
     clsys['Title'] = system[0]
     clsys['cfg_type'] = "cluster"
-    clsys['absolute_url'] = url + "?pagetype=" + CLUSTER + "&clustername=" + system[0]
+    clsys['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, CLUSTER, system[0])
     clsys['Description'] = "Configure this cluster"
 
     if pagetype == CLUSTER or pagetype == CLUSTER_CONFIG:
@@ -2615,7 +2563,7 @@
   if not model:
     return {}
 
-  #There should be a positive page type
+  # There should be a positive page type
   try:
     pagetype = request[PAGETYPE]
   except:
@@ -2626,14 +2574,14 @@
   except:
     url = "/luci/cluster/index_html"
 
-  #The only way this method can run is if there exists
-  #a clustername query var
+  # The only way this method can run is if there exists
+  # a clustername query var
   cluname = request['clustername']
 
   nd = {}
   nd['Title'] = "Nodes"
   nd['cfg_type'] = "nodes"
-  nd['absolute_url'] = url + "?pagetype=" + NODES + "&clustername=" + cluname
+  nd['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, NODES, cluname)
   nd['Description'] = "Node configuration for this cluster"
   if pagetype == NODES or pagetype == NODE_GRID or pagetype == NODE_LIST or pagetype == NODE_CONFIG or pagetype == NODE_ADD or pagetype == NODE:
     nd['show_children'] = True
@@ -2651,7 +2599,7 @@
   ndadd = {}
   ndadd['Title'] = "Add a Node"
   ndadd['cfg_type'] = "nodeadd"
-  ndadd['absolute_url'] = url + "?pagetype=" + NODE_ADD + "&clustername=" + cluname
+  ndadd['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, NODE_ADD, cluname)
   ndadd['Description'] = "Add a node to this cluster"
   if pagetype == NODE_ADD:
     ndadd['currentItem'] = True
@@ -2661,7 +2609,7 @@
   ndcfg = {}
   ndcfg['Title'] = "Configure"
   ndcfg['cfg_type'] = "nodecfg"
-  ndcfg['absolute_url'] = url + "?pagetype=" + NODE_CONFIG + "&clustername=" + cluname
+  ndcfg['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, NODE_CONFIG, cluname)
   ndcfg['Description'] = "Configure cluster nodes"
   if pagetype == NODE_CONFIG or pagetype == NODE or pagetype == NODES or pagetype == NODE_LIST or pagetype == NODE_GRID or pagetype == NODE_ADD:
     ndcfg['show_children'] = True
@@ -2682,7 +2630,7 @@
     cfg = {}
     cfg['Title'] = nodename
     cfg['cfg_type'] = "node"
-    cfg['absolute_url'] = url + "?pagetype=" + NODE + "&nodename=" + nodename + "&clustername=" + cluname
+    cfg['absolute_url'] = '%s?pagetype=%s&nodename=%s&clustername=%s' % (url, NODE, nodename, cluname)
     cfg['Description'] = "Configure this cluster node"
     if pagetype == NODE:
       try:
@@ -2711,7 +2659,7 @@
   sv = {}
   sv['Title'] = "Services"
   sv['cfg_type'] = "services"
-  sv['absolute_url'] = url + "?pagetype=" + SERVICES + "&clustername=" + cluname
+  sv['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, SERVICES, cluname)
   sv['Description'] = "Service configuration for this cluster"
   if pagetype == SERVICES or pagetype == SERVICE_CONFIG or pagetype == SERVICE_ADD or pagetype == SERVICE or pagetype == SERVICE_LIST or pagetype == VM_ADD or pagetype == VM_CONFIG:
     sv['show_children'] = True
@@ -2725,7 +2673,7 @@
   svadd = {}
   svadd['Title'] = "Add a Service"
   svadd['cfg_type'] = "serviceadd"
-  svadd['absolute_url'] = url + "?pagetype=" + SERVICE_ADD + "&clustername=" + cluname
+  svadd['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, SERVICE_ADD, cluname)
   svadd['Description'] = "Add a Service to this cluster"
   if pagetype == SERVICE_ADD:
     svadd['currentItem'] = True
@@ -2736,7 +2684,7 @@
     vmadd = {}
     vmadd['Title'] = "Add a Virtual Service"
     vmadd['cfg_type'] = "vmadd"
-    vmadd['absolute_url'] = url + "?pagetype=" + VM_ADD + "&clustername=" + cluname
+    vmadd['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, VM_ADD, cluname)
     vmadd['Description'] = "Add a Virtual Service to this cluster"
     if pagetype == VM_ADD:
       vmadd['currentItem'] = True
@@ -2746,7 +2694,7 @@
   svcfg = {}
   svcfg['Title'] = "Configure a Service"
   svcfg['cfg_type'] = "servicecfg"
-  svcfg['absolute_url'] = url + "?pagetype=" + SERVICE_CONFIG + "&clustername=" + cluname
+  svcfg['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, SERVICE_CONFIG, cluname)
   svcfg['Description'] = "Configure a Service for this cluster"
   if pagetype == SERVICE_CONFIG or pagetype == SERVICE or pagetype == VM_CONFIG:
     svcfg['show_children'] = True
@@ -2759,12 +2707,13 @@
 
   services = model.getServices()
   serviceable = list()
+
   for service in services:
     servicename = service.getName()
     svc = {}
     svc['Title'] = servicename
     svc['cfg_type'] = "service"
-    svc['absolute_url'] = url + "?pagetype=" + SERVICE + "&servicename=" + servicename + "&clustername=" + cluname
+    svc['absolute_url'] = '%s?pagetype=%s&servicename=%s&clustername=%s' % (url, SERVICE, servicename, cluname)
     svc['Description'] = "Configure this service"
     if pagetype == SERVICE:
       try:
@@ -2786,7 +2735,7 @@
     svc = {}
     svc['Title'] = name
     svc['cfg_type'] = "vm"
-    svc['absolute_url'] = url + "?pagetype=" + VM_CONFIG + "&servicename=" + name + "&clustername=" + cluname
+    svc['absolute_url'] = '%s?pagetype=%s&servicename=%s&clustername=%s' % (url, VM_CONFIG, name, cluname)
     svc['Description'] = "Configure this Virtual Service"
     if pagetype == VM_CONFIG:
       try:
@@ -2816,7 +2765,7 @@
   rv = {}
   rv['Title'] = "Resources"
   rv['cfg_type'] = "resources"
-  rv['absolute_url'] = url + "?pagetype=" + RESOURCES + "&clustername=" + cluname
+  rv['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, RESOURCES, cluname)
   rv['Description'] = "Resource configuration for this cluster"
   if pagetype == RESOURCES or pagetype == RESOURCE_CONFIG or pagetype == RESOURCE_ADD or pagetype == RESOURCE:
     rv['show_children'] = True
@@ -2830,7 +2779,7 @@
   rvadd = {}
   rvadd['Title'] = "Add a Resource"
   rvadd['cfg_type'] = "resourceadd"
-  rvadd['absolute_url'] = url + "?pagetype=" + RESOURCE_ADD + "&clustername=" + cluname
+  rvadd['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, RESOURCE_ADD, cluname)
   rvadd['Description'] = "Add a Resource to this cluster"
   if pagetype == RESOURCE_ADD:
     rvadd['currentItem'] = True
@@ -2840,7 +2789,7 @@
   rvcfg = {}
   rvcfg['Title'] = "Configure a Resource"
   rvcfg['cfg_type'] = "resourcecfg"
-  rvcfg['absolute_url'] = url + "?pagetype=" + RESOURCE_CONFIG + "&clustername=" + cluname
+  rvcfg['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, RESOURCE_CONFIG, cluname)
   rvcfg['Description'] = "Configure a Resource for this cluster"
   if pagetype == RESOURCE_CONFIG or pagetype == RESOURCE:
     rvcfg['show_children'] = True
@@ -2858,7 +2807,7 @@
     rvc = {}
     rvc['Title'] = resourcename
     rvc['cfg_type'] = "resource"
-    rvc['absolute_url'] = url + "?pagetype=" + RESOURCE + "&resourcename=" + resourcename + "&clustername=" + cluname
+    rvc['absolute_url'] = '%s?pagetype=%s&resourcename=%s&clustername=%s' % (url, RESOURCE, resourcename, cluname)
     rvc['Description'] = "Configure this resource"
     if pagetype == RESOURCE:
       try:
@@ -2885,7 +2834,7 @@
   fd = {}
   fd['Title'] = "Failover Domains"
   fd['cfg_type'] = "failoverdomains"
-  fd['absolute_url'] = url + "?pagetype=" + FDOMS + "&clustername=" + cluname
+  fd['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, FDOMS, cluname)
   fd['Description'] = "Failover domain configuration for this cluster"
   if pagetype == FDOMS or pagetype == FDOM_CONFIG or pagetype == FDOM_ADD or pagetype == FDOM:
     fd['show_children'] = True
@@ -2899,7 +2848,7 @@
   fdadd = {}
   fdadd['Title'] = "Add a Failover Domain"
   fdadd['cfg_type'] = "failoverdomainadd"
-  fdadd['absolute_url'] = url + "?pagetype=" + FDOM_ADD + "&clustername=" + cluname
+  fdadd['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, FDOM_ADD, cluname)
   fdadd['Description'] = "Add a Failover Domain to this cluster"
   if pagetype == FDOM_ADD:
     fdadd['currentItem'] = True
@@ -2909,7 +2858,7 @@
   fdcfg = {}
   fdcfg['Title'] = "Configure a Failover Domain"
   fdcfg['cfg_type'] = "failoverdomaincfg"
-  fdcfg['absolute_url'] = url + "?pagetype=" + FDOM_CONFIG + "&clustername=" + cluname
+  fdcfg['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, FDOM_CONFIG, cluname)
   fdcfg['Description'] = "Configure a Failover Domain for this cluster"
   if pagetype == FDOM_CONFIG or pagetype == FDOM:
     fdcfg['show_children'] = True
@@ -2927,7 +2876,7 @@
     fdc = {}
     fdc['Title'] = fdomname
     fdc['cfg_type'] = "fdom"
-    fdc['absolute_url'] = url + "?pagetype=" + FDOM + "&fdomname=" + fdomname + "&clustername=" + cluname
+    fdc['absolute_url'] = '%s?pagetype=%s&fdomname=%s&clustername=%s' % (url, FDOM, fdomname, cluname)
     fdc['Description'] = "Configure this Failover Domain"
     if pagetype == FDOM:
       try:
@@ -2954,7 +2903,7 @@
   fen = {}
   fen['Title'] = "Shared Fence Devices"
   fen['cfg_type'] = "fencedevicess"
-  fen['absolute_url'] = url + "?pagetype=" + FENCEDEVS + "&clustername=" + cluname
+  fen['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, FENCEDEVS, cluname)
   fen['Description'] = "Fence Device configuration for this cluster"
   if pagetype == FENCEDEVS or pagetype == FENCEDEV_CONFIG or pagetype == FENCEDEV_ADD or pagetype == FENCEDEV:
     fen['show_children'] = True
@@ -2968,7 +2917,7 @@
   fenadd = {}
   fenadd['Title'] = "Add a Fence Device"
   fenadd['cfg_type'] = "fencedeviceadd"
-  fenadd['absolute_url'] = url + "?pagetype=" + FENCEDEV_ADD + "&clustername=" + cluname
+  fenadd['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, FENCEDEV_ADD, cluname)
   fenadd['Description'] = "Add a Fence Device to this cluster"
   if pagetype == FENCEDEV_ADD:
     fenadd['currentItem'] = True
@@ -2978,7 +2927,7 @@
   fencfg = {}
   fencfg['Title'] = "Configure a Fence Device"
   fencfg['cfg_type'] = "fencedevicecfg"
-  fencfg['absolute_url'] = url + "?pagetype=" + FENCEDEV_CONFIG + "&clustername=" + cluname
+  fencfg['absolute_url'] = '%s?pagetype=%s&clustername=%s' % (url, FENCEDEV_CONFIG, cluname)
   fencfg['Description'] = "Configure a Fence Device for this cluster"
   if pagetype == FENCEDEV_CONFIG or pagetype == FENCEDEV:
     fencfg['show_children'] = True
@@ -2996,7 +2945,7 @@
     fenc = {}
     fenc['Title'] = fencename
     fenc['cfg_type'] = "fencedevice"
-    fenc['absolute_url'] = url + "?pagetype=" + FENCEDEV + "&fencename=" + fencename + "&clustername=" + cluname
+    fenc['absolute_url'] = '%s?pagetype=%s&fencename=%s&clustername=%s' % (url, FENCEDEV, fencename, cluname)
     fenc['Description'] = "Configure this Fence Device"
     if pagetype == FENCEDEV:
       try:
@@ -3032,18 +2981,16 @@
 
   return dummynode
 
-
 def getClusterName(self, model):
-  return model.getClusterName()
+	return model.getClusterName()
 
 def getClusterAlias(self, model):
-  if not model:
-    return ''
-  alias = model.getClusterAlias()
-  if alias is None:
-    return model.getClusterName()
-  else:
-    return alias
+	if not model:
+		return ''
+	alias = model.getClusterAlias()
+	if alias is None:
+		return model.getClusterName()
+	return alias
 
 def getClusterURL(self, request, model):
 	try:
@@ -3110,18 +3057,17 @@
 
   return portaltabs
 
-
-
 def check_clusters(self, clusters):
-  clist = list()
-  for cluster in clusters:
-    if cluster_permission_check(cluster[1]):
-      clist.append(cluster)
+	sm = AccessControl.getSecurityManager()
+	user = sm.getUser()
 
-  return clist
+	clist = list()
+	for cluster in clusters:
+		if user.has_permission('View', cluster):
+			clist.append(cluster)
+	return clist
 
 def cluster_permission_check(cluster):
-	#Does this take too long?
 	try:
 		sm = AccessControl.getSecurityManager()
 		user = sm.getUser()
@@ -3133,7 +3079,7 @@
 
 def getRicciAgent(self, clustername):
 	#Check cluster permission here! return none if false
-	path = str(CLUSTER_FOLDER_PATH + clustername)
+	path = '%s%s' % (CLUSTER_FOLDER_PATH, clustername)
 
 	try:
 		clusterfolder = self.restrictedTraverse(path)
@@ -3315,7 +3261,7 @@
 	results.append(vals)
 
 	try:
-		cluster_path = CLUSTER_FOLDER_PATH + clustername
+		cluster_path = '%s%s' % (CLUSTER_FOLDER_PATH, clustername)
 		nodelist = self.restrictedTraverse(cluster_path).objectItems('Folder')
 	except Exception, e:
 		luci_log.debug_verbose('GCSDB0: %s -> %s: %s' \
@@ -3346,7 +3292,7 @@
 
 def getClusterStatus(self, request, rc, cluname=None):
 	try:
-		doc = getClusterStatusBatch(rc)
+		doc = rq.getClusterStatusBatch(rc)
 		if not doc:
 			raise Exception, 'doc is None'
 	except Exception, e:
@@ -3428,7 +3374,7 @@
 	return results
 
 def getServicesInfo(self, status, model, req):
-	map = {}
+	svc_map = {}
 	maplist = list()
 
 	try:
@@ -3461,39 +3407,39 @@
 				cur_node = item['nodename']
 				itemmap['running'] = "true"
 				itemmap['nodename'] = cur_node
-				itemmap['disableurl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + item['name'] + "&pagetype=" + SERVICE_STOP
-				itemmap['restarturl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + item['name'] + "&pagetype=" + SERVICE_RESTART
+				itemmap['disableurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, item['name'], SERVICE_STOP)
+				itemmap['restarturl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, item['name'], SERVICE_RESTART)
 			else:
-				itemmap['enableurl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + item['name'] + "&pagetype=" + SERVICE_START
+				itemmap['enableurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, item['name'], SERVICE_START)
 
 			itemmap['autostart'] = item['autostart']
 
 			try:
 				svc = model.retrieveServiceByName(item['name'])
-				itemmap['cfgurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + SERVICE
-				itemmap['delurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + SERVICE_DELETE
+				itemmap['cfgurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, item['name'], SERVICE)
+				itemmap['delurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, item['name'], SERVICE_DELETE)
 			except:
 				try:
 					svc = model.retrieveVMsByName(item['name'])
 					itemmap['is_vm'] = True
-					itemmap['cfgurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + VM_CONFIG 
-					itemmap['delurl'] = baseurl + "?" + "clustername=" + cluname + "&servicename=" + item['name'] + "&pagetype=" + VM_CONFIG
+					itemmap['cfgurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, item['name'], VM_CONFIG)
+					itemmap['delurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, item['name'], VM_CONFIG)
 				except:
 					continue
 
 			starturls = list()
 			for node in nodes:
+				cur_nodename = node.getName()
 				if node.getName() != cur_node:
 					starturl = {}
-					cur_nodename = node.getName()
 					starturl['nodename'] = cur_nodename
-					starturl['url'] = baseurl + '?' + 'clustername=' + cluname +'&servicename=' + item['name'] + '&pagetype=' + SERVICE_START + '&nodename=' + node.getName()
+					starturl['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, item['name'], SERVICE_START, cur_nodename)
 					starturls.append(starturl)
 
 					if itemmap.has_key('is_vm') and itemmap['is_vm'] is True:
 						migrate_url = { 'nodename': cur_nodename }
 						migrate_url['migrate'] = True
-						migrate_url['url'] = baseurl + '?' + 'clustername=' + cluname +'&servicename=' + item['name'] + '&pagetype=' + SERVICE_MIGRATE + '&nodename=' + node.getName()
+						migrate_url['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, item['name'], SERVICE_MIGRATE, cur_nodename)
 						starturls.append(migrate_url)
 
 			itemmap['links'] = starturls
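
A small helper would cut the copy-and-paste these URL maps require
(purely illustrative, not part of the patch):

    def build_url(baseurl, pagetype, **params):
        pairs = ['%s=%s' % (k, v) for k, v in params.items()]
        return '%s?pagetype=%s&%s' % (baseurl, pagetype, '&'.join(pairs))

    # itemmap['delurl'] = build_url(baseurl, SERVICE_DELETE,
    #     clustername=cluname, servicename=item['name'])
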
@@ -3505,13 +3451,14 @@
 				itemmap['faildom'] = "No Failover Domain"
 			maplist.append(itemmap)
 
-	map['services'] = maplist
-	return map
+	svc_map['services'] = maplist
+	return svc_map
 
 def get_fdom_names(model):
 	return map(lambda x: x.getName(), model.getFailoverDomains())
 
 def getServiceInfo(self, status, model, req):
+	from Products.Archetypes.utils import make_uuid
 	#set up struct for service config page
 	hmap = {}
 	root_uuid = 'toplevel'
@@ -3561,11 +3508,11 @@
 				if item['running'] == 'true':
 					hmap['running'] = 'true'
 					nodename = item['nodename']
-					innermap['current'] = 'This service is currently running on %s' % nodename
+					innermap['current'] = 'Running on %s' % nodename
 
-					innermap['disableurl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_STOP
-					innermap['restarturl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_RESTART
-					innermap['delurl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_DELETE
+					innermap['disableurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, servicename, SERVICE_STOP)
+					innermap['restarturl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, servicename, SERVICE_RESTART)
+					innermap['delurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, servicename, SERVICE_DELETE)
 
 					#In this case, determine where it can run...
 					nodes = model.getNodes()
@@ -3574,20 +3521,20 @@
 							starturl = {}
 							cur_nodename = node.getName()
 							starturl['nodename'] = cur_nodename
-							starturl['url'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_START + "&nodename=" + node.getName()
+							starturl['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, servicename, SERVICE_START, cur_nodename)
 							starturls.append(starturl)
 
 							if item.has_key('is_vm') and item['is_vm'] is True:
 								migrate_url = { 'nodename': cur_nodename }
-								migrate_url['url'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_MIGRATE + "&nodename=" + node.getName()
+								migrate_url['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, servicename, SERVICE_MIGRATE, cur_nodename)
 								migrate_url['migrate'] = True
 								starturls.append(migrate_url)
 					innermap['links'] = starturls
 				else:
 					#Do not set ['running'] in this case...ZPT will detect it is missing
-					innermap['current'] = "This service is currently stopped"
-					innermap['enableurl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_START
-					innermap['delurl'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_DELETE
+					innermap['current'] = "Stopped"
+					innermap['enableurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, servicename, SERVICE_START)
+					innermap['delurl'] = '%s?clustername=%s&servicename=%s&pagetype=%s' % (baseurl, cluname, servicename, SERVICE_DELETE)
 
 					nodes = model.getNodes()
 					starturls = list()
@@ -3596,12 +3543,12 @@
 						cur_nodename = node.getName()
 
 						starturl['nodename'] = cur_nodename
-						starturl['url'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_START + "&nodename=" + node.getName()
+						starturl['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, servicename, SERVICE_START, cur_nodename)
 						starturls.append(starturl)
 
 						if item.has_key('is_vm') and item['is_vm'] is True:
 							migrate_url = { 'nodename': cur_nodename }
-							migrate_url['url'] = baseurl + "?" + "clustername=" + cluname +"&servicename=" + servicename + "&pagetype=" + SERVICE_MIGRATE + "&nodename=" + node.getName()
+							migrate_url['url'] = '%s?clustername=%s&servicename=%s&pagetype=%s&nodename=%s' % (baseurl, cluname, servicename, SERVICE_MIGRATE, cur_nodename)
 							migrate_url['migrate'] = True
 							starturls.append(migrate_url)
 					innermap['links'] = starturls
@@ -3712,22 +3659,25 @@
 			% svcname)
 		return None
 
-	batch_number, result = startService(rc, svcname, nodename)
+	batch_number, result = rq.startService(rc, svcname, nodename)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('serviceStart3: SS(%s,%s,%s) call failed' \
 			% (svcname, cluname, nodename))
 		return None
 
 	try:
-		status_msg = "Starting service \'%s\'" % svcname
 		if nodename:
-			status_msg += " on node \'%s\'" % nodename
+			status_msg = 'Starting service "%s" on node "%s"' \
+				% (svcname, nodename)
+		else:
+			status_msg = 'Starting service "%s"' % svcname
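+		# Record the pending batch job in the database so the cluster
+		# busy page can track its progress.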
 		set_node_flag(self, cluname, rc.hostname(), str(batch_number), SERVICE_START, status_msg)
 	except Exception, e:
 		luci_log.debug_verbose('serviceStart4: error setting flags for service %s at node %s for cluster %s' % (svcname, nodename, cluname))
 
 	response = req.RESPONSE
-	response.redirect(req['URL'] + "?pagetype=" + SERVICE_LIST + "&clustername=" + cluname + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (req['URL'], SERVICE_LIST, cluname))
 
 def serviceMigrate(self, rc, req):
 	svcname = None
@@ -3770,7 +3720,7 @@
 			% svcname)
 		return None
 
-	batch_number, result = migrateService(rc, svcname, nodename)
+	batch_number, result = rq.migrateService(rc, svcname, nodename)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('serviceMigrate3: SS(%s,%s,%s) call failed' \
 			% (svcname, cluname, nodename))
@@ -3782,7 +3732,8 @@
 		luci_log.debug_verbose('serviceMigrate4: error setting flags for service %s at node %s for cluster %s' % (svcname, nodename, cluname))
 
 	response = req.RESPONSE
-	response.redirect(req['URL'] + "?pagetype=" + SERVICE_LIST + "&clustername=" + cluname + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (req['URL'], SERVICE_LIST, cluname))
 
 def serviceRestart(self, rc, req):
 	svcname = None
@@ -3811,7 +3762,7 @@
 		luci_log.debug_verbose('serviceRestart1: no cluster for %s' % svcname)
 		return None
 
-	batch_number, result = restartService(rc, svcname)
+	batch_number, result = rq.restartService(rc, svcname)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('serviceRestart2: %s failed' % svcname)
 		return None
@@ -3822,7 +3773,8 @@
 		luci_log.debug_verbose('serviceRestart3: error setting flags for service %s for cluster %s' % (svcname, cluname))
 
 	response = req.RESPONSE
-	response.redirect(req['URL'] + "?pagetype=" + SERVICE_LIST + "&clustername=" + cluname + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (req['URL'], SERVICE_LIST, cluname))
 
 def serviceStop(self, rc, req):
 	svcname = None
@@ -3851,7 +3803,7 @@
 		luci_log.debug_verbose('serviceStop1: no cluster name for %s' % svcname)
 		return None
 
-	batch_number, result = stopService(rc, svcname)
+	batch_number, result = rq.stopService(rc, svcname)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('serviceStop2: stop %s failed' % svcname)
 		return None
@@ -3862,7 +3814,8 @@
 		luci_log.debug_verbose('serviceStop3: error setting flags for service %s for cluster %s' % (svcname, cluname))
 
 	response = req.RESPONSE
-	response.redirect(req['URL'] + "?pagetype=" + SERVICE_LIST + "&clustername=" + cluname + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (req['URL'], SERVICE_LIST, cluname))
 
 def getFdomInfo(self, model, request):
 	fhash = {}
@@ -3913,7 +3866,8 @@
   for fdom in fdoms:
     fdom_map = {}
     fdom_map['name'] = fdom.getName()
-    fdom_map['cfgurl'] = baseurl + "?pagetype=" + FDOM + "&clustername=" + clustername + '&fdomname=' + fdom.getName()
+    fdom_map['cfgurl'] = '%s?pagetype=%s&clustername=%s&fdomname=%s' \
+		% (baseurl, FDOM, clustername, fdom.getName())
     ordered_attr = fdom.getAttribute('ordered')
     restricted_attr = fdom.getAttribute('restricted')
     if ordered_attr is not None and (ordered_attr == "true" or ordered_attr == "1"):
@@ -3933,7 +3887,8 @@
         if nitem['name'] == ndname:
           break
       nodesmap['nodename'] = ndname
-      nodesmap['nodecfgurl'] = baseurl + "?clustername=" + clustername + "&nodename=" + ndname + "&pagetype=" + NODE
+      nodesmap['nodecfgurl'] = '%s?clustername=%s&nodename=%s&pagetype=%s' \
+		% (baseurl, clustername, ndname, NODE)
       if nitem['clustered'] == "true":
         nodesmap['status'] = NODE_ACTIVE
       elif nitem['online'] == "false":
@@ -3959,7 +3914,8 @@
           svcmap = {}
           svcmap['name'] = svcname
           svcmap['status'] = sitem['running']
-          svcmap['svcurl'] = baseurl + "?pagetype=" + SERVICE + "&clustername=" + clustername + "&servicename=" + svcname
+          svcmap['svcurl'] = '%s?pagetype=%s&clustername=%s&servicename=%s' \
+			% (baseurl, SERVICE, clustername, svcname)
           svcmap['location'] = sitem['nodename']
           svclist.append(svcmap)
     fdom_map['svclist'] = svclist
@@ -4044,8 +4000,9 @@
     if totem:
       clumap['totem'] = totem.getAttributes()
 
-  prop_baseurl = req['URL'] + '?' + PAGETYPE + '=' + CLUSTER_CONFIG + '&' + CLUNAME + '=' + cluname + '&'
-  basecluster_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_GENERAL_TAB
+  prop_baseurl = '%s?pagetype=%s&clustername=%s&' \
+	% (req['URL'], CLUSTER_CONFIG, cluname)
+  basecluster_url = '%stab=%s' % (prop_baseurl, PROP_GENERAL_TAB)
   #needed:
   clumap['basecluster_url'] = basecluster_url
   #name field
@@ -4061,7 +4018,7 @@
   gulm_ptr = model.getGULMPtr()
   if not gulm_ptr:
     #Fence Daemon Props
-    fencedaemon_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_FENCE_TAB
+    fencedaemon_url = '%stab=%s' % (prop_baseurl, PROP_FENCE_TAB)
     clumap['fencedaemon_url'] = fencedaemon_url
     fdp = model.getFenceDaemonPtr()
     pjd = fdp.getAttribute('post_join_delay')
@@ -4077,7 +4034,7 @@
 
     #-------------
     #if multicast
-    multicast_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_MCAST_TAB
+    multicast_url = '%stab=%s' % (prop_baseurl, PROP_MCAST_TAB)
     clumap['multicast_url'] = multicast_url
     #mcast addr
     is_mcast = model.isMulticast()
@@ -4100,12 +4057,12 @@
       if not n in gulm_lockservs:
         lockserv_list.append((n, False))
     clumap['gulm'] = True
-    clumap['gulm_url'] = prop_baseurl + PROPERTIES_TAB + '=' + PROP_GULM_TAB
+    clumap['gulm_url'] = '%stab=%s' % (prop_baseurl, PROP_GULM_TAB)
     clumap['gulm_lockservers'] = lockserv_list
 
   #-------------
   #quorum disk params
-  quorumd_url = prop_baseurl + PROPERTIES_TAB + "=" + PROP_QDISK_TAB
+  quorumd_url = '%stab=%s' % (prop_baseurl, PROP_QDISK_TAB)
   clumap['quorumd_url'] = quorumd_url
   is_quorumd = model.isQuorumd()
   clumap['is_quorumd'] = is_quorumd
@@ -4171,7 +4128,7 @@
   return clumap
 
 def getClustersInfo(self, status, req):
-  map = {}
+  clu_map = {}
   nodelist = list()
   svclist = list()
   clulist = list()
@@ -4190,28 +4147,33 @@
     return {}
   clu = clulist[0]
   if 'error' in clu:
-    map['error'] = True
+    clu_map['error'] = True
   clustername = clu['name']
   if clu['alias'] != "":
-    map['clusteralias'] = clu['alias']
+    clu_map['clusteralias'] = clu['alias']
   else:
-    map['clusteralias'] = clustername
-  map['clustername'] = clustername
+    clu_map['clusteralias'] = clustername
+  clu_map['clustername'] = clustername
   if clu['quorate'] == "true":
-    map['status'] = "Quorate"
-    map['running'] = "true"
+    clu_map['status'] = "Quorate"
+    clu_map['running'] = "true"
   else:
-    map['status'] = "Not Quorate"
-    map['running'] = "false"
-  map['votes'] = clu['votes']
-  map['minquorum'] = clu['minQuorum']
-
-  map['clucfg'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_CONFIG + "&" + CLUNAME + "=" + clustername
-
-  map['restart_url'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_PROCESS + "&" + CLUNAME + "=" + clustername + '&task=' + CLUSTER_RESTART
-  map['stop_url'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_PROCESS + "&" + CLUNAME + "=" + clustername + '&task=' + CLUSTER_STOP
-  map['start_url'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_PROCESS + "&" + CLUNAME + "=" + clustername + '&task=' + CLUSTER_START
-  map['delete_url'] = baseurl + "?" + PAGETYPE + "=" + CLUSTER_PROCESS + "&" + CLUNAME + "=" + clustername + '&task=' + CLUSTER_DELETE
+    clu_map['status'] = "Not Quorate"
+    clu_map['running'] = "false"
+  clu_map['votes'] = clu['votes']
+  clu_map['minquorum'] = clu['minQuorum']
+
+  clu_map['clucfg'] = '%s?pagetype=%s&clustername=%s' \
+	% (baseurl, CLUSTER_CONFIG, clustername)
+
+  clu_map['restart_url'] = '%s?pagetype=%s&clustername=%s&task=%s' \
+	% (baseurl, CLUSTER_PROCESS, clustername, CLUSTER_RESTART)
+  clu_map['stop_url'] = '%s?pagetype=%s&clustername=%s&task=%s' \
+	% (baseurl, CLUSTER_PROCESS, clustername, CLUSTER_STOP)
+  clu_map['start_url'] = '%s?pagetype=%s&clustername=%s&task=%s' \
+	% (baseurl, CLUSTER_PROCESS, clustername, CLUSTER_START)
+  clu_map['delete_url'] = '%s?pagetype=%s&clustername=%s&task=%s' \
+	% (baseurl, CLUSTER_PROCESS, clustername, CLUSTER_DELETE)
 
   svc_dict_list = list()
   for svc in svclist:
@@ -4220,23 +4182,26 @@
       svcname = svc['name']
       svc_dict['name'] = svcname
       svc_dict['srunning'] = svc['running']
+      svc_dict['servicename'] = svcname
 
       if svc.has_key('is_vm') and svc['is_vm'] is True:
         target_page = VM_CONFIG
       else:
         target_page = SERVICE
-      svcurl = baseurl + "?" + PAGETYPE + "=" + target_page + "&" + CLUNAME + "=" + clustername + "&servicename=" + svcname
-      svc_dict['servicename'] = svcname
+
+      svcurl = '%s?pagetype=%s&clustername=%s&servicename=%s' \
+		% (baseurl, target_page, clustername, svcname)
       svc_dict['svcurl'] = svcurl
       svc_dict_list.append(svc_dict)
-  map['currentservices'] = svc_dict_list
+  clu_map['currentservices'] = svc_dict_list
   node_dict_list = list()
 
   for item in nodelist:
     nmap = {}
     name = item['name']
     nmap['nodename'] = name
-    cfgurl = baseurl + "?" + PAGETYPE + "=" + NODE + "&" + CLUNAME + "=" + clustername + "&nodename=" + name
+    cfgurl = '%s?pagetype=%s&clustername=%s&nodename=%s' \
+		% (baseurl, NODE, clustername, name)
     nmap['configurl'] = cfgurl
     if item['clustered'] == "true":
       nmap['status'] = NODE_ACTIVE
@@ -4246,11 +4211,11 @@
       nmap['status'] = NODE_INACTIVE
     node_dict_list.append(nmap)
 
-  map['currentnodes'] = node_dict_list
-  return map
+  clu_map['currentnodes'] = node_dict_list
+  return clu_map
 
 def nodeLeave(self, rc, clustername, nodename_resolved):
-	path = str(CLUSTER_FOLDER_PATH + clustername + '/' + nodename_resolved)
+	path = '%s%s/%s' % (CLUSTER_FOLDER_PATH, clustername, nodename_resolved)
 
 	try:
 		nodefolder = self.restrictedTraverse(path)
@@ -4260,7 +4225,7 @@
 		luci_log.debug('NLO: node_leave_cluster err: %s' % str(e))
 		return None
 
-	objname = str(nodename_resolved + "____flag")
+	objname = '%s____flag' % nodename_resolved
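+	# A leftover "____flag" object means a previous batch job for this
+	# node may still be running.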
 	fnpresent = noNodeFlagsPresent(self, nodefolder, objname, nodename_resolved)
 
 	if fnpresent is None:
@@ -4273,25 +4238,25 @@
 			% nodename_resolved)
 		return None
 
-	batch_number, result = nodeLeaveCluster(rc)
+	batch_number, result = rq.nodeLeaveCluster(rc)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('NL3: nodeLeaveCluster error: batch_number and/or result is None')
 		return None
 
 	try:
-		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_LEAVE_CLUSTER, "Node \'%s\' leaving cluster" % nodename_resolved)
+		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_LEAVE_CLUSTER, 'Node "%s" leaving cluster "%s"' % (nodename_resolved, clustername))
 	except Exception, e:
 		luci_log.debug_verbose('NL4: failed to set flags: %s' % str(e))
 	return True
 
 def nodeJoin(self, rc, clustername, nodename_resolved):
-	batch_number, result = nodeJoinCluster(rc)
+	batch_number, result = rq.nodeJoinCluster(rc)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('NJ0: batch_number and/or result is None')
 		return None
 
 	try:
-		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_JOIN_CLUSTER, "Node \'%s\' joining cluster" % nodename_resolved)
+		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_JOIN_CLUSTER, 'Node "%s" joining cluster "%s"' % (nodename_resolved, clustername))
 	except Exception, e:
 		luci_log.debug_verbose('NJ1: failed to set flags: %s' % str(e))
 	return True
@@ -4362,10 +4327,16 @@
 		luci_log.debug_verbose('cluRestart0: clusterStop: %d errs' % snum_err)
 	jnum_err = clusterStart(self, model)
 	if jnum_err:
-		luci_log.debug_verbose('cluRestart0: clusterStart: %d errs' % jnum_err)
+		luci_log.debug_verbose('cluRestart1: clusterStart: %d errs' % jnum_err)
 	return snum_err + jnum_err
 
 def clusterDelete(self, model):
+	# Try to stop all the cluster nodes before deleting any.
+	num_errors = clusterStop(self, model, delete=False)
+	if num_errors > 0:
+		return None
+
+	# If the cluster is stopped, delete all of the nodes.
 	num_errors = clusterStop(self, model, delete=True)
 	try:
 		clustername = model.getClusterName()
@@ -4381,7 +4352,7 @@
 				% (clustername, str(e)))
 
 		try:
-			clusterfolder = self.restrictedTraverse(str(CLUSTER_FOLDER_PATH + clustername))
+			clusterfolder = self.restrictedTraverse('%s%s' % (CLUSTER_FOLDER_PATH, clustername))
 			if len(clusterfolder.objectItems()) < 1:
 				clusters = self.restrictedTraverse(str(CLUSTER_FOLDER_PATH))
 				clusters.manage_delObjects([clustername])
@@ -4394,19 +4365,19 @@
 			% (clustername, num_errors))
 
 def forceNodeReboot(self, rc, clustername, nodename_resolved):
-	batch_number, result = nodeReboot(rc)
+	batch_number, result = rq.nodeReboot(rc)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('FNR0: batch_number and/or result is None')
 		return None
 
 	try:
-		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_REBOOT, "Node \'%s\' is being rebooted" % nodename_resolved)
+		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_REBOOT, 'Node "%s" is being rebooted' % nodename_resolved)
 	except Exception, e:
 		luci_log.debug_verbose('FNR1: failed to set flags: %s' % str(e))
 	return True
 
 def forceNodeFence(self, clustername, nodename, nodename_resolved):
-	path = str(CLUSTER_FOLDER_PATH + clustername)
+	path = '%s%s' % (CLUSTER_FOLDER_PATH, clustername)
 
 	try:
 		clusterfolder = self.restrictedTraverse(path)
@@ -4460,13 +4431,13 @@
 	if not found_one:
 		return None
 
-	batch_number, result = nodeFence(rc, nodename)
+	batch_number, result = rq.nodeFence(rc, nodename)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('FNF3: batch_number and/or result is None')
 		return None
 
 	try:
-		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_FENCE, "Node \'%s\' is being fenced" % nodename_resolved)
+		set_node_flag(self, clustername, rc.hostname(), str(batch_number), NODE_FENCE, 'Node "%s" is being fenced' % nodename_resolved)
 	except Exception, e:
 		luci_log.debug_verbose('FNF4: failed to set flags: %s' % str(e))
 	return True
@@ -4481,7 +4452,7 @@
 		# Make sure we can find a second node before we hose anything.
 		found_one = False
 
-		path = str(CLUSTER_FOLDER_PATH + clustername)
+		path = '%s%s' % (CLUSTER_FOLDER_PATH, clustername)
 
 		try:
 			clusterfolder = self.restrictedTraverse(path)
@@ -4540,7 +4511,7 @@
 
 	# First, delete cluster.conf from node to be deleted.
 	# next, have node leave cluster.
-	batch_number, result = nodeLeaveCluster(rc, purge=True)
+	batch_number, result = rq.nodeLeaveCluster(rc, purge=True)
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('ND5: batch_number and/or result is None')
 		return None
@@ -4552,8 +4523,7 @@
 
 	if delete_cluster:
 		try:
-			set_node_flag(self, clustername, rc.hostname(), str(batch_number), CLUSTER_DELETE, "Deleting cluster \"%s\": Deleting node \'%s\'" \
-				% (clustername, nodename_resolved))
+			set_node_flag(self, clustername, rc.hostname(), str(batch_number), CLUSTER_DELETE, 'Deleting cluster "%s": Deleting node "%s"' % (clustername, nodename_resolved))
 		except Exception, e:
 			luci_log.debug_verbose('ND5a: failed to set flags: %s' % str(e))
 	else:
@@ -4589,13 +4559,13 @@
 			return None
 
 		# propagate the new cluster.conf via the second node
-		batch_number, result = setClusterConf(rc2, str(str_buf))
+		batch_number, result = rq.setClusterConf(rc2, str(str_buf))
 		if batch_number is None:
 			luci_log.debug_verbose('ND8: batch number is None after del node in NTP')
 			return None
 
 	# Now we need to delete the node from the DB
-	path = str(CLUSTER_FOLDER_PATH + clustername)
+	path = '%s%s' % (CLUSTER_FOLDER_PATH, clustername)
 	try:
 		clusterfolder = self.restrictedTraverse(path)
 		clusterfolder.manage_delObjects([nodename_resolved])
@@ -4663,12 +4633,12 @@
 		if not cluinfo[0] and not cluinfo[1]:
 			luci_log.debug('NTP5: node %s not in a cluster (expected %s)' \
 				% (nodename_resolved, clustername))
-			return (False, {'errors': [ 'Node %s reports it is not in a cluster.' % nodename_resolved ]})
+			return (False, {'errors': [ 'Node "%s" reports it is not in a cluster.' % nodename_resolved ]})
 
 		cname = clustername.lower()
 		if cname != cluinfo[0].lower() and cname != cluinfo[1].lower():
 			luci_log.debug('NTP6: node %s in unknown cluster %s:%s (expected %s)' % (nodename_resolved, cluinfo[0], cluinfo[1], clustername))
-			return (False, {'errors': [ 'Node %s reports it in cluster \"%s\". We expect it to be a member of cluster \"%s\"' % (nodename_resolved, cluinfo[0], clustername) ]})
+			return (False, {'errors': [ 'Node "%s" reports it is in cluster "%s". We expect it to be a member of cluster "%s".' % (nodename_resolved, cluinfo[0], clustername) ]})
 
 		if not rc.authed():
 			rc = None
@@ -4689,45 +4659,50 @@
 		if rc is None:
 			luci_log.debug('NTP7: node %s is not authenticated' \
 				% nodename_resolved)
-			return (False, {'errors': [ 'Node %s is not authenticated' % nodename_resolved ]})
+			return (False, {'errors': [ 'Node "%s" is not authenticated.' % nodename_resolved ]})
 
 	if task == NODE_LEAVE_CLUSTER:
 		if nodeLeave(self, rc, clustername, nodename_resolved) is None:
 			luci_log.debug_verbose('NTP8: nodeLeave failed')
-			return (False, {'errors': [ 'Node %s failed to leave cluster %s' % (nodename_resolved, clustername) ]})
+			return (False, {'errors': [ 'Node "%s" failed to leave cluster "%s".' % (nodename_resolved, clustername) ]})
 
 		response = request.RESPONSE
-		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
+		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+			% (request['URL'], NODES, clustername))
 	elif task == NODE_JOIN_CLUSTER:
 		if nodeJoin(self, rc, clustername, nodename_resolved) is None:
 			luci_log.debug_verbose('NTP9: nodeJoin failed')
-			return (False, {'errors': [ 'Node %s failed to join cluster %s' % (nodename_resolved, clustername) ]})
+			return (False, {'errors': [ 'Node "%s" failed to join cluster "%s".' % (nodename_resolved, clustername) ]})
 
 		response = request.RESPONSE
-		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
+		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+			% (request['URL'], NODES, clustername))
 	elif task == NODE_REBOOT:
 		if forceNodeReboot(self, rc, clustername, nodename_resolved) is None:
 			luci_log.debug_verbose('NTP10: nodeReboot failed')
-			return (False, {'errors': [ 'Node %s failed to reboot' \
+			return (False, {'errors': [ 'Node "%s" failed to reboot.' \
 				% nodename_resolved ]})
 
 		response = request.RESPONSE
-		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
+		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+			% (request['URL'], NODES, clustername))
 	elif task == NODE_FENCE:
 		if forceNodeFence(self, clustername, nodename, nodename_resolved) is None:
 			luci_log.debug_verbose('NTP11: nodeFence failed')
-			return (False, {'errors': [ 'Fencing of node %s failed.' \
+			return (False, {'errors': [ 'Fencing of node "%s" failed.' \
 				% nodename_resolved]})
 
 		response = request.RESPONSE
-		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
+		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+			% (request['URL'], NODES, clustername))
 	elif task == NODE_DELETE:
 		if nodeDelete(self, rc, model, clustername, nodename, nodename_resolved) is None:
 			luci_log.debug_verbose('NTP12: nodeDelete failed')
-			return (False, {'errors': [ 'Deletion of node %s from cluster %s failed.' % (nodename_resolved, clustername) ]})
+			return (False, {'errors': [ 'Deletion of node "%s" from cluster "%s" failed.' % (nodename_resolved, clustername) ]})
 
 		response = request.RESPONSE
-		response.redirect(request['URL'] + "?pagetype=" + NODES + "&clustername=" + clustername + '&busyfirst=true')
+		response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+			% (request['URL'], NODES, clustername))
 
 def getNodeInfo(self, model, status, request):
   infohash = {}
@@ -4770,17 +4745,26 @@
 
   #set up drop down links
   if nodestate == NODE_ACTIVE:
-    infohash['jl_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_LEAVE_CLUSTER + "&nodename=" + nodename + "&clustername=" + clustername
-    infohash['reboot_url'] = baseurl + "?pagetype=" +NODE_PROCESS + "&task=" + NODE_REBOOT + "&nodename=" + nodename + "&clustername=" + clustername
-    infohash['fence_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + nodename + "&clustername=" + clustername
-    infohash['delete_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_DELETE + "&nodename=" + nodename + "&clustername=" + clustername
+    infohash['jl_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+      % (baseurl, NODE_PROCESS, NODE_LEAVE_CLUSTER, nodename, clustername)
+    infohash['reboot_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+      % (baseurl, NODE_PROCESS, NODE_REBOOT, nodename, clustername)
+    infohash['fence_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+      % (baseurl, NODE_PROCESS, NODE_FENCE, nodename, clustername)
+    infohash['delete_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+      % (baseurl, NODE_PROCESS, NODE_DELETE, nodename, clustername)
   elif nodestate == NODE_INACTIVE:
-    infohash['jl_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_JOIN_CLUSTER + "&nodename=" + nodename + "&clustername=" + clustername
-    infohash['reboot_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_REBOOT + "&nodename=" + nodename + "&clustername=" + clustername
-    infohash['fence_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + nodename + "&clustername=" + clustername
-    infohash['delete_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_DELETE + "&nodename=" + nodename + "&clustername=" + clustername
+    infohash['jl_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+      % (baseurl, NODE_PROCESS, NODE_JOIN_CLUSTER, nodename, clustername)
+    infohash['reboot_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+      % (baseurl, NODE_PROCESS, NODE_REBOOT, nodename, clustername)
+    infohash['fence_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+      % (baseurl, NODE_PROCESS, NODE_FENCE, nodename, clustername)
+    infohash['delete_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+      % (baseurl, NODE_PROCESS, NODE_DELETE, nodename, clustername)
   else:
-    infohash['fence_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + nodename + "&clustername=" + clustername
+    infohash['fence_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+      % (baseurl, NODE_PROCESS, NODE_FENCE, nodename, clustername)
 
   #figure out current services running on this node
   svc_dict_list = list()
@@ -4788,7 +4772,8 @@
     if svc['nodename'] == nodename:
       svc_dict = {}
       svcname = svc['name']
-      svcurl = baseurl + "?" + PAGETYPE + "=" + SERVICE + "&" + CLUNAME + "=" + clustername + "&servicename=" + svcname
+      svcurl = '%s?pagetype=%s&clustername=%s&servicename=%s' \
+        % (baseurl, SERVICE, clustername, svcname)
       svc_dict['servicename'] = svcname
       svc_dict['svcurl'] = svcurl
       svc_dict_list.append(svc_dict)
@@ -4808,7 +4793,8 @@
     for fdom in fdoms:
       fdom_dict = {}
       fdom_dict['name'] = fdom.getName()
-      fdomurl = baseurl + "?" + PAGETYPE + "=" + FDOM_CONFIG + "&" + CLUNAME + "=" + clustername + "&fdomname=" + fdom.getName()
+      fdomurl = '%s?pagetype=%s&clustername=%s&fdomname=%s' \
+		% (baseurl, FDOM_CONFIG, clustername, fdom.getName())
       fdom_dict['fdomurl'] = fdomurl
       fdom_dict_list.append(fdom_dict)
   else:
@@ -4842,15 +4828,13 @@
       else:
         dlist.append("lock_gulmd")
       dlist.append("rgmanager")
-      dlist.append("clvmd")
-      dlist.append("gfs")
-      dlist.append("gfs2")
-      states = getDaemonStates(rc, dlist)
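+      # Ask the node's ricci agent for the current state of each
+      # daemon in dlist.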
+      states = rq.getDaemonStates(rc, dlist)
       infohash['d_states'] = states
   else:
     infohash['ricci_error'] = True
 
-  infohash['logurl'] = '/luci/logs/?nodename=' + nodename_resolved + '&clustername=' + clustername
+  infohash['logurl'] = '/luci/logs/?nodename=%s&clustername=%s' \
+	% (nodename_resolved, clustername)
   return infohash
 
 def getNodesInfo(self, model, status, req):
@@ -4886,50 +4870,60 @@
           return {}
 
   for item in nodelist:
-    map = {}
+    nl_map = {}
     name = item['name']
-    map['nodename'] = name
+    nl_map['nodename'] = name
     try:
-      map['gulm_lockserver'] = model.isNodeLockserver(name)
+      nl_map['gulm_lockserver'] = model.isNodeLockserver(name)
     except:
-      map['gulm_lockserver'] = False
+      nl_map['gulm_lockserver'] = False
 
     try:
       baseurl = req['URL']
     except:
       baseurl = '/luci/cluster/index_html'
 
-    cfgurl = baseurl + "?" + PAGETYPE + "=" + NODE + "&" + CLUNAME + "=" + clustername + "&nodename=" + name
-
-    map['configurl'] = cfgurl
-    map['fenceurl'] = cfgurl + "#fence"
+    cfgurl = '%s?pagetype=%s&clustername=%s&nodename=%s' \
+      % (baseurl, NODE, clustername, name)
+    nl_map['configurl'] = cfgurl
+    nl_map['fenceurl'] = '%s#fence' % cfgurl
     if item['clustered'] == "true":
-      map['status'] = NODE_ACTIVE
-      map['status_str'] = NODE_ACTIVE_STR
+      nl_map['status'] = NODE_ACTIVE
+      nl_map['status_str'] = NODE_ACTIVE_STR
     elif item['online'] == "false":
-      map['status'] = NODE_UNKNOWN
-      map['status_str'] = NODE_UNKNOWN_STR
+      nl_map['status'] = NODE_UNKNOWN
+      nl_map['status_str'] = NODE_UNKNOWN_STR
     else:
-      map['status'] = NODE_INACTIVE
-      map['status_str'] = NODE_INACTIVE_STR
+      nl_map['status'] = NODE_INACTIVE
+      nl_map['status_str'] = NODE_INACTIVE_STR
 
     nodename_resolved = resolve_nodename(self, clustername, name)
 
-    map['logurl'] = '/luci/logs?nodename=' + nodename_resolved + '&clustername=' + clustername
+    nl_map['logurl'] = '/luci/logs?nodename=%s&clustername=%s' \
+		% (nodename_resolved, clustername)
 
     #set up URLs for dropdown menu...
-    if map['status'] == NODE_ACTIVE:
-      map['jl_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_LEAVE_CLUSTER + "&nodename=" + name + "&clustername=" + clustername
-      map['reboot_url'] = baseurl + "?pagetype=" +NODE_PROCESS + "&task=" + NODE_REBOOT + "&nodename=" + name + "&clustername=" + clustername
-      map['fence_it_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + name + "&clustername=" + clustername
-      map['delete_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_DELETE + "&nodename=" + name + "&clustername=" + clustername
-    elif map['status'] == NODE_INACTIVE:
-      map['jl_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_JOIN_CLUSTER + "&nodename=" + name + "&clustername=" + clustername
-      map['reboot_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_REBOOT + "&nodename=" + name + "&clustername=" + clustername
-      map['fence_it_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + name + "&clustername=" + clustername
-      map['delete_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_DELETE + "&nodename=" + name + "&clustername=" + clustername
+    if nl_map['status'] == NODE_ACTIVE:
+      nl_map['jl_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+        % (baseurl, NODE_PROCESS, NODE_LEAVE_CLUSTER, name, clustername)
+      nl_map['reboot_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+        % (baseurl, NODE_PROCESS, NODE_REBOOT, name, clustername)
+      nl_map['fence_it_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+        % (baseurl, NODE_PROCESS, NODE_FENCE, name, clustername)
+      nl_map['delete_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+        % (baseurl, NODE_PROCESS, NODE_DELETE, name, clustername)
+    elif nl_map['status'] == NODE_INACTIVE:
+      nl_map['jl_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+        % (baseurl, NODE_PROCESS, NODE_JOIN_CLUSTER, name, clustername)
+      nl_map['reboot_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+        % (baseurl, NODE_PROCESS, NODE_REBOOT, name, clustername)
+      nl_map['fence_it_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+        % (baseurl, NODE_PROCESS, NODE_FENCE, name, clustername)
+      nl_map['delete_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+        % (baseurl, NODE_PROCESS, NODE_DELETE, name, clustername)
     else:
-      map['fence_it_url'] = baseurl + "?pagetype=" + NODE_PROCESS + "&task=" + NODE_FENCE + "&nodename=" + name + "&clustername=" + clustername
+      nl_map['fence_it_url'] = '%s?pagetype=%s&task=%s&nodename=%s&clustername=%s' \
+        % (baseurl, NODE_PROCESS, NODE_FENCE, name, clustername)
 
     #figure out current services running on this node
     svc_dict_list = list()
@@ -4937,29 +4931,31 @@
       if svc['nodename'] == name:
         svc_dict = {}
         svcname = svc['name']
-        svcurl = baseurl + "?" + PAGETYPE + "=" + SERVICE + "&" + CLUNAME + "=" + clustername + "&servicename=" + svcname
+        svcurl = '%s?pagetype=%s&clustername=%s&servicename=%s' \
+          % (baseurl, SERVICE, clustername, svcname)
         svc_dict['servicename'] = svcname
         svc_dict['svcurl'] = svcurl
         svc_dict_list.append(svc_dict)
 
-    map['currentservices'] = svc_dict_list
+    nl_map['currentservices'] = svc_dict_list
     #next is faildoms
 
     if model:
       fdoms = model.getFailoverDomainsForNode(name)
     else:
-      map['ricci_error'] = True
+      nl_map['ricci_error'] = True
       fdoms = list()
     fdom_dict_list = list()
     for fdom in fdoms:
       fdom_dict = {}
       fdom_dict['name'] = fdom.getName()
-      fdomurl = baseurl + "?" + PAGETYPE + "=" + FDOM_CONFIG + "&" + CLUNAME + "=" + clustername + "&fdomname=" + fdom.getName()
+      fdomurl = '%s?pagetype=%s&clustername=%s&fdomname=%s' \
+		% (baseurl, FDOM_CONFIG, clustername, fdom.getName())
       fdom_dict['fdomurl'] = fdomurl
       fdom_dict_list.append(fdom_dict)
 
-    map['fdoms'] = fdom_dict_list
-    resultlist.append(map)
+    nl_map['fdoms'] = fdom_dict_list
+    resultlist.append(nl_map)
 
   return resultlist
 
@@ -4968,17 +4964,17 @@
     luci_log.debug_verbose('getFence0: model is None')
     return {}
 
-  map = {}
+  fence_map = {}
   fencename = request['fencename']
   fencedevs = model.getFenceDevices()
   for fencedev in fencedevs:
     if fencedev.getName().strip() == fencename:
-      map = fencedev.getAttributes()
+      fence_map = fencedev.getAttributes()
       try:
-        map['pretty_name'] = FENCE_OPTS[fencedev.getAgentType()]
+        fence_map['pretty_name'] = FENCE_OPTS[fencedev.getAgentType()]
       except:
-        map['unknown'] = True
-        map['pretty_name'] = fencedev.getAgentType()
+        fence_map['unknown'] = True
+        fence_map['pretty_name'] = fencedev.getAgentType()
 
       nodes_used = list()
       nodes = model.getNodes()
@@ -4998,14 +4994,16 @@
               baseurl = request['URL']
               clustername = model.getClusterName()
               node_hash = {}
-              node_hash['nodename'] = node.getName().strip()
-              node_hash['nodeurl'] = baseurl + "?clustername=" + clustername + "&nodename=" + node.getName() + "&pagetype=" + NODE
+              cur_nodename = node.getName().strip()
+              node_hash['nodename'] = cur_nodename
+              node_hash['nodeurl'] = '%s?clustername=%s&nodename=%s&pagetype=%s' \
+                % (baseurl, clustername, cur_nodename, NODE)
               nodes_used.append(node_hash)
 
-      map['nodesused'] = nodes_used
-      return map
+      fence_map['nodesused'] = nodes_used
+      return fence_map
 
-  return map
+  return fence_map
 
 def getFDForInstance(fds, name):
   for fd in fds:
@@ -5034,15 +5032,15 @@
     luci_log.debug_verbose('getFenceInfo1: no request.URL')
     return {}
 
-  map = {}
+  fence_map = {}
   level1 = list() #First level fence devices
   level2 = list() #Second level fence devices
   shared1 = list() #List of available sharable fence devs not used in level1
   shared2 = list() #List of available sharable fence devs not used in level2
-  map['level1'] = level1
-  map['level2'] = level2
-  map['shared1'] = shared1
-  map['shared2'] = shared2
+  fence_map['level1'] = level1
+  fence_map['level2'] = level2
+  fence_map['shared1'] = shared1
+  fence_map['shared2'] = shared2
 
   major_num = 1
   minor_num = 100
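+  # Counters used to assign unique display ids: major_num for fence
+  # devices, minor_num (presumably) for per-node fence instances.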
@@ -5074,7 +5072,7 @@
   len_levels = len(levels)
 
   if len_levels == 0:
-    return map
+    return fence_map
 
   if len_levels >= 1:
     first_level = levels[0]
@@ -5139,7 +5137,8 @@
               fencedev['unknown'] = True
               fencedev['prettyname'] = fd.getAgentType()
             fencedev['isShared'] = True
-            fencedev['cfgurl'] = baseurl + "?clustername=" + clustername + "&fencename=" + fd.getName().strip() + "&pagetype=" + FENCEDEV
+            fencedev['cfgurl'] = '%s?clustername=%s&fencename=%s&pagetype=%s' \
+              % (baseurl, clustername, fd.getName().strip(), FENCEDEV)
             fencedev['id'] = str(major_num)
             major_num = major_num + 1
             inlist = list()
@@ -5159,7 +5158,7 @@
             level1.append(fencedev)
             last_kid_fd = fencedev
             continue
-    map['level1'] = level1
+    fence_map['level1'] = level1
 
     #level1 list is complete now, but it is still necessary to build shared1
     for fd in fds:
@@ -5181,7 +5180,7 @@
           shared_struct['unknown'] = True
           shared_struct['prettyname'] = agentname
         shared1.append(shared_struct)
-    map['shared1'] = shared1
+    fence_map['shared1'] = shared1
 
   #YUK: This next section violates the DRY rule, :-(
   if len_levels >= 2:
@@ -5246,7 +5245,8 @@
               fencedev['unknown'] = True
               fencedev['prettyname'] = fd.getAgentType()
             fencedev['isShared'] = True
-            fencedev['cfgurl'] = baseurl + "?clustername=" + clustername + "&fencename=" + fd.getName().strip() + "&pagetype=" + FENCEDEV
+            fencedev['cfgurl'] = '%s?clustername=%s&fencename=%s&pagetype=%s' \
+              % (baseurl, clustername, fd.getName().strip(), FENCEDEV)
             fencedev['id'] = str(major_num)
             major_num = major_num + 1
             inlist = list()
@@ -5266,7 +5266,7 @@
             level2.append(fencedev)
             last_kid_fd = fencedev
             continue
-    map['level2'] = level2
+    fence_map['level2'] = level2
 
     #level2 list is complete but like above, we need to build shared2
     for fd in fds:
@@ -5288,16 +5288,16 @@
           shared_struct['unknown'] = True
           shared_struct['prettyname'] = agentname
         shared2.append(shared_struct)
-    map['shared2'] = shared2
+    fence_map['shared2'] = shared2
 
-  return map
+  return fence_map
 
 def getFencesInfo(self, model, request):
-  map = {}
+  fences_map = {}
   if not model:
     luci_log.debug_verbose('getFencesInfo0: model is None')
-    map['fencedevs'] = list()
-    return map
+    fences_map['fencedevs'] = list()
+    return fences_map
 
   clustername = request['clustername']
   baseurl = request['URL']
@@ -5325,7 +5325,8 @@
 
       fencedev['agent'] = fd.getAgentType()
       #Add config url for this fencedev
-      fencedev['cfgurl'] = baseurl + "?clustername=" + clustername + "&fencename=" + fd.getName().strip() + "&pagetype=" + FENCEDEV
+      fencedev['cfgurl'] = '%s?clustername=%s&fencename=%s&pagetype=%s' \
+        % (baseurl, clustername, fd.getName().strip(), FENCEDEV)
 
       nodes = model.getNodes()
       for node in nodes:
@@ -5342,15 +5343,17 @@
               if found_duplicate == True:
                 continue
               node_hash = {}
-              node_hash['nodename'] = node.getName().strip()
-              node_hash['nodeurl'] = baseurl + "?clustername=" + clustername + "&nodename=" + node.getName() + "&pagetype=" + NODE
+              cur_nodename = node.getName().strip()
+              node_hash['nodename'] = cur_nodename
+              node_hash['nodeurl'] = '%s?clustername=%s&nodename=%s&pagetype=%s' \
+                % (baseurl, clustername, cur_nodename, NODE)
               nodes_used.append(node_hash)
 
       fencedev['nodesused'] = nodes_used
       fencedevs.append(fencedev)
 
-  map['fencedevs'] = fencedevs
-  return map
+  fences_map['fencedevs'] = fencedevs
+  return fences_map
 
 def getLogsForNode(self, request):
 	try:
@@ -5408,349 +5411,364 @@
 
 		return 'Luci is not authenticated to node %s. Please reauthenticate first.' % nodename
 
-	return getNodeLogs(rc)
+	return rq.getNodeLogs(rc)
 
 def getVMInfo(self, model, request):
-  map = {}
-  baseurl = request['URL']
-  clustername = request['clustername']
-  svcname = None
+	vm_map = {}
 
-  try:
-    svcname = request['servicename']
-  except KeyError, e:
-    svcname = None
-  urlstring = baseurl + "?" + clustername + "&pagetype=29"
-  if svcname != None:
-    urlstring = urlstring + "&servicename=" + svcname
+	try:
+		clustername = request['clustername']
+	except Exception, e:
+		try:
+			clustername = model.getClusterName()
+		except:
+			return vm_map
+
+	svcname = None
+	try:
+		svcname = request['servicename']
+	except Exception, e:
+		try:
+			svcname = request.form['servicename']
+		except Exception, e:
+			return vm_map
 
-  map['formurl'] = urlstring
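+	# pagetype 29: the VM configuration page type (hardcoded here).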
+	vm_map['formurl'] = '%s?clustername=%s&pagetype=29&servicename=%s' \
+		% (request['URL'], clustername, svcname)
 
-  try:
-    vmname = request['servicename']
-  except:
-    try:
-      vmname = request.form['servicename']
-    except:
-      luci_log.debug_verbose('servicename is missing from request')
-      return map
+	try:
+		vm = model.retrieveVMsByName(svcname)
+	except:
+		luci_log.debug('An error occurred while attempting to get VM %s' \
+			% svcname)
+		return vm_map
 
-  try:
-    vm = model.retrieveVMsByName(vmname)
-  except:
-    luci_log.debug('An error occurred while attempting to get VM %s' \
-      % vmname)
-    return map
-
-  attrs = vm.getAttributes()
-  keys = attrs.keys()
-  for key in keys:
-    map[key] = attrs[key]
-  return map
+	attrs = vm.getAttributes()
+	keys = attrs.keys()
+	for key in keys:
+		vm_map[key] = attrs[key]
+
+	return vm_map
 
 def isClusterBusy(self, req):
-  items = None
-  map = {}
-  isBusy = False
-  redirect_message = False
-  nodereports = list()
-  map['nodereports'] = nodereports
+	items = None
+	busy_map = {}
+	isBusy = False
+	redirect_message = False
+	nodereports = list()
+	busy_map['nodereports'] = nodereports
 
-  try:
-    cluname = req['clustername']
-  except KeyError, e:
-    try:
-      cluname = req.form['clustername']
-    except:
-      try:
-        cluname = req.form['clusterName']
-      except:
-        luci_log.debug_verbose('ICB0: No cluster name -- returning empty map')
-        return map
+	try:
+		cluname = req['clustername']
+	except KeyError, e:
+		try:
+			cluname = req.form['clustername']
+		except:
+			try:
+				cluname = req.form['clusterName']
+			except:
+				luci_log.debug_verbose('ICB0: No cluster name -- returning empty map')
+				return busy_map
 
-  path = str(CLUSTER_FOLDER_PATH + cluname)
-  try:
-    clusterfolder = self.restrictedTraverse(path)
-    if not clusterfolder:
-      raise Exception, 'clusterfolder is None'
-  except Exception, e:
-    luci_log.debug_verbose('ICB1: cluster %s [%s] folder missing: %s -- returning empty map' % (cluname, path, str(e)))
-    return map
-  except:
-    luci_log.debug_verbose('ICB2: cluster %s [%s] folder missing: returning empty map' % (cluname, path))
+	path = '%s%s' % (CLUSTER_FOLDER_PATH, cluname)
 
-  try:
-    items = clusterfolder.objectItems('ManagedSystem')
-    if not items or len(items) < 1:
-      luci_log.debug_verbose('ICB3: NOT BUSY: no flags at %s for cluster %s' \
-          % (cluname, path))
-      return map  #This returns an empty map, and should indicate not busy
-  except Exception, e:
-    luci_log.debug('ICB4: An error occurred while looking for cluster %s flags at path %s: %s' % (cluname, path, str(e)))
-    return map
-  except:
-    luci_log.debug('ICB5: An error occurred while looking for cluster %s flags at path %s' % (cluname, path))
-    return map
+	try:
+		clusterfolder = self.restrictedTraverse(path)
+		if not clusterfolder:
+			raise Exception, 'clusterfolder is None'
+	except Exception, e:
+		luci_log.debug_verbose('ICB1: cluster %s [%s] folder missing: %s -- returning empty map' % (cluname, path, str(e)))
+		return busy_map
+	except:
+		luci_log.debug_verbose('ICB2: cluster %s [%s] folder missing: returning empty map' % (cluname, path))
+		return busy_map
 
-  luci_log.debug_verbose('ICB6: %s is busy: %d flags' \
-      % (cluname, len(items)))
-  map['busy'] = "true"
-  #Ok, here is what is going on...if there is an item,
-  #we need to call the ricci_bridge and get a batch report.
-  #This report will tell us one of three things:
-  ##1) the batch task is complete...delete ManagedSystem and render
-  ##normal page
-  ##2) The batch task is NOT done, so meta refresh in 5 secs and try again
-  ##3) The ricci agent has no recollection of the task, so handle like 1 above
-  ###
-  ##Here is what we have to do:
-  ##the map should have two lists:
-  ##One list of non-cluster create tasks
-  ##and one of cluster create task structs
-  ##For each item in items, check if this is a cluster create tasktype
-  ##If so, call RC, and then call stan's batch report method
-  ##check for error...if error, report and then remove flag.
-  ##if no error, check if complete. If not complete, report status
-  ##If complete, report status and remove flag.
-
-  for item in items:
-    tasktype = item[1].getProperty(TASKTYPE)
-    if tasktype == CLUSTER_ADD or tasktype == NODE_ADD:
-      node_report = {}
-      node_report['isnodecreation'] = True
-      node_report['iserror'] = False  #Default value
-      node_report['desc'] = item[1].getProperty(FLAG_DESC)
-      batch_xml = None
-      ricci = item[0].split("____") #This removes the 'flag' suffix
+	try:
+		items = clusterfolder.objectItems('ManagedSystem')
+		if not items or len(items) < 1:
+			luci_log.debug_verbose('ICB3: NOT BUSY: no flags at %s for cluster %s' % (cluname, path))
+			# This returns an empty map, and indicates not busy
+			return busy_map
+	except Exception, e:
+		luci_log.debug('ICB4: An error occurred while looking for cluster %s flags at path %s: %s' % (cluname, path, str(e)))
+		return busy_map
+	except:
+		luci_log.debug('ICB5: An error occurred while looking for cluster %s flags at path %s' % (cluname, path))
+		return busy_map
+
+	luci_log.debug_verbose('ICB6: %s is busy: %d flags' \
+		% (cluname, len(items)))
+	busy_map['busy'] = 'true'
+
+	# Ok, here is what is going on...if there is an item,
+	# we need to call ricci to get a batch report.
+	# This report will tell us one of three things:
+	#
+	# #1) the batch task is complete...delete ManagedSystem and render
+	#     normal page
+	# #2) The batch task is NOT done, so meta refresh in 5 secs and try again
+	# #3) The ricci agent has no recollection of the task,
+	#     so handle like 1 above
+	###
+	#
+	# Here is what we have to do:
+	# the map should have two lists:
+	#  One list of non-cluster create tasks
+	#  and one of cluster create task structs
+	# For each item in items, check if this is a cluster create tasktype
+	# If so, call RC, and then call the batch report method
+	# check for error...if error, report and then remove flag.
+	# if no error, check if complete. If not complete, report status
+	# If complete, report status and remove flag.
 
-      luci_log.debug_verbose('ICB6A: using host %s for rc for item %s' \
-          % (ricci[0], item[0]))
-      try:
-        rc = RicciCommunicator(ricci[0])
-        if not rc:
-          rc = None
-          luci_log.debug_verbose('ICB6b: rc is none')
-      except Exception, e:
-        rc = None
-        luci_log.debug_verbose('ICB7: RC: %s: %s' \
-          % (cluname, str(e)))
+	for item in items:
+		tasktype = item[1].getProperty(TASKTYPE)
+		if tasktype == CLUSTER_ADD or tasktype == NODE_ADD:
+			node_report = {}
+			node_report['isnodecreation'] = True
+			node_report['iserror'] = False  #Default value
+			node_report['desc'] = item[1].getProperty(FLAG_DESC)
+			batch_xml = None
+			# This removes the 'flag' suffix
+			ricci = item[0].split('____')
 
-      batch_id = None
-      if rc is not None:
-        try:
-          batch_id = item[1].getProperty(BATCH_ID)
-          luci_log.debug_verbose('ICB8: got batch_id %s from %s' \
-              % (batch_id, item[0]))
-        except Exception, e:
-          try:
-            luci_log.debug_verbose('ICB8B: failed to get batch_id from %s: %s' \
-                % (item[0], str(e)))
-          except:
-            luci_log.debug_verbose('ICB8C: failed to get batch_id from %s' \
-              % item[0])
+			luci_log.debug_verbose('ICB6A: using host %s for rc for item %s' \
+				% (ricci[0], item[0]))
 
-        if batch_id is not None:
-          try:
-            batch_xml = rc.batch_report(batch_id)
-            if batch_xml is not None:
-              luci_log.debug_verbose('ICB8D: batch_xml for %s from batch_report is not None -- getting batch status' % batch_id)
-              (creation_status, total) = batch_status(batch_xml)
-              try:
-                luci_log.debug_verbose('ICB8E: batch status returned (%d,%d)' \
-                    % (creation_status, total))
-              except:
-                luci_log.debug_verbose('ICB8F: error logging batch status return')
-            else:
-              luci_log.debug_verbose('ICB9: batch_xml for cluster is None')
-          except Exception, e:
-            luci_log.debug_verbose('ICB9A: error getting batch_xml from rc.batch_report: %s' % str(e))
-            creation_status = RICCI_CONNECT_FAILURE  #No contact with ricci (-1000)
-            batch_xml = "bloody_failure" #set to avoid next if statement
-
-      if rc is None or batch_id is None:
-          luci_log.debug_verbose('ICB12: unable to connect to a ricci agent for cluster %s to get batch status')
-          creation_status = RICCI_CONNECT_FAILURE  #No contact with ricci (-1000)
-          batch_xml = "bloody_bloody_failure" #set to avoid next if statement
-
-      if batch_xml is None:  #The job is done and gone from queue
-        if redirect_message == False: #We have not displayed this message yet
-          node_report['desc'] = REDIRECT_MSG
-          node_report['iserror'] = True
-          node_report['errormessage'] = ""
-          nodereports.append(node_report)
-          redirect_message = True
+			try:
+				rc = RicciCommunicator(ricci[0])
+				if not rc:
+					rc = None
+					luci_log.debug_verbose('ICB6b: rc is none')
+			except Exception, e:
+				rc = None
+				luci_log.debug_verbose('ICB7: RC: %s: %s' % (cluname, str(e)))
 
-        luci_log.debug_verbose('ICB13: batch job is done -- deleting %s' % item[0])
-        clusterfolder.manage_delObjects([item[0]])
-        continue
+			batch_id = None
+			if rc is not None:
+				try:
+					batch_id = item[1].getProperty(BATCH_ID)
+					luci_log.debug_verbose('ICB8: got batch_id %s from %s' \
+						% (batch_id, item[0]))
+				except Exception, e:
+					try:
+						luci_log.debug_verbose('ICB8B: failed to get batch_id from %s: %s' % (item[0], str(e)))
+					except:
+						luci_log.debug_verbose('ICB8C: failed to get batch_id from %s' % item[0])
 
-      del_db_obj = False
-      if creation_status < 0:  #an error was encountered
-        luci_log.debug_verbose('ICB13a: %s: CS %d for %s' % (cluname, creation_status, ricci[0]))
-        if creation_status == RICCI_CONNECT_FAILURE:
-          laststatus = item[1].getProperty(LAST_STATUS)
-          if laststatus == INSTALL_TASK: #This means maybe node is rebooting
-            node_report['statusindex'] = INSTALL_TASK
-            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + POSSIBLE_REBOOT_MESSAGE
-          elif laststatus == 0:
-            node_report['statusindex'] = 0
-            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_INSTALL
-          elif laststatus == DISABLE_SVC_TASK:
-            node_report['statusindex'] = DISABLE_SVC_TASK
-            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_CFG
-          elif laststatus == REBOOT_TASK:
-            node_report['statusindex'] = REBOOT_TASK
-            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_CFG
-          elif laststatus == SEND_CONF:
-            node_report['statusindex'] = SEND_CONF
-            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_JOIN
-          elif laststatus == ENABLE_SVC_TASK:
-            node_report['statusindex'] = ENABLE_SVC_TASK
-            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + PRE_JOIN
-          else:
-            node_report['statusindex'] = 0
-            node_report['statusmessage'] = RICCI_CONNECT_FAILURE_MSG + ' Install is in an unknown state.'
-          nodereports.append(node_report)
-          continue
-        elif creation_status == -(INSTALL_TASK):
-          node_report['iserror'] = True
-          (err_code, err_msg) = extract_module_status(batch_xml, INSTALL_TASK)
-          node_report['errormessage'] = CLUNODE_CREATE_ERRORS[INSTALL_TASK] + err_msg
-          del_db_obj = True
-        elif creation_status == -(DISABLE_SVC_TASK):
-          node_report['iserror'] = True
-          (err_code, err_msg) = extract_module_status(batch_xml, DISABLE_SVC_TASK)
-          node_report['errormessage'] = CLUNODE_CREATE_ERRORS[DISABLE_SVC_TASK] + err_msg
-          del_db_obj = True
-        elif creation_status == -(REBOOT_TASK):
-          node_report['iserror'] = True
-          (err_code, err_msg) = extract_module_status(batch_xml, REBOOT_TASK)
-          node_report['errormessage'] = CLUNODE_CREATE_ERRORS[REBOOT_TASK] + err_msg
-          del_db_obj = True
-        elif creation_status == -(SEND_CONF):
-          node_report['iserror'] = True
-          (err_code, err_msg) = extract_module_status(batch_xml, SEND_CONF)
-          node_report['errormessage'] = CLUNODE_CREATE_ERRORS[SEND_CONF] + err_msg
-        elif creation_status == -(ENABLE_SVC_TASK):
-          node_report['iserror'] = True
-          (err_code, err_msg) = extract_module_status(batch_xml, DISABLE_SVC_TASK)
-          node_report['errormessage'] = CLUNODE_CREATE_ERRORS[ENABLE_SVC_TASK] + err_msg
-        elif creation_status == -(START_NODE):
-          node_report['iserror'] = True
-          (err_code, err_msg) = extract_module_status(batch_xml, START_NODE)
-          node_report['errormessage'] = CLUNODE_CREATE_ERRORS[START_NODE]
-        else:
-          del_db_obj = True
-          node_report['iserror'] = True
-          node_report['errormessage'] = CLUNODE_CREATE_ERRORS[0]
+				if batch_id is not None:
+					try:
+						batch_xml = rc.batch_report(batch_id)
+						if batch_xml is not None:
+							luci_log.debug_verbose('ICB8D: batch_xml for %s from batch_report is not None -- getting batch status' % batch_id)
+							(creation_status, total) = batch_status(batch_xml)
+							try:
+								luci_log.debug_verbose('ICB8E: batch status returned (%d,%d)' % (creation_status, total))
+							except:
+								luci_log.debug_verbose('ICB8F: error logging batch status return')
+						else:
+							luci_log.debug_verbose('ICB9: batch_xml for cluster is None')
+					except Exception, e:
+						luci_log.debug_verbose('ICB9A: error getting batch_xml from rc.batch_report: %s' % str(e))
+						# No contact with ricci (-1000)
+						creation_status = RICCI_CONNECT_FAILURE
+						# set to avoid next if statement
+						batch_xml = 'bloody_failure'
+
+			if rc is None or batch_id is None:
+				luci_log.debug_verbose('ICB12: unable to connect to a ricci agent for cluster %s to get batch status' % cluname)
+				# No contact with ricci (-1000)
+				creation_status = RICCI_CONNECT_FAILURE
+				# set to avoid next if statement
+				batch_xml = 'bloody_bloody_failure'
+
+			if batch_xml is None:
+				# The job is done and gone from queue
+				if redirect_message == False:
+					# We have not displayed this message yet
+					node_report['desc'] = REDIRECT_MSG
+					node_report['iserror'] = True
+					node_report['errormessage'] = ""
+					nodereports.append(node_report)
+					redirect_message = True
 
-        try:
-          if del_db_obj is True:
-            luci_log.debug_verbose('ICB13a: %s node creation failed for %s: %d: deleting DB entry' % (cluname, ricci[0], creation_status))
-            clusterfolder.manage_delObjects([ricci[0]])
-          clusterfolder.manage_delObjects([item[0]])
-        except Exception, e:
-          luci_log.debug_verbose('ICB14: delObjects: %s: %s' \
-            % (item[0], str(e)))
+				luci_log.debug_verbose('ICB13: batch job is done -- deleting %s' % item[0])
+				clusterfolder.manage_delObjects([item[0]])
+				continue
 
-        nodereports.append(node_report)
-        continue
-      else:  #either batch completed successfully, or still running
-        if creation_status == total:  #finished...
-          map['busy'] = "true"
-          node_report['statusmessage'] = "Node created successfully" + REDIRECT_MSG
-          node_report['statusindex'] = creation_status
-          nodereports.append(node_report)
-          try:
-              clusterfolder.manage_delObjects([item[0]])
-          except Exception, e:
-              luci_log.info('ICB15: Unable to delete %s: %s' % (item[0], str(e)))
-          continue
-        else:
-          map['busy'] = "true"
-          isBusy = True
-          node_report['statusmessage'] = "Node still being created"
-          node_report['statusindex'] = creation_status
-          nodereports.append(node_report)
-          propslist = list()
-          propslist.append(LAST_STATUS)
-          try:
-            item[1].manage_delProperties(propslist)
-            item[1].manage_addProperty(LAST_STATUS, creation_status, "int")
-          except Exception, e:
-            luci_log.debug_verbose('ICB16: last_status err: %s %d: %s' \
-              % (item[0], creation_status, str(e)))
-          continue
+			del_db_obj = False
+			if creation_status < 0:
+				# an error was encountered
+				luci_log.debug_verbose('ICB13a: %s: CS %d for %s' % (cluname, creation_status, ricci[0]))
+				if creation_status == RICCI_CONNECT_FAILURE:
+					laststatus = item[1].getProperty(LAST_STATUS)
+
+					if laststatus == INSTALL_TASK:
+						# The node may be rebooting
+						node_report['statusindex'] = INSTALL_TASK
+						node_report['statusmessage'] = '%s%s' % (RICCI_CONNECT_FAILURE_MSG, POSSIBLE_REBOOT_MESSAGE)
+					elif laststatus == 0:
+						# Installation has not started yet
+						node_report['statusindex'] = 0
+						node_report['statusmessage'] = '%s%s' % (RICCI_CONNECT_FAILURE_MSG, PRE_INSTALL)
+					elif laststatus == DISABLE_SVC_TASK:
+						node_report['statusindex'] = DISABLE_SVC_TASK
+						node_report['statusmessage'] = '%s%s' % (RICCI_CONNECT_FAILURE_MSG, PRE_CFG)
+					elif laststatus == REBOOT_TASK:
+						node_report['statusindex'] = REBOOT_TASK
+						node_report['statusmessage'] = '%s%s' % (RICCI_CONNECT_FAILURE_MSG, PRE_CFG)
+					elif laststatus == SEND_CONF:
+						node_report['statusindex'] = SEND_CONF
+						node_report['statusmessage'] = '%s%s' % (RICCI_CONNECT_FAILURE_MSG, PRE_JOIN)
+					elif laststatus == ENABLE_SVC_TASK:
+						node_report['statusindex'] = ENABLE_SVC_TASK
+						node_report['statusmessage'] = '%s%s' % (RICCI_CONNECT_FAILURE_MSG, PRE_JOIN)
+					else:
+						node_report['statusindex'] = 0
+						node_report['statusmessage'] = '%s Install is in an unknown state.' % RICCI_CONNECT_FAILURE_MSG
+					nodereports.append(node_report)
+					continue
+				elif creation_status == -(INSTALL_TASK):
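+					# extract_module_status pulls the failed task's error code
+					# and message out of the batch status XML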
+					node_report['iserror'] = True
+					(err_code, err_msg) = extract_module_status(batch_xml, INSTALL_TASK)
+					node_report['errormessage'] = CLUNODE_CREATE_ERRORS[INSTALL_TASK] % err_msg
+					del_db_obj = True
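+					# Failures this early (install, service disable, reboot)
+					# also remove the node's database entries below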
+				elif creation_status == -(DISABLE_SVC_TASK):
+					node_report['iserror'] = True
+					(err_code, err_msg) = extract_module_status(batch_xml, DISABLE_SVC_TASK)
+					node_report['errormessage'] = CLUNODE_CREATE_ERRORS[DISABLE_SVC_TASK] % err_msg
+					del_db_obj = True
+				elif creation_status == -(REBOOT_TASK):
+					node_report['iserror'] = True
+					(err_code, err_msg) = extract_module_status(batch_xml, REBOOT_TASK)
+					node_report['errormessage'] = CLUNODE_CREATE_ERRORS[REBOOT_TASK] % err_msg
+					del_db_obj = True
+				elif creation_status == -(SEND_CONF):
+					node_report['iserror'] = True
+					(err_code, err_msg) = extract_module_status(batch_xml, SEND_CONF)
+					node_report['errormessage'] = CLUNODE_CREATE_ERRORS[SEND_CONF] % err_msg
+				elif creation_status == -(ENABLE_SVC_TASK):
+					node_report['iserror'] = True
+					(err_code, err_msg) = extract_module_status(batch_xml, ENABLE_SVC_TASK)
+					node_report['errormessage'] = CLUNODE_CREATE_ERRORS[ENABLE_SVC_TASK] % err_msg
+				elif creation_status == -(START_NODE):
+					node_report['iserror'] = True
+					(err_code, err_msg) = extract_module_status(batch_xml, START_NODE)
+					node_report['errormessage'] = CLUNODE_CREATE_ERRORS[START_NODE] % err_msg
+				else:
+					del_db_obj = True
+					node_report['iserror'] = True
+					node_report['errormessage'] = CLUNODE_CREATE_ERRORS[0] % ''
 
-    else:
-      node_report = {}
-      node_report['isnodecreation'] = False
-      ricci = item[0].split("____") #This removes the 'flag' suffix
+				try:
+					if del_db_obj is True:
+						luci_log.debug_verbose('ICB13a: %s node creation failed for %s: %d: deleting DB entry' % (cluname, ricci[0], creation_status))
+						clusterfolder.manage_delObjects([ricci[0]])
+						clusterfolder.manage_delObjects([item[0]])
+				except Exception, e:
+					luci_log.debug_verbose('ICB14: delObjects: %s: %s' \
+						% (item[0], str(e)))
 
-      try:
-        rc = RicciCommunicator(ricci[0])
-      except Exception, e:
-        rc = None
-        finished = -1
-        err_msg = ''
-        luci_log.debug_verbose('ICB15: ricci error: %s: %s' \
-          % (ricci[0], str(e)))
-
-      if rc is not None:
-        batch_res = checkBatch(rc, item[1].getProperty(BATCH_ID))
-        finished = batch_res[0]
-        err_msg = batch_res[1]
-
-      if finished == True or finished == -1:
-        if finished == -1:
-          flag_msg = err_msg
-        else:
-          flag_msg = ''
-        flag_desc = item[1].getProperty(FLAG_DESC)
-        if flag_desc is None:
-          node_report['desc'] = flag_msg + REDIRECT_MSG
-        else:
-          node_report['desc'] = flag_msg + flag_desc + REDIRECT_MSG
-        nodereports.append(node_report)
-        try:
-            clusterfolder.manage_delObjects([item[0]])
-        except Exception, e:
-            luci_log.info('ICB16: Unable to delete %s: %s' % (item[0], str(e)))
-      else:
-        node_report = {}
-        map['busy'] = "true"
-        isBusy = True
-        node_report['desc'] = item[1].getProperty(FLAG_DESC)
-        nodereports.append(node_report)
-
-  if isBusy:
-    part1 = req['ACTUAL_URL']
-    part2 = req['QUERY_STRING']
-
-    dex = part2.find("&busyfirst")
-    if dex != (-1):
-      tmpstr = part2[:dex] #This strips off busyfirst var
-      part2 = tmpstr
-      ###FIXME - The above assumes that the 'busyfirst' query var is at the
-      ###end of the URL...
-    wholeurl = part1 + "?" + part2
-    map['refreshurl'] = "5; url=" + wholeurl
-    req['specialpagetype'] = "1"
-  else:
-    try:
-      query = req['QUERY_STRING'].replace('&busyfirst=true', '')
-      map['refreshurl'] = '5; url=' + req['ACTUAL_URL'] + '?' + query
-    except:
-      map['refreshurl'] = '5; url=/luci/cluster?pagetype=3'
-  return map
+				nodereports.append(node_report)
+				continue
+			else:
+				# either the batch completed successfully, or it's still running
+				if creation_status == total:
+					#finished...
+					busy_map['busy'] = 'true'
+					node_report['statusmessage'] = 'Node created successfully. %s' % REDIRECT_MSG
+					node_report['statusindex'] = creation_status
+					nodereports.append(node_report)
+					try:
+						clusterfolder.manage_delObjects([item[0]])
+					except Exception, e:
+						luci_log.info('ICB15: Unable to delete %s: %s' \
+							% (item[0], str(e)))
+					continue
+				else:
+					busy_map['busy'] = 'true'
+					isBusy = True
+					node_report['statusmessage'] = 'Node still being created'
+					node_report['statusindex'] = creation_status
+					nodereports.append(node_report)
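+					# Record the current status so a later ricci connection
+					# failure can still report how far creation progressed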
+					propslist = list()
+					propslist.append(LAST_STATUS)
+					try:
+						item[1].manage_delProperties(propslist)
+						item[1].manage_addProperty(LAST_STATUS, creation_status, 'int')
+					except Exception, e:
+						luci_log.debug_verbose('ICB16: last_status err: %s %d: %s' % (item[0], creation_status, str(e)))
+					continue
+		else:
+			node_report = {}
+			node_report['isnodecreation'] = False
+			# This removes the 'flag' suffix
+			ricci = item[0].split('____')
+
+			try:
+				rc = RicciCommunicator(ricci[0])
+			except Exception, e:
+				rc = None
+				finished = -1
+				err_msg = ''
+				luci_log.debug_verbose('ICB15: ricci error: %s: %s' \
+					% (ricci[0], str(e)))
+
+			if rc is not None:
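+				# checkBatch returns a (finished, error_message) pair where
+				# finished is True when the batch is done and -1 on error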
+				batch_res = rq.checkBatch(rc, item[1].getProperty(BATCH_ID))
+				finished = batch_res[0]
+				err_msg = batch_res[1]
+
+			if finished == True or finished == -1:
+				if finished == -1:
+					flag_msg = err_msg
+				else:
+					flag_msg = ''
+				flag_desc = item[1].getProperty(FLAG_DESC)
+				if flag_desc is None:
+					node_report['desc'] = '%s%s' % (flag_msg, REDIRECT_MSG)
+				else:
+					node_report['desc'] = '%s%s%s' % (flag_msg, flag_desc, REDIRECT_MSG)
+				nodereports.append(node_report)
+
+				try:
+					clusterfolder.manage_delObjects([item[0]])
+				except Exception, e:
+					luci_log.info('ICB16: Unable to delete %s: %s' \
+						% (item[0], str(e)))
+			else:
+				node_report = {}
+				busy_map['busy'] = 'true'
+				isBusy = True
+				node_report['desc'] = item[1].getProperty(FLAG_DESC)
+				nodereports.append(node_report)
+
+	if isBusy:
+		part1 = req['ACTUAL_URL']
+		part2 = req['QUERY_STRING']
+
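+		# Strip any 'busyfirst' flag from the query string, then build a
+		# refresh URL ('5; url=...') so the status page reloads every 5s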
+		dex = part2.find('&busyfirst')
+		if dex != (-1):
+			# This strips off the busyfirst query variable
+			part2 = part2[:dex]
+		###FIXME - The above assumes that the 'busyfirst' query var is at the
+		###end of the URL...
+		busy_map['refreshurl'] = '5; url=%s?%s' % (part1, part2)
+		req['specialpagetype'] = '1'
+	else:
+		try:
+			query = req['QUERY_STRING'].replace('&busyfirst=true', '')
+			busy_map['refreshurl'] = '5; url=%s?%s' % (req['ACTUAL_URL'], query)
+		except:
+			busy_map['refreshurl'] = '5; url=/luci/cluster?pagetype=3'
+	return busy_map
 
 def getClusterOS(self, rc):
-	map = {}
+	clu_map = {}
 
 	try:
 		os_str = resolveOSType(rc.os())
-		map['os'] = os_str
-		map['isVirtualized'] = rc.dom0()
+		clu_map['os'] = os_str
+		clu_map['isVirtualized'] = rc.dom0()
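+		# rc.dom0() indicates whether the agent host is a virtualization
+		# host (e.g. Xen dom0)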
 	except:
 		# default to rhel5 if something crazy happened.
 		try:
@@ -5759,9 +5777,9 @@
 			# this can throw an exception if the original exception
 			# is caused by rc being None or stale.
 			pass
-		map['os'] = 'rhel5'
-		map['isVirtualized'] = False
-	return map
+		clu_map['os'] = 'rhel5'
+		clu_map['isVirtualized'] = False
+	return clu_map
 
 def getResourcesInfo(model, request):
 	resList = list()
@@ -5778,13 +5796,17 @@
 
 	for item in model.getResources():
 		itemmap = {}
-		itemmap['name'] = item.getName()
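+		# Strip stray whitespace so the name embeds cleanly in the URLs below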
+		cur_itemname = item.getName().strip()
+		itemmap['name'] = cur_itemname
 		itemmap['attrs'] = item.attr_hash
 		itemmap['type'] = item.resource_type
 		itemmap['tag_name'] = item.TAG_NAME
-		itemmap['cfgurl'] = baseurl + "?" + "clustername=" + cluname + "&resourcename=" + item.getName() + "&pagetype=" + RESOURCE_CONFIG
-		itemmap['url'] = baseurl + "?" + "clustername=" + cluname + "&resourcename=" + item.getName() + "&pagetype=" + RESOURCE
-		itemmap['delurl'] = baseurl + "?" + "clustername=" + cluname + "&resourcename=" + item.getName() + "&pagetype=" + RESOURCE_REMOVE
+		itemmap['cfgurl'] = '%s?clustername=%s&resourcename=%s&pagetype=%s' \
+			% (baseurl, cluname, cur_itemname, RESOURCE_CONFIG)
+		itemmap['url'] = '%s?clustername=%s&resourcename=%s&pagetype=%s' \
+			% (baseurl, cluname, cur_itemname, RESOURCE)
+		itemmap['delurl'] = '%s?clustername=%s&resourcename=%s&pagetype=%s' \
+			% (baseurl, cluname, cur_itemname, RESOURCE_REMOVE)
 		resList.append(itemmap)
 	return resList
 
@@ -5833,14 +5855,17 @@
 		if res.getName() == name:
 			try:
 				resMap = {}
-				resMap['name'] = res.getName()
+				cur_resname = res.getName().strip()
+				resMap['name'] = cur_resname
 				resMap['type'] = res.resource_type
 				resMap['tag_name'] = res.TAG_NAME
 				resMap['attrs'] = res.attr_hash
-				resMap['cfgurl'] = baseurl + "?" + "clustername=" + cluname + "&resourcename=" + res.getName() + "&pagetype=" + RESOURCE_CONFIG
+				resMap['cfgurl'] = '%s?clustername=%s&resourcename=%s&pagetype=%s' \
+					% (baseurl, cluname, cur_resname, RESOURCE_CONFIG)
 				return resMap
 			except:
 				continue
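+	# No resource with the given name was found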
+	return {}
 
 def delService(self, request):
 	errstr = 'An error occurred while attempting to set the new cluster.conf'
@@ -5894,7 +5919,7 @@
 		model.deleteService(name)
 	except Exception, e:
 		luci_log.debug_verbose('delService5: Unable to find a service named %s for cluster %s' % (name, clustername))
-		return (False, {'errors': [ '%s: error removing service %s.' % (errstr, name) ]})
+		return (False, {'errors': [ '%s: error removing service "%s".' % (errstr, name) ]})
 
 	try:
 		model.setModified(True)
@@ -5904,20 +5929,21 @@
 	except Exception, e:
 		luci_log.debug_verbose('delService6: exportModelAsString failed: %s' \
 			% str(e))
-		return (False, {'errors': [ '%s: error removing service %s.' % (errstr, name) ]})
+		return (False, {'errors': [ '%s: error removing service "%s".' % (errstr, name) ]})
 
-	batch_number, result = setClusterConf(rc, str(conf))
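+	# rq.setClusterConf queues a batch job on the ricci agent to install
+	# the new cluster.conf; the batch number identifies the job so its
+	# progress can be tracked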
+	batch_number, result = rq.setClusterConf(rc, str(conf))
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('delService7: missing batch and/or result')
-		return (False, {'errors': [ '%s: error removing service %s.' % (errstr, name) ]})
+		return (False, {'errors': [ '%s: error removing service "%s".' % (errstr, name) ]})
 
 	try:
-		set_node_flag(self, clustername, ragent, str(batch_number), SERVICE_DELETE, "Removing service \'%s\'" % name)
+		set_node_flag(self, clustername, ragent, str(batch_number), SERVICE_DELETE, 'Removing service "%s"' % name)
 	except Exception, e:
 		luci_log.debug_verbose('delService8: failed to set flags: %s' % str(e))
 
 	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + SERVICES + "&clustername=" + clustername + '&busyfirst=true')
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (request['URL'], SERVICES, clustername))
 
 def delResource(self, rc, request):
 	errstr = 'An error occurred while attempting to set the new cluster.conf'
@@ -5939,7 +5965,7 @@
 
 	if name is None:
 		luci_log.debug_verbose('delResource1: no resource name')
-		return errstr + ': no resource name was provided.'
+		return '%s: no resource name was provided.' % errstr
 
 	clustername = None
 	try:
@@ -5952,7 +5978,7 @@
 
 	if clustername is None:
 		luci_log.debug_verbose('delResource2: no cluster name for %s' % name)
-		return errstr + ': could not determine the cluster name.'
+		return '%s: could not determine the cluster name.' % errstr
 
 	try:
 		ragent = rc.hostname()
@@ -5960,7 +5986,7 @@
 			raise Exception, 'unable to determine the hostname of the ricci agent'
 	except Exception, e:
 		luci_log.debug_verbose('delResource3: %s: %s' % (errstr, str(e)))
-		return errstr + ': could not determine the ricci agent hostname'
+		return '%s: could not determine the ricci agent hostname.' % errstr
 
 	resPtr = model.getResourcesPtr()
 	resources = resPtr.getChildren()
@@ -5974,7 +6000,7 @@
 
 	if not found:
 		luci_log.debug_verbose('delResource4: cant find res %s' % name)
-		return errstr + ': the specified resource was not found.'
+		return '%s: the specified resource was not found.' % errstr
 
 	try:
 		model.setModified(True)
@@ -5986,1455 +6012,90 @@
 			% str(e))
 		return errstr
 
-	batch_number, result = setClusterConf(rc, str(conf))
+	batch_number, result = rq.setClusterConf(rc, str(conf))
 	if batch_number is None or result is None:
 		luci_log.debug_verbose('delResource6: missing batch and/or result')
 		return errstr
 
 	try:
-		set_node_flag(self, clustername, ragent, str(batch_number), RESOURCE_REMOVE, "Removing resource \'%s\'" % request['resourcename'])
+		set_node_flag(self, clustername, ragent, str(batch_number), RESOURCE_REMOVE, 'Removing resource "%s"' % request['resourcename'])
 	except Exception, e:
 		luci_log.debug_verbose('delResource7: failed to set flags: %s' % str(e))
 
 	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + RESOURCES + "&clustername=" + clustername + '&busyfirst=true')
-
-def addIp(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addIp0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addIp1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank.'
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (request['URL'], RESOURCES, clustername))
 
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e1:
-				errors.append('No IP resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this IP resource.')
-	else:
-		try:
-			res = Ip()
-			if not res:
-				raise Exception, 'res is None'
-		except Exception, e:
-			errors.append('An error occurred while creating an IP resource.')
-			luci_log.debug_verbose('addIp3: %s' % str(e))
+def addResource(self, request, model, res):
+	clustername = model.getClusterName()
+	if not clustername:
+		luci_log.debug_verbose('addResource0: no cluname from mb')
+		return 'Unable to determine cluster name'
 
-	if not res:
-		return [None, None, errors]
+	rc = getRicciAgent(self, clustername)
+	if not rc:
+		luci_log.debug_verbose('addResource1: %s' % clustername)
+		return 'Unable to find a ricci agent for the %s cluster' % clustername
 
 	try:
-		addr = form['ip_address'].strip()
-		if not addr:
-			raise KeyError, 'ip_address is blank'
-		# XXX: validate IP addr
-		res.addAttribute('address', addr)
-	except KeyError, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addIp4: %s' % err)
-
-	if 'monitorLink' in form:
-		res.addAttribute('monitor_link', '1')
-	else:
-		res.addAttribute('monitor_link', '0')
-
-	if len(errors) > 1:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addFs(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addFs0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addFs1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank.'
-
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e1:
-				errors.append('No filesystem resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this filesystem resource.')
-			luci_log.debug_verbose('addFs3: %s' % str(e))
-	else:
-		try:
-			res = Fs()
-			if not res:
-				raise Exception, 'res is None'
-		except Exception, e:
-			errors.append('An error occurred while creating a filesystem resource.')
-			luci_log.debug_verbose('addFs4: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
+		model.getResourcesPtr().addChild(res)
+	except Exception, e:
+		luci_log.debug_verbose('addResource2: %s' % str(e))
+		return 'Unable to add the new resource'
 
-	# XXX: sanity check these fields
 	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this filesystem resource.'
-		res.addAttribute('name', name)
+		model.setModified(True)
+		conf = model.exportModelAsString()
+		if not conf:
+			raise Exception, 'model string for %s is blank' % clustername
 	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addFs5: %s' % err)
-
-	try:
-		mountpoint = form['mountpoint'].strip()
-		if not mountpoint:
-			raise Exception, 'No mount point was given for this filesystem resource.'
-		res.addAttribute('mountpoint', mountpoint)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addFs6: %s' % err)
+		luci_log.debug_verbose('addResource3: exportModelAsString: %s' \
+			% str(e))
+		return 'An error occurred while adding this resource'
 
 	try:
-		device = form['device'].strip()
-		if not device:
-			raise Exception, 'No device was given for this filesystem resource.'
-		res.addAttribute('device', device)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addFs7: %s' % err)
+		ragent = rc.hostname()
+		if not ragent:
+			luci_log.debug_verbose('addResource4: missing ricci hostname')
+			raise Exception, 'unknown ricci agent hostname'
 
-	try:
-		options = form['options'].strip()
-		if not options:
-			raise KeyError, 'no options'
-		res.addAttribute('options', options)
-	except KeyError, e:
-		try:
-			res.removeAttribute('options')
-		except:
-			pass
+		batch_number, result = rq.setClusterConf(rc, str(conf))
+		if batch_number is None or result is None:
+			luci_log.debug_verbose('addResource5: missing batch_number or result')
+			raise Exception, 'unable to save the new cluster configuration.'
 	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addFs8: %s' % err)
+		luci_log.debug_verbose('addResource6: %s' % str(e))
+		return 'An error occurred while propagating the new cluster.conf: %s' % str(e)
 
 	try:
-		fstype = form['fstype'].strip()
-		if not fstype:
-			raise Exception, 'No filesystem type was given for this filesystem resource.'
-		res.addAttribute('fstype', fstype)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addFs9: %s' % err)
+		try:
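+			# Distinguish editing an existing resource from creating a new
+			# one; raising here drops into the except branch, which labels
+			# the action RESOURCE_ADD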
+			if request.form.has_key('edit'):
+				action_type = RESOURCE_CONFIG
+				action_str = 'Configuring resource "%s"' % res.getName()
+			else:
+				raise Exception, 'new'
+		except Exception, e:
+			action_type = RESOURCE_ADD
+			action_str = 'Creating new resource "%s"' % res.getName()
 
-	try:
-		fsid = form['fsid'].strip()
-		if not fsid:
-			raise Exception, 'No filesystem ID was given for this filesystem resource.'
-		fsid_int = int(fsid)
-		if not fsid_is_unique(model, fsid_int):
-			raise Exception, 'The filesystem ID provided is not unique.'
+		set_node_flag(self, clustername, ragent, str(batch_number), action_type, action_str)
 	except Exception, e:
-		fsid = str(generate_fsid(model, name))
-	res.addAttribute('fsid', fsid)
+		luci_log.debug_verbose('addResource7: failed to set flags: %s' % str(e))
 
-	if form.has_key('forceunmount'):
-		res.addAttribute('force_unmount', '1')
-	else:
-		res.addAttribute('force_unmount', '0')
+	response = request.RESPONSE
+	response.redirect('%s?pagetype=%s&clustername=%s&busyfirst=true' \
+		% (request['URL'], RESOURCES, clustername))
 
-	if form.has_key('selffence'):
-		res.addAttribute('self_fence', '1')
-	else:
-		res.addAttribute('self_fence', '0')
+def getResource(model, name):
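+	# Return the resource named 'name'; raise KeyError if no match exists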
+	resPtr = model.getResourcesPtr()
+	resources = resPtr.getChildren()
 
-	if form.has_key('checkfs'):
-		res.addAttribute('force_fsck', '1')
-	else:
-		res.addAttribute('force_fsck', '0')
+	for res in resources:
+		if res.getName() == name:
+			return res
 
-	if len(errors) > 1:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addGfs(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addGfs0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addGfs1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank'
-
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e1:
-				errors.append('No filesystem resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this cluster filesystem resource.')
-			luci_log.debug_verbose('addGfs2: %s' % str(e))
-	else:
-		try:
-			res = Clusterfs()
-			if not res:
-				raise Exception, 'res is None'
-		except Exception, e:
-			errors.append('An error occurred while creating a cluster filesystem resource.')
-			luci_log.debug('addGfs3: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	# XXX: sanity check these fields
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this cluster filesystem resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addGfs4: %s' % err)
-
-	try:
-		mountpoint = form['mountpoint'].strip()
-		if not mountpoint:
-			raise Exception, 'No mount point was given for this cluster filesystem resource.'
-		res.addAttribute('mountpoint', mountpoint)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addGfs5: %s' % err)
-
-	try:
-		device = form['device'].strip()
-		if not device:
-			raise Exception, 'No device was given for this cluster filesystem resource.'
-		res.addAttribute('device', device)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addGfs6: %s' % err)
-
-	try:
-		options = form['options'].strip()
-		if not options:
-			raise KeyError, 'no options'
-		res.addAttribute('options', options)
-	except KeyError, e:
-		try:
-			res.removeAttribute('options')
-		except:
-			pass
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addGfs7: %s' % err)
-
-	try:
-		fsid = form['fsid'].strip()
-		if not fsid:
-			raise Exception, 'No filesystem ID was given for this cluster filesystem resource.'
-		fsid_int = int(fsid)
-		if not fsid_is_unique(model, fsid_int):
-			raise Exception, 'The filesystem ID provided is not unique.'
-	except Exception, e:
-		fsid = str(generate_fsid(model, name))
-	res.addAttribute('fsid', fsid)
-
-	if form.has_key('forceunmount'):
-		res.addAttribute('force_unmount', '1')
-	else:
-		res.addAttribute('force_unmount', '0')
-
-	if len(errors) > 1:
-		return [None, None, errors]
-
-	return [res, model, None]
-
-def addNfsm(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addNfsm0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addNfsm1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank'
-
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e1:
-				errors.append('No NFS mount resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this NFS mount resource.')
-			luci_log.debug_verbose('addNfsm2: %s' % str(e))
-	else:
-		try:
-			res = Netfs()
-			if not res:
-				raise Exception, 'res is None'
-		except Exception, e:
-			errors.append('An error occurred while creating a NFS mount resource.')
-			luci_log.debug_verbose('addNfsm3: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	# XXX: sanity check these fields
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this NFS mount resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsm4: %s' % err)
-
-	try:
-		mountpoint = form['mountpoint'].strip()
-		if not mountpoint:
-			raise Exception, 'No mount point was given for NFS mount resource.'
-		res.addAttribute('mountpoint', mountpoint)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsm5: %s' % err)
-
-	try:
-		host = form['host'].strip()
-		if not host:
-			raise Exception, 'No host server was given for this NFS mount resource.'
-		res.addAttribute('host', host)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsm6 error: %s' % err)
-
-	try:
-		options = form['options'].strip()
-		if not options:
-			raise KeyError, 'no options'
-		res.addAttribute('options', options)
-	except KeyError, e:
-		try:
-			res.removeAttribute('options')
-		except:
-			pass
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsm7: %s' % err)
-
-	try:
-		exportpath = form['exportpath'].strip()
-		if not exportpath:
-			raise Exception, 'No export path was given for this NFS mount resource.'
-		res.addAttribute('exportpath', exportpath)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsm8: %s' % err)
-
-	try:
-		nfstype = form['nfstype'].strip().lower()
-		if nfstype != 'nfs' and nfstype != 'nfs4':
-			raise Exception, 'An invalid NFS version \"%s\" was given.' % nfstype
-		res.addAttribute('nfstype', nfstype)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsm9: %s' % err)
-
-	if form.has_key('forceunmount'):
-		res.addAttribute('force_unmount', '1')
-	else:
-		res.addAttribute('force_unmount', '0')
-
-	if len(errors) > 1:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addNfsc(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addNfsc0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addNfsc1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e1:
-				errors.append('No NFS client resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this NFS client resource.')
-			luci_log.debug_verbose('addNfsc2: %s' % str(e))
-	else:
-		try:
-			res = NFSClient()
-			if not res:
-				raise Exception, 'res is None'
-		except Exception, e:
-			errors.append('An error occurred while creating a NFS client resource.')
-			luci_log.debug_verbose('addNfsc3: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	# XXX: sanity check these fields
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this NFS client resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsc4: %s' % err)
-
-	try:
-		target = form['target'].strip()
-		if not target:
-			raise Exception, 'No target was given for NFS client resource.'
-		res.addAttribute('target', target)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsc5: %s' % err)
-
-	try:
-		options = form['options'].strip()
-		if not options:
-			raise KeyError, 'no options'
-		res.addAttribute('options', options)
-	except KeyError, e:
-		try:
-			res.removeAttribute('options')
-		except:
-			pass
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsc6: %s' % err)
-
-	if form.has_key('allow_recover'):
-		res.addAttribute('allow_recover', '1')
-	else:
-		res.addAttribute('allow_recover', '0')
-
-	if len(errors) > 1:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addNfsx(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addNfsx0: model is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addNfsx0: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e1:
-				errors.append('No NFS export resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this NFS export resource.')
-			luci_log.debug_verbose('addNfsx2: %s', str(e))
-	else:
-		try:
-			res = NFSExport()
-			if not res:
-				raise Exception, 'res is None'
-		except Exception, e:
-			errors.append('An error occurred while creating a NFS clientresource.')
-			luci_log.debug_verbose('addNfsx3: %s', str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this NFS export resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addNfsx4: %s' % err)
-
-	if len(errors) > 1:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addScr(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addScr0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addScr1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e1:
-				errors.append('No script resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this script resource.')
-			luci_log.debug_verbose('addScr2: %s' % str(e))
-	else:
-		try:
-			res = Script()
-			if not res:
-				raise Exception, 'res is None'
-		except Exception, e:
-			errors.append('An error occurred while creating a script resource.')
-			luci_log.debug_verbose('addScr3: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this script resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addScr4: %s' % err)
-
-	try:
-		path = form['file'].strip()
-		if not path:
-			raise Exception, 'No path to a script file was given for this script resource.'
-		res.addAttribute('file', path)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addScr5: %s' % err)
-
-	if len(errors) > 1:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addSmb(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addSmb0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addSmb1: model is missing')
-		return None
-
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e1:
-				errors.append('No Samba resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this Samba resource.')
-			luci_log.debug_verbose('addSmb2: %s' % str(e))
-	else:
-		try:
-			res = Samba()
-			if not res:
-				raise Exception, 'res is None'
-		except Exception, e:
-			errors.append('An error occurred while creating a Samba resource.')
-			luci_log.debug_verbose('addSmb3: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this Samba resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addSmb4: %s' % err)
-
-	try:
-		workgroup = form['workgroup'].strip()
-		if not workgroup:
-			raise Exception, 'No workgroup was given for this Samba resource.'
-		res.addAttribute('workgroup', workgroup)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addSmb5: %s' % err)
-
-	if len(errors) > 1:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addApache(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addApache0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addApache1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank.'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e:
-				errors.append('No Apache resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this Apache resource.')
-			luci_log.debug_verbose('addApache2: %s' % str(e))
-	else:
-		try:
-			res = Apache()
-			if not res:
-				raise Exception, 'could not create Apache object'
-		except Exception, e:
-			errors.append('An error occurred while creating an Apache resource.')
-			luci_log.debug_verbose('addApache3: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this Apache resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addApache4: %s' % err)
-
-	try:
-		server_root = form['server_root'].strip()
-		if not server_root:
-			raise KeyError, 'No server root was given for this Apache resource.'
-		res.addAttribute('server_root', server_root)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addApache5: %s' % err)
-
-	try:
-		config_file = form['config_file'].strip()
-		if not server_root:
-			raise KeyError, 'No path to the Apache configuration file was given.'
-		res.addAttribute('config_file', config_file)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addApache6: %s' % err)
-
-	try:
-		options = form['httpd_options'].strip()
-		if not options:
-			raise KeyError, 'no options'
-		res.addAttribute('httpd_options', options)
-	except KeyError, e:
-		try:
-			res.removeAttribute('httpd_options')
-		except:
-			pass
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addApache7: %s' % err)
-
-	try:
-		shutdown_wait = int(form['shutdown_wait'].strip())
-		res.addAttribute('shutdown_wait', str(shutdown_wait))
-	except KeyError, e:
-		res.addAttribute('shutdown_wait', '0')
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addApache7: %s' % err)
-
-	if len(errors) > 1:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addMySQL(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addMySQL0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addMySQL1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank.'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e:
-				errors.append('No MySQL resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this MySQL resource.')
-			luci_log.debug_verbose('addMySQL2: %s' % str(e))
-	else:
-		try:
-			res = MySQL()
-			if not res:
-				raise Exception, 'could not create MySQL object'
-		except Exception, e:
-			errors.append('An error occurred while creating a MySQL resource.')
-			luci_log.debug_verbose('addMySQL3: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this MySQL resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addMySQL4: %s' % err)
-
-	try:
-		config_file = form['config_file'].strip()
-		if not config_file:
-			raise KeyError, 'No path to the MySQL configuration file was given.'
-		res.addAttribute('config_file', config_file)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addMySQL5: %s' % err)
-
-	try:
-		listen_addr = form['listen_address'].strip()
-		if not listen_addr:
-			raise KeyError, 'No address was given for MySQL server to listen on.'
-		res.addAttribute('listen_address', listen_addr)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addMySQL6: %s' % err)
-
-	try:
-		options = form['mysql_options'].strip()
-		if not options:
-			raise KeyError, 'no options'
-		res.addAttribute('mysql_options', options)
-	except KeyError, e:
-		try:
-			res.removeAttribute('mysql_options')
-		except:
-			pass
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addMySQL7: %s' % err)
-
-	try:
-		shutdown_wait = int(form['shutdown_wait'].strip())
-		res.addAttribute('shutdown_wait', str(shutdown_wait))
-	except KeyError, e:
-		res.addAttribute('shutdown_wait', '0')
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addMySQL7: %s' % err)
-
-	if len(errors) > 1:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addOpenLDAP(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addOpenLDAP0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addOpenLDAP1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank.'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e:
-				errors.append('No OpenLDAP resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this OpenLDAP resource.')
-			luci_log.debug_verbose('addOpenLDAP2: %s' % str(e))
-	else:
-		try:
-			res = OpenLDAP()
-			if not res:
-				raise Exception, 'could not create OpenLDAP object'
-		except Exception, e:
-			errors.append('An error occurred while creating an OpenLDAP resource.')
-			luci_log.debug_verbose('addOpenLDAP3: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this OpenLDAP resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addOpenLDAP4: %s' % err)
-
-	try:
-		url_list = form['url_list'].strip()
-		if not url_list:
-			raise KeyError, 'No URL list was given for this OpenLDAP resource.'
-		res.addAttribute('url_list', url_list)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addOpenLDAP5: %s' % err)
-
-	try:
-		config_file = form['config_file'].strip()
-		if not config_file:
-			raise KeyError, 'No path to the OpenLDAP configuration file was given.'
-		res.addAttribute('config_file', config_file)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addOpenLDAP6: %s' % err)
-
-	try:
-		options = form['slapd_options'].strip()
-		if not options:
-			raise KeyError, 'no options'
-		res.addAttribute('slapd_options', options)
-	except KeyError, e:
-		try:
-			res.removeAttribute('slapd_options')
-		except:
-			pass
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addOpenLDAP7: %s' % err)
-
-	try:
-		shutdown_wait = int(form['shutdown_wait'].strip())
-		res.addAttribute('shutdown_wait', str(shutdown_wait))
-	except KeyError, e:
-		res.addAttribute('shutdown_wait', '0')
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addOpenLDAP7: %s' % err)
-
-	if len(errors) > 1:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addPostgres8(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addPostgreSQL80: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addPostgreSQL81: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank.'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e:
-				errors.append('No PostgreSQL 8 resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this PostgreSQL 8 resource.')
-			luci_log.debug_verbose('addPostgreSQL82: %s' % str(e))
-	else:
-		try:
-			res = Postgres8()
-			if not res:
-				raise Exception, 'could not create PostgreSQL 8 object'
-		except Exception, e:
-			errors.append('An error occurred while creating a PostgreSQL 8 resource.')
-			luci_log.debug_verbose('addPostgreSQL83: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this PostgreSQL 8 resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addPostgreSQL84: %s' % err)
-
-	try:
-		user = form['postmaster_user'].strip()
-		if not user:
-			raise KeyError, 'No postmaster user was given for this PostgreSQL 8 resource.'
-		res.addAttribute('postmaster_user', user)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addPostgreSQL85: %s' % err)
-
-	try:
-		config_file = form['config_file'].strip()
-		if not config_file:
-			raise KeyError, 'No path to the PostgreSQL 8 configuration file was given.'
-		res.addAttribute('config_file', config_file)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addPostgreSQL86: %s' % err)
-
-	try:
-		options = form['postmaster_options'].strip()
-		if not options:
-			raise KeyError, 'no options'
-		res.addAttribute('postmaster_options', options)
-	except KeyError, e:
-		try:
-			res.removeAttribute('postmaster_options')
-		except:
-			pass
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addPostgreSQL87: %s' % err)
-
-	try:
-		shutdown_wait = int(form['shutdown_wait'].strip())
-		res.addAttribute('shutdown_wait', str(shutdown_wait))
-	except KeyError, e:
-		res.addAttribute('shutdown_wait', '0')
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addPostgreSQL87: %s' % err)
-
-	if len(errors) > 1:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addTomcat5(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addTomcat50: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addTomcat51: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank.'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e:
-				errors.append('No Tomcat 5 resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this Tomcat 5 resource.')
-			luci_log.debug_verbose('addTomcat52: %s' % str(e))
-	else:
-		try:
-			res = Tomcat5()
-			if not res:
-				raise Exception, 'could not create Tomcat5 object'
-		except Exception, e:
-			errors.append('An error occurred while creating a Tomcat 5 resource.')
-			luci_log.debug_verbose('addTomcat53: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this Tomcat 5 resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addTomcat54: %s' % err)
-
-	try:
-		user = form['tomcat_user'].strip()
-		if not user:
-			raise KeyError, 'No user was given for this Tomcat 5 resource.'
-		res.addAttribute('tomcat_user', user)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addTomcat55: %s' % err)
-
-	try:
-		config_file = form['config_file'].strip()
-		if not config_file:
-			raise KeyError, 'No path to the Tomcat 5 configuration file was given.'
-		res.addAttribute('config_file', config_file)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addTomcat56: %s' % err)
-
-	try:
-		options = form['catalina_options'].strip()
-		if not options:
-			raise KeyError, 'no options'
-		res.addAttribute('catalina_options', options)
-	except KeyError, e:
-		try:
-			res.removeAttribute('catalina_options')
-		except:
-			pass
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addTomcat57: %s' % err)
-
-	try:
-		catalina_base = form['catalina_base'].strip()
-		if not catalina_base:
-			raise KeyError, 'No cataliny base directory was given for this Tomcat 5 resource.'
-		res.addAttribute('catalina_base', catalina_base)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addTomcat58: %s' % err)
-
-	try:
-		shutdown_wait = int(form['shutdown_wait'].strip())
-		res.addAttribute('shutdown_wait', str(shutdown_wait))
-	except KeyError, e:
-		res.addAttribute('shutdown_wait', '0')
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addTomcat59: %s' % err)
-
-	if len(errors) > 1:
-		return [None, None, errors]
-	return [res, model, None]
-
-def addLVM(request, form=None):
-	errors = list()
-
-	if form is None:
-		form = request.form
-
-	if not form:
-		luci_log.debug_verbose('addLVM0: form is missing')
-		return None
-
-	model = request.SESSION.get('model')
-	if not model:
-		luci_log.debug_verbose('addLVM1: model is missing')
-		return None
-
-	res = None
-	if form.has_key('edit'):
-		try:
-			oldname = form['oldname'].strip()
-			if not oldname:
-				raise Exception, 'oldname is blank.'
-			try:
-				res = getResourceForEdit(model, oldname)
-			except KeyError, e:
-				errors.append('No LVM resource named \"%s\" exists.' % oldname)
-		except Exception, e:
-			errors.append('No original name was found for this LVM resource.')
-			luci_log.debug_verbose('addLVM2: %s' % str(e))
-	else:
-		try:
-			res = LVM()
-			if not res:
-				raise Exception, 'could not create LVM object'
-		except Exception, e:
-			errors.append('An error occurred while creating a LVM resource.')
-			luci_log.debug_verbose('addLVM3: %s' % str(e))
-
-	if not res:
-		return [None, None, errors]
-
-	try:
-		name = form['resourceName'].strip()
-		if not name:
-			raise Exception, 'No name was given for this LVM resource.'
-		res.addAttribute('name', name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addLVM4: %s' % err)
-
-	try:
-		vg_name = form['vg_name'].strip()
-		if not vg_name:
-			raise KeyError, 'No volume group name was given.'
-		res.addAttribute('vg_name', vg_name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addLVM5: %s' % err)
-
-	try:
-		lv_name = form['lv_name'].strip()
-		if not lv_name:
-			raise KeyError, 'No logical volume name was given.'
-		res.addAttribute('lv_name', lv_name)
-	except Exception, e:
-		err = str(e)
-		errors.append(err)
-		luci_log.debug_verbose('addLVM6: %s' % err)
-
-	if len(errors) > 1:
-		return [None, None, errors]
-	return [res, model, None]
-
-resourceAddHandler = {
-	'ip': addIp,
-	'fs': addFs,
-	'gfs': addGfs,
-	'nfsm': addNfsm,
-	'nfsx': addNfsx,
-	'nfsc': addNfsc,
-	'scr': addScr,
-	'smb': addSmb,
-	'tomcat-5': addTomcat5,
-	'postgres-8': addPostgres8,
-	'apache': addApache,
-	'openldap': addOpenLDAP,
-	'lvm': addLVM,
-	'mysql': addMySQL
-}
-
-def resolveClusterChanges(self, clusterName, model):
-	try:
-		mb_nodes = model.getNodes()
-		if not mb_nodes or not len(mb_nodes):
-			raise Exception, 'node list is empty'
-	except Exception, e:
-		luci_log.debug_verbose('RCC0: no model builder nodes found for %s: %s' \
-				% (str(e), clusterName))
-		return 'Unable to find cluster nodes for %s' % clusterName
-
-	try:
-		cluster_node = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/' + clusterName)
-		if not cluster_node:
-			raise Exception, 'cluster node is none'
-	except Exception, e:
-		luci_log.debug('RCC1: cant find cluster node for %s: %s'
-			% (clusterName, str(e)))
-		return 'Unable to find an entry for %s in the Luci database.' % clusterName
-
-	try:
-		db_nodes = map(lambda x: x[0], cluster_node.objectItems('Folder'))
-		if not db_nodes or not len(db_nodes):
-			raise Exception, 'no database nodes'
-	except Exception, e:
-		# Should we just create them all? Can this even happen?
-		luci_log.debug('RCC2: error: %s' % str(e))
-		return 'Unable to find database entries for any nodes in %s' % clusterName
-
-	same_host = lambda x, y: x == y or x[:len(y) + 1] == y + '.' or y[:len(x) + 1] == x + '.'
-
-	# this is a really great algorithm.
-	missing_list = list()
-	new_list = list()
-	for i in mb_nodes:
-		for j in db_nodes:
-			f = 0
-			if same_host(i, j):
-				f = 1
-				break
-		if not f:
-			new_list.append(i)
-
-	for i in db_nodes:
-		for j in mb_nodes:
-			f = 0
-			if same_host(i, j):
-				f = 1
-				break
-		if not f:
-			missing_list.append(i)
-
-	messages = list()
-	for i in missing_list:
-		try:
-			## or alternately
-			##new_node = cluster_node.restrictedTraverse(i)
-			##setNodeFlag(self, new_node, CLUSTER_NODE_NOT_MEMBER)
-			cluster_node.delObjects([i])
-			messages.append('Node \"%s\" is no longer in a member of cluster \"%s\." It has been deleted from the management interface for this cluster.' % (i, clusterName))
-			luci_log.debug_verbose('VCC3: deleted node %s' % i)
-		except Exception, e:
-			luci_log.debug_verbose('VCC4: delObjects: %s: %s' % (i, str(e)))
-
-	new_flags = CLUSTER_NODE_NEED_AUTH | CLUSTER_NODE_ADDED
-	for i in new_list:
-		try:
-			cluster_node.manage_addFolder(i, '__luci__:csystem:' + clusterName)
-			new_node = cluster_node.restrictedTraverse(i)
-			setNodeFlag(self, new_node, new_flags)
-			messages.append('A new cluster node, \"%s,\" is now a member of cluster \"%s.\" It has been added to the management interface for this cluster, but you must authenticate to it in order for it to be fully functional.' % (i, clusterName))
-		except Exception, e:
-			messages.append('A new cluster node, \"%s,\" is now a member of cluster \"%s,\". but it has not been added to the management interface for this cluster as a result of an error creating a database entry for it.' % (i, clusterName))
-			luci_log.debug_verbose('VCC5: addFolder: %s/%s: %s' \
-				% (clusterName, i, str(e)))
-
-	return messages
-
-def addResource(self, request, model, res, res_type):
-	clustername = model.getClusterName()
-	if not clustername:
-		luci_log.debug_verbose('addResource0: no cluname from mb')
-		return 'Unable to determine cluster name'
-
-	rc = getRicciAgent(self, clustername)
-	if not rc:
-		luci_log.debug_verbose('addResource1: unable to find a ricci agent for cluster %s' % clustername)
-		return 'Unable to find a ricci agent for the %s cluster' % clustername
-
-	try:
-		model.getResourcesPtr().addChild(res)
-	except Exception, e:
-		luci_log.debug_verbose('addResource2: adding the new resource failed: %s' % str(e))
-		return 'Unable to add the new resource'
-
-	try:
-		model.setModified(True)
-		conf = model.exportModelAsString()
-		if not conf:
-			raise Exception, 'model string for %s is blank' % clustername
-	except Exception, e:
-		luci_log.debug_verbose('addResource3: exportModelAsString : %s' \
-			% str(e))
-		return 'An error occurred while adding this resource'
-
-	try:
-		ragent = rc.hostname()
-		if not ragent:
-			luci_log.debug_verbose('addResource4: missing ricci hostname')
-			raise Exception, 'unknown ricci agent hostname'
-
-		batch_number, result = setClusterConf(rc, str(conf))
-		if batch_number is None or result is None:
-			luci_log.debug_verbose('addResource5: missing batch_number or result')
-			raise Exception, 'unable to save the new cluster configuration.'
-	except Exception, e:
-		luci_log.debug_verbose('addResource6: %s' % str(e))
-		return 'An error occurred while propagating the new cluster.conf: %s' % str(e)
-
-	if res_type != 'ip':
-		res_name = res.attr_hash['name']
-	else:
-		res_name = res.attr_hash['address']
-
-	try:
-		try:
-			if request.form.has_key('edit'):
-				action_type = RESOURCE_CONFIG
-				action_str = 'Configuring resource \"%s\"' % res_name
-			else:
-				raise Exception, 'new'
-		except Exception, e:
-			action_type = RESOURCE_ADD
-			action_str = 'Creating new resource \"%s\"' % res_name
-
-		set_node_flag(self, clustername, ragent, str(batch_number), action_type, action_str)
-	except Exception, e:
-		luci_log.debug_verbose('addResource7: failed to set flags: %s' % str(e))
-
-	response = request.RESPONSE
-	response.redirect(request['URL'] + "?pagetype=" + RESOURCES + "&clustername=" + clustername + '&busyfirst=true')
-
-def getResource(model, name):
-	resPtr = model.getResourcesPtr()
-	resources = resPtr.getChildren()
-
-	for res in resources:
-		if res.getName() == name:
-			return res
-
-	luci_log.debug_verbose('getResource: unable to find resource \"%s\"' % name)
-	raise KeyError, name
-
-def getResourceForEdit(model, name):
-	resPtr = model.getResourcesPtr()
-	resources = resPtr.getChildren()
-
-	for res in resources:
-		if res.getName() == name:
-			resPtr.removeChild(res)
-			return res
-
-	luci_log.debug_verbose('GRFE0: unable to find resource \"%s\"' % name)
-	raise KeyError, name
+	luci_log.debug_verbose('getResource: unable to find resource "%s"' % name)
+	raise KeyError, name
 
 def appendModel(request, model):
 	try:
@@ -7443,76 +6104,9 @@
 		luci_log.debug_verbose('Appending model to request failed')
 		return 'An error occurred while storing the cluster model.'
 
-def resolve_nodename(self, clustername, nodename):
-	path = str(CLUSTER_FOLDER_PATH + clustername)
-
-	try:
-		clusterfolder = self.restrictedTraverse(path)
-		objs = clusterfolder.objectItems('Folder')
-	except Exception, e:
-		luci_log.debug_verbose('RNN0: error for %s/%s: %s' \
-			% (nodename, clustername, str(e)))
-		return nodename
-
-	for obj in objs:
-		try:
-			if obj[0].find(nodename) != (-1):
-				return obj[0]
-		except:
-			continue
-
-	luci_log.debug_verbose('RNN1: failed for %s/%s: nothing found' \
-		% (nodename, clustername))
-	return nodename
-
-def noNodeFlagsPresent(self, nodefolder, flagname, hostname):
-	try:
-		items = nodefolder.objectItems('ManagedSystem')
-	except:
-		luci_log.debug('NNFP0: error getting flags for %s' % nodefolder[0])
-		return None
-
-	for item in items:
-		if item[0] != flagname:
-			continue
-
-		#a flag already exists... try to delete it
-		try:
-			# hostname must be a FQDN
-			rc = RicciCommunicator(hostname)
-		except Exception, e:
-			luci_log.info('NNFP1: ricci error %s: %s' % (hostname, str(e)))
-			return None
-
-		if not rc.authed():
-			try:
-				snode = getStorageNode(self, hostname)
-				setNodeFlag(snode, CLUSTER_NODE_NEED_AUTH)
-			except:
-				pass
-			luci_log.info('NNFP2: %s not authenticated' % item[0])
-
-		batch_ret = checkBatch(rc, item[1].getProperty(BATCH_ID))
-		finished = batch_ret[0]
-		if finished == True or finished == -1:
-			if finished == -1:
-				luci_log.debug_verbose('NNFP2: batch error: %s' % batch_ret[1])
-			try:
-				nodefolder.manage_delObjects([item[0]])
-			except Exception, e:
-				luci_log.info('NNFP3: manage_delObjects for %s failed: %s' \
-					% (item[0], str(e)))
-				return None
-			return True
-		else:
-			#Not finished, so cannot remove flag
-			return False
-
-	return True
-
 def getModelBuilder(self, rc, isVirtualized):
 	try:
-		cluster_conf_node = getClusterConf(rc)
+		cluster_conf_node = rq.getClusterConf(rc)
 		if not cluster_conf_node:
 			raise Exception, 'getClusterConf returned None'
 	except Exception, e:
@@ -7525,7 +6119,7 @@
 			raise Exception, 'ModelBuilder returned None'
 	except Exception, e:
 		try:
-			luci_log.debug_verbose('GMB1: An error occurred while trying to get model for conf \"%s\": %s' % (cluster_conf_node.toxml(), str(e)))
+			luci_log.debug_verbose('GMB1: An error occurred while trying to get model for conf "%s": %s' % (cluster_conf_node.toxml(), str(e)))
 		except:
 			luci_log.debug_verbose('GMB1: ModelBuilder failed')
 
@@ -7551,87 +6145,57 @@
 
 	return model
 
-def set_node_flag(self, cluname, agent, batchid, task, desc):
-	path = str(CLUSTER_FOLDER_PATH + cluname)
-	batch_id = str(batchid)
-	objname = str(agent + '____flag')
-
-	objpath = ''
-	try:
-		clusterfolder = self.restrictedTraverse(path)
-		clusterfolder.manage_addProduct['ManagedSystem'].addManagedSystem(objname)
-		objpath = str(path + '/' + objname)
-		flag = self.restrictedTraverse(objpath)
-		flag.manage_addProperty(BATCH_ID, batch_id, 'string')
-		flag.manage_addProperty(TASKTYPE, task, 'string')
-		flag.manage_addProperty(FLAG_DESC, desc, 'string')
-	except Exception, e:
-		errmsg = 'SNF0: error creating flag (%s,%s,%s) at %s: %s' \
-					% (batch_id, task, desc, objpath, str(e))
-		luci_log.debug_verbose(errmsg)
-		raise Exception, errmsg
-
-
-
-
-
-
-
-
-
-
-
 def process_cluster_conf_editor(self, req):
 	clustername = req['clustername']
-	msg = '\n'
+	msg_list = [ '\n' ]
 	cc = ''
 	if 'new_cluster_conf' in req:
 		cc = req['new_cluster_conf']
-		msg += 'Checking if valid XML - '
+		msg_list.append('Checking if valid XML - ')
 		cc_xml = None
 		try:
 			cc_xml = minidom.parseString(cc)
 		except:
 			pass
 		if cc_xml == None:
-			msg += 'FAILED\n'
-			msg += 'Fix the error and try again:\n'
+			msg_list.append('FAILED\n')
+			msg_list.append('Fix the error and try again:\n')
 		else:
-			msg += 'PASSED\n'
+			msg_list.append('PASSED\n')
 
-			msg += 'Making sure no clustername change has accured - '
+			msg_list.append('Making sure no cluster name change has occurred - ')
 			new_name = cc_xml.firstChild.getAttribute('name')
 			if new_name != clustername:
-				msg += 'FAILED\n'
-				msg += 'Fix the error and try again:\n'
+				msg_list.append('FAILED\n')
+				msg_list.append('Fix the error and try again:\n')
 			else:
-				msg += 'PASSED\n'
+				msg_list.append('PASSED\n')
 
-				msg += 'Increasing cluster version number - '
+				msg_list.append('Incrementing the cluster version number - ')
 				version = cc_xml.firstChild.getAttribute('config_version')
 				version = int(version) + 1
 				cc_xml.firstChild.setAttribute('config_version', str(version))
-				msg += 'DONE\n'
+				msg_list.append('DONE\n')
 
-				msg += 'Propagating new cluster.conf'
+				msg_list.append('Propagating the new cluster.conf')
 				rc = getRicciAgent(self, clustername)
 				if not rc:
 					luci_log.debug_verbose('VFA: unable to find a ricci agent for the %s cluster' % clustername)
-					msg += '\nUnable to contact a ricci agent for cluster ' + clustername + '\n\n'
+					msg_list.append('\nUnable to contact a ricci agent for cluster "%s"\n\n' % clustername)
 				else:
-					batch_id, result = setClusterConf(rc, cc_xml.toxml())
+					batch_id, result = rq.setClusterConf(rc, cc_xml.toxml())
 					if batch_id is None or result is None:
 						luci_log.debug_verbose('VFA: setClusterConf: batchid or result is None')
-						msg += '\nUnable to propagate the new cluster configuration for ' + clustername + '\n\n'
+						msg_list.append('\nUnable to propagate the new cluster configuration for cluster "%s"\n\n' % clustername)
 					else:
-						msg += ' - DONE\n'
+						msg_list.append(' - DONE\n')
 						cc = cc_xml.toxml()
-						msg += '\n\nALL DONE\n\n'
+						msg_list.append('\n\nALL DONE\n\n')
 	else:
 		if getClusterInfo(self, None, req) == {}:
-			msg = 'invalid cluster'
+			msg_list.append('invalid cluster')
 		else:
 			model = req.SESSION.get('model')
 			cc = model.exportModelAsString()
-	return {'msg'              : msg,
-		'cluster_conf'     : cc}
+
+	return {'msg': ''.join(msg_list), 'cluster_conf': cc}
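
For reference, a minimal standalone sketch of the build-a-list-then-join pattern
process_cluster_conf_editor adopts above (the 'checks' input and the helper name
are invented for illustration):

	def build_status_report(checks):
		# checks: a hypothetical list of (description, passed) tuples
		msg_list = [ '\n' ]
		for desc, passed in checks:
			if passed:
				msg_list.append('%s - PASSED\n' % desc)
			else:
				msg_list.append('%s - FAILED\n' % desc)
		# join once at the end instead of growing a string with '+='
		return ''.join(msg_list)

	# build_status_report([ ('Checking if valid XML', True) ])
	# returns '\nChecking if valid XML - PASSED\n'
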
--- conga/luci/site/luci/Extensions/conga_constants.py	2007/03/15 16:50:33	1.39
+++ conga/luci/site/luci/Extensions/conga_constants.py	2007/05/03 20:16:38	1.39.2.1
@@ -1,158 +1,146 @@
-#PAGE_TYPEs
-CLUSTERLIST="3"
-CLUSTERS="4"
-CLUSTER="5"
-CLUSTER_ADD="6"
-CLUSTER_CONFIG="7"
-CLUSTER_PROCESS="8"
-NODE="9"
-NODES="10"
-NODE_LIST="11"
-NODE_GRID="12"
-NODE_CONFIG="14"
-NODE_ADD="15"
-NODE_PROCESS="16"
-NODE_LOGS="17"
-VM_ADD="18"
-VM_CONFIG="19"
-SERVICES="20"
-SERVICE_ADD="21"
-SERVICE_LIST="22"
-SERVICE_CONFIG="23"
-SERVICE="24"
-SERVICE_PROCESS="25"
-SERVICE_START="26"
-SERVICE_STOP="27"
-SERVICE_RESTART="28"
-VM_PROCESS="29"
-RESOURCES="30"
-RESOURCE_ADD="31"
-RESOURCE_LIST="32"
-RESOURCE_CONFIG="33"
-RESOURCE="34"
-RESOURCE_PROCESS="35"
-RESOURCE_REMOVE="36"
-FDOMS="40"
-FDOM_ADD="41"
-FDOM_LIST="42"
-FDOM_CONFIG="43"
-FDOM="44"
-FENCEDEVS="50"
-FENCEDEV_ADD="51"
-FENCEDEV_LIST="52"
-FENCEDEV_CONFIG="53"
-FENCEDEV="54"
-CLUSTER_DAEMON="55"
-SERVICE_DELETE = '56'
-FENCEDEV_DELETE = '57'
-FENCEDEV_NODE_CONFIG = '58'
-SERVICE_MIGRATE = '59'
-
-CONF_EDITOR = '80'
-SYS_SERVICE_MANAGE = '90'
-SYS_SERVICE_UPDATE = '91'
-
-#Cluster tasks
-CLUSTER_STOP = '1000'
-CLUSTER_START = '1001'
-CLUSTER_RESTART = '1002'
-CLUSTER_DELETE = '1003'
-
-#General tasks
-NODE_LEAVE_CLUSTER="100"
-NODE_JOIN_CLUSTER="101"
-NODE_REBOOT="102"
-NODE_FENCE="103"
-NODE_DELETE="104"
-
-BASECLUSTER="201"
-FENCEDAEMON="202"
-MULTICAST="203"
-QUORUMD="204"
+# Cluster area page types
+CLUSTERLIST				= '3'
+CLUSTERS				= '4'
+CLUSTER					= '5'
+CLUSTER_ADD				= '6'
+CLUSTER_CONFIG			= '7'
+CLUSTER_PROCESS			= '8'
+NODE					= '9'
+NODES					= '10'
+NODE_LIST				= '11'
+NODE_GRID				= '12'
+NODE_CONFIG				= '14'
+NODE_ADD				= '15'
+NODE_PROCESS			= '16'
+NODE_LOGS				= '17'
+VM_ADD					= '18'
+VM_CONFIG				= '19'
+SERVICES				= '20'
+SERVICE_ADD				= '21'
+SERVICE_LIST			= '22'
+SERVICE_CONFIG			= '23'
+SERVICE					= '24'
+SERVICE_PROCESS			= '25'
+SERVICE_START			= '26'
+SERVICE_STOP			= '27'
+SERVICE_RESTART			= '28'
+VM_PROCESS				= '29'
+RESOURCES				= '30'
+RESOURCE_ADD			= '31'
+RESOURCE_LIST			= '32'
+RESOURCE_CONFIG			= '33'
+RESOURCE				= '34'
+RESOURCE_PROCESS		= '35'
+RESOURCE_REMOVE			= '36'
+FDOMS					= '40'
+FDOM_ADD				= '41'
+FDOM_LIST				= '42'
+FDOM_CONFIG				= '43'
+FDOM					= '44'
+FENCEDEVS				= '50'
+FENCEDEV_ADD			= '51'
+FENCEDEV_LIST			= '52'
+FENCEDEV_CONFIG			= '53'
+FENCEDEV				= '54'
+CLUSTER_DAEMON			= '55'
+SERVICE_DELETE			= '56'
+FENCEDEV_DELETE			= '57'
+FENCEDEV_NODE_CONFIG	= '58'
+SERVICE_MIGRATE			= '59'
+CONF_EDITOR				= '80'
+SYS_SERVICE_MANAGE		= '90'
+SYS_SERVICE_UPDATE		= '91'
+
+# Cluster tasks
+CLUSTER_STOP	= '1000'
+CLUSTER_START	= '1001'
+CLUSTER_RESTART	= '1002'
+CLUSTER_DELETE	= '1003'
+
+# Node tasks
+NODE_LEAVE_CLUSTER	= '100'
+NODE_JOIN_CLUSTER	= '101'
+NODE_REBOOT			= '102'
+NODE_FENCE			= '103'
+NODE_DELETE			= '104'
+
+# General tasks
+BASECLUSTER	= '201'
+FENCEDAEMON	= '202'
+MULTICAST	= '203'
+QUORUMD		= '204'
 
 PROPERTIES_TAB = 'tab'
 
-PROP_GENERAL_TAB = '1'
-PROP_FENCE_TAB = '2'
-PROP_MCAST_TAB = '3'
-PROP_QDISK_TAB = '4'
-PROP_GULM_TAB = '5'
-
-PAGETYPE="pagetype"
-ACTIONTYPE="actiontype"
-TASKTYPE="tasktype"
-CLUNAME="clustername"
-BATCH_ID="batch_id"
-FLAG_DESC="flag_desc"
-LAST_STATUS="last_status"
+PROP_GENERAL_TAB	= '1'
+PROP_FENCE_TAB		= '2'
+PROP_MCAST_TAB		= '3'
+PROP_QDISK_TAB		= '4'
+PROP_GULM_TAB		= '5'
+
+PAGETYPE	= 'pagetype'
+ACTIONTYPE	= 'actiontype'
+TASKTYPE	= 'tasktype'
+CLUNAME		= 'clustername'
+BATCH_ID	= 'batch_id'
+FLAG_DESC	= 'flag_desc'
+LAST_STATUS	= 'last_status'
 
-PATH_TO_PRIVKEY="/var/lib/luci/var/certs/privkey.pem"
-PATH_TO_CACERT="/var/lib/luci/var/certs/cacert.pem"
+PATH_TO_PRIVKEY	= '/var/lib/luci/var/certs/privkey.pem'
+PATH_TO_CACERT	= '/var/lib/luci/var/certs/cacert.pem'
 
 # Zope DB paths
+PLONE_ROOT = 'luci'
 CLUSTER_FOLDER_PATH = '/luci/systems/cluster/'
 STORAGE_FOLDER_PATH = '/luci/systems/storage/'
 
-#Node states
-NODE_ACTIVE="0"
-NODE_INACTIVE="1"
-NODE_UNKNOWN="2"
-NODE_ACTIVE_STR="Cluster Member"
-NODE_INACTIVE_STR="Not a Cluster Member"
-NODE_UNKNOWN_STR="Unknown State"
-
-FD_VAL_FAIL = 1
-FD_VAL_SUCCESS = 0
-
-#cluster/node create batch task index
-INSTALL_TASK = 1
-DISABLE_SVC_TASK = 2
-REBOOT_TASK = 3
-SEND_CONF = 4
-ENABLE_SVC_TASK = 5
-START_NODE = 6
-RICCI_CONNECT_FAILURE = (-1000)
+# Node states
+NODE_ACTIVE		= '0'
+NODE_INACTIVE	= '1'
+NODE_UNKNOWN	= '2'
+
+NODE_ACTIVE_STR		= 'Cluster Member'
+NODE_INACTIVE_STR	= 'Not a Cluster Member'
+NODE_UNKNOWN_STR	= 'Unknown State'
+
+# cluster/node create batch task index
+INSTALL_TASK			= 1
+DISABLE_SVC_TASK		= 2
+REBOOT_TASK				= 3
+SEND_CONF				= 4
+ENABLE_SVC_TASK			= 5
+START_NODE				= 6
+RICCI_CONNECT_FAILURE	= (-1000)
+
+RICCI_CONNECT_FAILURE_MSG = 'A problem was encountered connecting with this node.  '
 
-RICCI_CONNECT_FAILURE_MSG = "A problem was encountered connecting with this node.  "
-#cluster/node create error messages
+# cluster/node create error messages
 CLUNODE_CREATE_ERRORS = [
-	"An unknown error occurred when creating this node: ",
-	"A problem occurred when installing packages: ",
-	"A problem occurred when disabling cluster services on this node: ",
-	"A problem occurred when rebooting this node: ",
-	"A problem occurred when propagating the configuration to this node: ",
-	"A problem occurred when enabling cluster services on this node: ",
-	"A problem occurred when starting this node: "
+	'An unknown error occurred when creating this node: %s',
+	'A problem occurred when installing packages: %s',
+	'A problem occurred when disabling cluster services on this node: %s',
+	'A problem occurred when rebooting this node: %s',
+	'A problem occurred when propagating the configuration to this node: %s',
+	'A problem occurred when enabling cluster services on this node: %s',
+	'A problem occurred when starting this node: %s'
 ]
 
-#cluster/node create error status messages
-PRE_INSTALL = "The install state is not yet complete"
-PRE_REBOOT = "Installation complete, but reboot not yet complete"
-PRE_CFG = "Reboot stage successful, but configuration for the cluster is not yet distributed"
-PRE_JOIN = "Packages are installed and configuration has been distributed, but the node has not yet joined the cluster."
-
-
-POSSIBLE_REBOOT_MESSAGE = "This node is not currently responding and is probably rebooting as planned. This state should persist for 5 minutes or so..."
-
-REDIRECT_MSG = " You will be redirected in 5 seconds."
-
-
-# Homebase-specific constants
-HOMEBASE_ADD_USER = "1"
-HOMEBASE_ADD_SYSTEM = "2"
-HOMEBASE_PERMS = "3"
-HOMEBASE_DEL_USER = "4"
-HOMEBASE_DEL_SYSTEM = "5"
-HOMEBASE_ADD_CLUSTER = "6"
-HOMEBASE_ADD_CLUSTER_INITIAL = "7"
-HOMEBASE_AUTH = "8"
+# cluster/node create error status messages
+PRE_INSTALL = 'The install state is not yet complete.'
+PRE_REBOOT	= 'Installation complete, but reboot not yet complete.'
+PRE_CFG		= 'Reboot stage successful, but configuration for the cluster is not yet distributed.'
+PRE_JOIN	= 'Packages are installed and configuration has been distributed, but the node has not yet joined the cluster.'
 
-# Cluster node exception attribute flags
-CLUSTER_NODE_NEED_AUTH = 0x01
-CLUSTER_NODE_NOT_MEMBER = 0x02
-CLUSTER_NODE_ADDED = 0x04
+POSSIBLE_REBOOT_MESSAGE = 'This node is not currently responding and is probably rebooting as planned. This state should persist for 5 minutes or so...'
 
-PLONE_ROOT = 'luci'
+REDIRECT_MSG = ' -- You will be redirected in 5 seconds.'
 
-LUCI_DEBUG_MODE = 0
-LUCI_DEBUG_VERBOSITY = 0
+# Cluster node exception attribute flags
+CLUSTER_NODE_NEED_AUTH	= 0x01
+CLUSTER_NODE_NOT_MEMBER	= 0x02
+CLUSTER_NODE_ADDED		= 0x04
+
+# Debugging parameters. Set LUCI_DEBUG_MODE to 1 and LUCI_DEBUG_VERBOSITY
+# to >= 2 to get full debugging output in syslog (LOG_DAEMON/LOG_DEBUG).
+LUCI_DEBUG_MODE			= 0
+LUCI_DEBUG_VERBOSITY	= 0
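
A note on the reworked CLUNODE_CREATE_ERRORS above: each entry is now a format
string taking one '%s' argument instead of a prefix meant for concatenation. A
minimal sketch of the intended lookup, assuming the batch task indices above are
used as list indices (the helper name and 'detail' argument are illustrative,
not from the source):

	def clunode_create_error(task, detail):
		# INSTALL_TASK = 1, ..., START_NODE = 6; index 0 holds the
		# unknown-error message.
		try:
			return CLUNODE_CREATE_ERRORS[task] % detail
		except (IndexError, TypeError):
			return CLUNODE_CREATE_ERRORS[0] % detail

	# clunode_create_error(INSTALL_TASK, 'package install timed out')
	# returns 'A problem occurred when installing packages: package install timed out'
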
--- conga/luci/site/luci/Extensions/conga_storage_constants.py	2006/10/15 22:34:54	1.8
+++ conga/luci/site/luci/Extensions/conga_storage_constants.py	2007/05/03 20:16:38	1.8.8.1
@@ -1,65 +1,24 @@
-
-from ricci_defines import *
-
+from ricci_defines import MAPPER_ATARAID_TYPE, MAPPER_CRYPTO_TYPE, MAPPER_iSCSI_TYPE, MAPPER_MDRAID_TYPE, MAPPER_MULTIPATH_TYPE, MAPPER_PT_TYPE, MAPPER_SYS_TYPE, MAPPER_VG_TYPE
 
 ## request vars ##
 
-PAGETYPE="pagetype"
-CLUNAME="clustername"
-STONAME='storagename'
-
-
-
-## pagetypes ##
-
-# CLUSTER PAGE_TYPEs #
-CLUSTERS="4"
-CLUSTER="5"
-CLUSTER_ADD="6"
-CLUSTER_CONFIG="7"
-NODE="9"
-NODES="10"
-NODE_LIST="11"
-NODE_GRID="12"
-NODE_CONFIG="14"
-NODE_ADD="15"
-NODE_PROCESS="16"
-SERVICES="20"
-SERVICE_ADD="21"
-SERVICE_LIST="22"
-SERVICE_CONFIG="23"
-SERVICE="24"
-SERVICE_PROCESS="25"
-RESOURCES="30"
-RESOURCE_ADD="31"
-RESOURCE_LIST="32"
-RESOURCE_CONFIG="33"
-RESOURCE="34"
-RESOURCE_PROCESS="35"
-FDOMS="40"
-FDOM_ADD="41"
-FDOM_LIST="42"
-FDOM_CONFIG="43"
-FDOM="44"
-FENCEDEVS="50"
-FENCEDEV_ADD="51"
-FENCEDEV_LIST="52"
-FENCEDEV_CONFIG="53"
-FENCEDEV="54"
+PAGETYPE = "pagetype"
+CLUNAME = "clustername"
+STONAME = 'storagename'
 
 
 # storage pagetypes #
 
-PT_MAPPER_ID='mapper_id'
-PT_MAPPER_TYPE='mapper_type'
-PT_PATH='bd_path'
-
-STORAGESYS="0"
-STORAGE_CONFIG="43"
-STORAGE="44"
-CLUSTER_STORAGE="45"
+PT_MAPPER_ID = 'mapper_id'
+PT_MAPPER_TYPE = 'mapper_type'
+PT_PATH = 'bd_path'
+
+STORAGESYS = "0"
+STORAGE_CONFIG = "43"
+STORAGE = "44"
+CLUSTER_STORAGE = "45"
 
-STORAGE_COMMIT_CHANGES='commit_changes'
+STORAGE_COMMIT_CHANGES = 'commit_changes'
 
 
 VIEW_MAPPERS = '51'
@@ -84,6 +43,7 @@
                       MAPPER_MULTIPATH_TYPE   : ('Multipath',       'Multipath',      'Path'),
                       MAPPER_CRYPTO_TYPE      : ('Encryption',      'Volume',         'Device'),
                       MAPPER_iSCSI_TYPE       : ('iSCSI',           'Volume',         'BUG: source not defined')}
+
 def get_pretty_mapper_info(mapper_type):
     try:
         return PRETTY_MAPPER_INFO[mapper_type]
@@ -148,6 +108,7 @@
                      'uuid'                    : "UUID",
                      'vendor'                  : "Vendor",
                      'vgname'                  : "Volume Group Name"}
+
 def get_pretty_prop_name(name):
     try:
         return PRETTY_PROP_NAMES[name]
@@ -181,6 +142,7 @@
                    'ocfs2'    : "Oracle Clustered FS v.2",
                    'relayfs'  : "Relay FS",
                    'udf'      : "Universal Disk Format"}
+
 def get_pretty_fs_name(name):
     try:
         return PRETTY_FS_NAMES[name]
@@ -200,6 +162,7 @@
                 MAPPER_MULTIPATH_TYPE   : ('icon_mapper_multipath.png', 'icon_bd_multipath.png', ''),
                 MAPPER_CRYPTO_TYPE      : ('icon_mapper_crypto.png',    'icon_bd_crypto.png',    ''),
                 MAPPER_iSCSI_TYPE       : ('',                          'icon_bd_net.png',       '')}
+
 def get_mapper_icons(mapper_type):
     try:
         return MAPPER_ICONS[mapper_type]
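
The get_pretty_* helpers in this file share one table-lookup shape; a generic
sketch, assuming the (elided) except clause falls back to the raw key (the
helper name below is invented):

	def pretty_or_raw(table, key):
		try:
			return table[key]
		except KeyError:
			# unknown identifier: return it verbatim rather than failing
			return key

	# pretty_or_raw(PRETTY_FS_NAMES, 'udf') returns 'Universal Disk Format'
	# pretty_or_raw(PRETTY_FS_NAMES, 'no-such-fs') returns 'no-such-fs'
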
--- conga/luci/site/luci/Extensions/homebase_adapters.py	2007/02/12 20:24:28	1.50
+++ conga/luci/site/luci/Extensions/homebase_adapters.py	2007/05/03 20:16:38	1.50.2.1
@@ -1,34 +1,25 @@
-import re
-import os
-from AccessControl import getSecurityManager
-import cgi
-
 from conga_constants import PLONE_ROOT, CLUSTER_NODE_NEED_AUTH, \
-							HOMEBASE_ADD_CLUSTER, HOMEBASE_ADD_CLUSTER_INITIAL, \
-							HOMEBASE_ADD_SYSTEM, HOMEBASE_ADD_USER, \
-							HOMEBASE_DEL_SYSTEM, HOMEBASE_DEL_USER, HOMEBASE_PERMS, \
-							STORAGE_FOLDER_PATH, CLUSTER_FOLDER_PATH
-
-from ricci_bridge import getClusterConf
-from ricci_communicator import RicciCommunicator, CERTS_DIR_PATH
-from clusterOS import resolveOSType
+	STORAGE_FOLDER_PATH, CLUSTER_FOLDER_PATH
+
+from RicciQueries import getClusterConf
 from LuciSyslog import LuciSyslog
+from HelperFunctions import resolveOSType
+
+# Homebase area page types
+HOMEBASE_ADD_USER				= '1'
+HOMEBASE_ADD_SYSTEM				= '2'
+HOMEBASE_PERMS					= '3'
+HOMEBASE_DEL_USER				= '4'
+HOMEBASE_DEL_SYSTEM				= '5'
+HOMEBASE_ADD_CLUSTER			= '6'
+HOMEBASE_ADD_CLUSTER_INITIAL	= '7'
+HOMEBASE_AUTH					= '8'
 
 try:
 	luci_log = LuciSyslog()
 except:
 	pass
 
-def siteIsSetup(self):
-	try:
-		if os.path.isfile(CERTS_DIR_PATH + 'privkey.pem') and os.path.isfile(CERTS_DIR_PATH + 'cacert.pem'):
-			return True
-	except: pass
-	return False
-
-def strFilter(regex, replaceChar, arg):
-	return re.sub(regex, replaceChar, arg)
-
 def validateDelSystem(self, request):
 	errors = list()
 	messages = list()
@@ -42,7 +33,7 @@
 			if dsResult:
 				errors.append(dsResult)
 			else:
-				messages.append('Removed storage system \"%s\" successfully' % i)
+				messages.append('Removed storage system "%s" successfully' % i)
 
 	if '__CLUSTER' in request.form:
 		cluNames = request.form['__CLUSTER']
@@ -53,7 +44,7 @@
 			if dcResult:
 				errors.append(dcResult)
 			else:
-				messages.append('Removed cluster \"%s\" successfully' % i)
+				messages.append('Removed cluster "%s" successfully' % i)
 
 	if len(errors) > 0:
 		retCode = False
@@ -76,27 +67,27 @@
 		if not user:
 			raise Exception, 'user %s does not exist' % userId
 	except:
-		return (False, {'errors': [ 'No such user: \"' + userId + '\"' ] })
+		return (False, {'errors': [ 'No such user: "%s"' % userId ] })
 
 	for i in getClusters(self):
 		try:
 			i[1].manage_delLocalRoles([userId])
 		except:
-			errors.append('Error deleting roles from cluster \"' + i[0] + '\" for user \"' + userId + '\"')
+			errors.append('Error deleting roles from cluster "%s" for user "%s"' % (i[0], userId))
 
 	for i in getStorage(self):
 		try:
 			i[1].manage_delLocalRoles([userId])
 		except:
-			errors.append('Error deleting roles from storage system \"' + i[0] + '\" for user \"' + userId + '\"')
+			errors.append('Error deleting roles from storage system "%s" for user "%s"' % (i[0], userId))
 
 	try:
 		self.acl_users.userFolderDelUsers([userId])
 	except:
-		errors.append('Unable to delete user \"' + userId + '\"')
+		errors.append('Unable to delete user "%s"' % userId)
 		return (False, {'errors': errors })
 
-	messages.append('User \"' + userId + '\" has been deleted')
+	messages.append('User "%s" has been deleted' % userId)
 	return (True, {'errors': errors, 'messages': messages })
 
 def validateAddUser(self, request):
@@ -112,7 +103,7 @@
 	user = request.form['newUserName']
 
 	if self.portal_membership.getMemberById(user):
-		return (False, {'errors': ['The user \"' + user + '\" already exists']})
+		return (False, {'errors': ['The user "%s" already exists' % user ]})
 
 	passwd = request.form['newPassword']
 	pwconfirm = request.form['newPasswordConfirm']
@@ -121,14 +112,14 @@
 		return (False, {'errors': ['The passwords do not match']})
 
 	try:
-		self.portal_registration.addMember(user, passwd, properties = { 'username': user, 'password': passwd, 'confirm': passwd, 'roles': ['Member'], 'domains':[], 'email': user + '@example.com' })
+		self.portal_registration.addMember(user, passwd, properties = { 'username': user, 'password': passwd, 'confirm': passwd, 'roles': ['Member'], 'domains':[], 'email': '%s@example.com' % user })
 	except:
-		return (False, {'errors': [ 'Unable to add new user \"' + user + '\"' ] })
+		return (False, {'errors': [ 'Unable to add new user "%s"' % user ] })
 
 	if not self.portal_membership.getMemberById(user):
-		return (False, {'errors': [ 'Unable to add new user \"' + user + '\"'] })
+		return (False, {'errors': [ 'Unable to add new user "%s"' % user ] })
 
-	messages.append('Added new user \"' + user + '\" successfully')
+	messages.append('Added new user "%s" successfully' % user)
 	return (True, {'messages': messages, 'params': { 'user': user }})
 
 def validateAddClusterInitial(self, request):
@@ -206,13 +197,13 @@
 	if not check_certs or cur_host_trusted:
 		try:
 			if cur_host_fp is not None and cur_host_fp != cur_fp[1]:
-				errmsg = 'The key fingerprint for %s has changed from under us. It was \"%s\" and is now \"%s\".' \
+				errmsg = 'The key fingerprint for %s has changed from under us. It was "%s" and is now "%s".' \
 					% (cur_host, cur_host_fp, cur_fp[1])
 				request.SESSION.set('add_cluster_initial', cur_entry)
 				luci_log.info('SECURITY: %s' % errmsg)
 				return (False, { 'errors': [ errmsg ] })
 			if trust_shown is True and cur_host_trusted is False:
-				errmsg = 'You must elect to trust \"%s\" or abort the addition of the cluster to Luci.' % cur_host
+				errmsg = 'You must elect to trust "%s" or abort the addition of the cluster to Luci.' % cur_host
 				request.SESSION.set('add_cluster_initial', cur_entry)
 				return (False, { 'errors': [ errmsg ] })
 			rc.trust()
@@ -259,7 +250,7 @@
 			errmsg = 'Unable to authenticate to the ricci agent on %s: %s' % (cur_host, str(e))
 			luci_log.debug_verbose('vACI5: %s: %s' % (cur_host, str(e)))
 			request.SESSION.set('add_cluster_initial', cur_entry)
-			return (False, { 'errors': [ 'Unable to authenticate to the ricci agent on \"%s\"' % cur_host ] })
+			return (False, { 'errors': [ 'Unable to authenticate to the ricci agent on "%s"' % cur_host ] })
 
 	del cur_entry
 
@@ -276,9 +267,9 @@
 				pass
 
 		if not cluster_info:
-			errmsg = 'An error occurred while attempting to retrieve the cluster.conf file from \"%s\"' % cur_host
+			errmsg = 'An error occurred while attempting to retrieve the cluster.conf file from "%s"' % cur_host
 		else:
-			errmsg = '\"%s\" reports is not a member of any cluster.' % cur_host
+			errmsg = '"%s" reports is not a member of any cluster.' % cur_host
 		return (False, { 'errors': [ errmsg ] })
 
 	cluster_name = cluster_info[0]
@@ -301,10 +292,10 @@
 	# Make sure a cluster with this name is not already managed before
 	# going any further.
 	try:
-		dummy = self.restrictedTraverse(CLUSTER_FOLDER_PATH + cluster_name)
+		dummy = self.restrictedTraverse('%s%s' % (CLUSTER_FOLDER_PATH, cluster_name))
 		if not dummy:
 			raise Exception, 'no existing cluster'
-		errors.append('A cluster named \"%s\" is already managed.')
+		errors.append('A cluster named "%s" is already managed.' % cluster_name)
 		if not prev_auth:
 			try:
 				rc.unauth()
@@ -320,7 +311,7 @@
 				rc.unauth()
 			except:
 				pass
-		return (False, { 'errors': [ 'Error retrieving the nodes list for cluster \"%s\" from node \"%s\"' % (cluster_name, cur_host) ] })
+		return (False, { 'errors': [ 'Error retrieving the nodes list for cluster "%s" from node "%s"' % (cluster_name, cur_host) ] })
 
 	same_node_passwds = False
 	try:
@@ -369,7 +360,7 @@
 				raise Exception, 'no hostname'
 			cur_host = sysData[0]
 			if cur_host in system_list:
-				errors.append('You have added \"%s\" more than once.' % cur_host)
+				errors.append('You have added "%s" more than once.' % cur_host)
 				raise Exception, '%s added more than once' % cur_host
 		except:
 			i += 1
@@ -408,7 +399,7 @@
 				if cur_set_trust is True and cur_fp is not None:
 					cur_system['fp'] = cur_fp
 					if cur_fp != fp[1]:
-						errmsg = '1The key fingerprint for %s has changed from under us. It was \"%s\" and is now \"%s\".' % (cur_host, cur_fp, fp[1])
+						errmsg = 'The key fingerprint for %s has changed from under us. It was "%s" and is now "%s".' % (cur_host, cur_fp, fp[1])
 						errors.append(errmsg)
 						luci_log.info('SECURITY: %s' % errmsg)
 						cur_system['error'] = True
@@ -446,7 +437,7 @@
 				if not rc.trusted() and (trust_shown is True and cur_set_trust is False):
 					incomplete = True
 					cur_system['error'] = True
-					errors.append('You must either trust \"%s\" or remove it.' % cur_host)
+					errors.append('You must either trust "%s" or remove it.' % cur_host)
 				else:
 					# The user doesn't care. Trust the system.
 					rc.trust()
@@ -563,7 +554,7 @@
 					cur_cluster_name = cluster_info[1]
 
 				if cur_cluster_name:
-					err_msg = 'Node %s reports it is in cluster \"%s\" and we expect \"%s\"' \
+					err_msg = 'Node %s reports it is in cluster "%s" and we expect "%s"' \
 						% (cur_host, cur_cluster_name, cluster_name)
 				else:
 					err_msg = 'Node %s reports it is not a member of any cluster' % cur_host
@@ -580,7 +571,7 @@
 
 			cur_os = resolveOSType(rc.os())
 			if cur_os != cluster_os:
-				luci_log.debug_verbose('VAC5a: \"%s\" / \"%s\" -> \"%s\"' \
+				luci_log.debug_verbose('VAC5a: "%s" / "%s" -> "%s"' \
 					% (cluster_os, rc.os(), cur_os))
 				incomplete = True
 				cur_system['errors'] = True
@@ -657,7 +648,7 @@
 				errors.append(csResult)
 			else:
 				delete_keys.append(i)
-				messages.append('Added storage system \"%s\" successfully' \
+				messages.append('Added storage system "%s" successfully' \
 					% cur_host)
 
 	for i in delete_keys:
@@ -687,109 +678,118 @@
 	return (return_code, { 'errors': errors, 'messages': messages})
 
 def validatePerms(self, request):
-	userId = None
 	messages = list()
 	errors = list()
 
-	try:
-		userId = request.form['userList']
-	except:
-		return (False, {'errors': [ 'No user specified' ], 'params': { 'user': userId }})
+	username = None
+	if not request.form.has_key('userList'):
+		luci_log.debug_verbose('VP0: no user given')
+		errors.append('No user name was given.')
+	else:
+		username = request.form['userList'].strip()
 
-	user = self.portal_membership.getMemberById(userId)
-	if not user:
-		return (False, {'errors': [ 'Invalid user specified' ], 'params': { 'user': userId }})
+	user_id = None
+	if username is not None:
+		try:
+			user = self.portal_membership.getMemberById(username)
+			if not user:
+				raise Exception, 'no user'
+			user_id = user.getUserId()
+		except Exception, e:
+			luci_log.debug_verbose('VP1: no user "%s": %s' % (username, str(e)))
+			errors.append('An invalid user "%s" was given.' % username)
 
-	userId = user.getUserId()
+	if len(errors) > 0:
+		return (False, { 'errors': errors })
 
-	clusters = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/objectItems')('Folder')
-	if not '__CLUSTER' in request.form:
+	clusters = self.restrictedTraverse('%s/systems/cluster/objectItems' % PLONE_ROOT)('Folder')
+	if not request.form.has_key('__CLUSTER'):
 		for i in clusters:
 			try:
 				if user.has_role('View', i[1]):
-					roles = list(i[1].get_local_roles_for_userid(userId))
+					roles = list(i[1].get_local_roles_for_userid(user_id))
 					roles.remove('View')
 
 					if roles:
-						i[1].manage_setLocalRoles(userId, roles)
+						i[1].manage_setLocalRoles(user_id, roles)
 					else:
-						i[1].manage_delLocalRoles([userId])
-					messages.append('Removed permission for ' + userId + ' for cluster ' + i[0])
+						i[1].manage_delLocalRoles([ user_id ])
+					messages.append('Removed permission for user "%s" for cluster "%s"' % (user_id, i[0]))
 			except:
-				errors.append('Failed to remove permission for ' + userId + ' for cluster ' + i[0])
+				errors.append('Failed to remove permission for user "%s" for cluster "%s"' % (user_id, i[0]))
 	else:
 		for i in clusters:
 			if i[0] in request.form['__CLUSTER']:
 				try:
 					if not user.has_role('View', i[1]):
-						roles = list(i[1].get_local_roles_for_userid(userId))
+						roles = list(i[1].get_local_roles_for_userid(user_id))
 						roles.append('View')
-						i[1].manage_setLocalRoles(userId, roles)
-						messages.append('Added permission for ' + userId + ' for cluster ' + i[0])
+						i[1].manage_setLocalRoles(user_id, roles)
+						messages.append('Added permission for user "%s" for cluster "%s"' % (user_id, i[0]))
 				except:
-					errors.append('Failed to add permission for ' + userId + ' for cluster ' + i[0])
+					errors.append('Failed to add permission for user "%s" for cluster "%s"' % (user_id, i[0]))
 			else:
 				try:
 					if user.has_role('View', i[1]):
-						roles = list(i[1].get_local_roles_for_userid(userId))
+						roles = list(i[1].get_local_roles_for_userid(user_id))
 						roles.remove('View')
 
 						if roles:
-							i[1].manage_setLocalRoles(userId, roles)
+							i[1].manage_setLocalRoles(user_id, roles)
 						else:
-							i[1].manage_delLocalRoles([userId])
+							i[1].manage_delLocalRoles([ user_id ])
 
-						messages.append('Removed permission for ' + userId + ' for cluster ' + i[0])
+						messages.append('Removed permission for user "%s" for cluster "%s"' % (user_id, i[0]))
 				except:
-					errors.append('Failed to remove permission for ' + userId + ' for cluster ' + i[0])
+					errors.append('Failed to remove permission for user "%s" for cluster "%s"' % (user_id, i[0]))
 
-	storage = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/objectItems')('Folder')
-	if not '__SYSTEM' in request.form:
+	storage = self.restrictedTraverse('%s/systems/storage/objectItems' % PLONE_ROOT)('Folder')
+	if not request.form.has_key('__SYSTEM'):
 		for i in storage:
 			try:
 				if user.has_role('View', i[1]):
-					roles = list(i[1].get_local_roles_for_userid(userId))
+					roles = list(i[1].get_local_roles_for_userid(user_id))
 					roles.remove('View')
 
 					if roles:
-						i[1].manage_setLocalRoles(userId, roles)
+						i[1].manage_setLocalRoles(user_id, roles)
 					else:
-						i[1].manage_delLocalRoles([userId])
-					messages.append('Removed permission for ' + userId + ' for ' + i[0])
+						i[1].manage_delLocalRoles([ user_id ])
+					messages.append('Removed permission for user "%s" for system "%s"' % (user_id, i[0]))
 			except:
-				errors.append('Failed to remove permission for ' + userId + ' for ' + i[0])
+				errors.append('Failed to remove permission for user "%s" for system "%s"' % (user_id, i[0]))
 	else:
 		for i in storage:
 			if i[0] in request.form['__SYSTEM']:
 				try:
 					if not user.has_role('View', i[1]):
-						roles = list(i[1].get_local_roles_for_userid(userId))
+						roles = list(i[1].get_local_roles_for_userid(user_id))
 						roles.append('View')
-						i[1].manage_setLocalRoles(userId, roles)
-						messages.append('Added permission for ' + userId + ' for system ' + i[0])
+						i[1].manage_setLocalRoles(user_id, roles)
+						messages.append('Added permission for user "%s" for system "%s"' % (user_id, i[0]))
 				except:
-					errors.append('Failed to add permission for ' + userId + ' for system ' + i[0])
+					errors.append('Failed to add permission for user "%s" for system "%s"' % (user_id, i[0]))
 			else:
 				try:
 					if user.has_role('View', i[1]):
-						roles = list(i[1].get_local_roles_for_userid(userId))
+						roles = list(i[1].get_local_roles_for_userid(user_id))
 						roles.remove('View')
 
 						if roles:
-							i[1].manage_setLocalRoles(userId, roles)
+							i[1].manage_setLocalRoles(user_id, roles)
 						else:
-							i[1].manage_delLocalRoles([userId])
+							i[1].manage_delLocalRoles([ user_id ])
 
-						messages.append('Removed permission for ' + userId + ' for system ' + i[0])
+						messages.append('Removed permission for user "%s" for system "%s"' % (user_id, i[0]))
 				except:
-					errors.append('Failed to remove permission for ' + userId + ' for system ' + i[0])
+					errors.append('Failed to remove permission for user "%s" for system "%s"' % (user_id, i[0]))
 
 	if len(errors) > 0:
-		returnCode = False
+		ret = False
 	else:
-		returnCode = True
+		ret = True
 
-	return (returnCode, {'errors': errors, 'messages': messages, 'params': {'user': userId }})
+	return (ret, {'errors': errors, 'messages': messages, 'params': {'user': user_id }})
 
 def validateAuthenticate(self, request):
 	try:
@@ -861,17 +861,11 @@
 		except:
 			pass
 
-	if len(errors) > 0:
-		return_code = False
-	else:
-		return_code = True
-
 	if incomplete:
 		try:
 			request.SESSION.set('auth_systems', system_list)
 		except Exception, e:
 			luci_log.debug_verbose('validateAuthenticate2: %s' % str(e))
-		return_code = False
 	else:
 		try:
 			request.SESSION.delete('auth_systems')
@@ -897,28 +891,6 @@
 	validateAuthenticate
 ]
 
-def userAuthenticated(self):
-	try:
-		if (isAdmin(self) or getSecurityManager().getUser().has_role('Authenticated', self.restrictedTraverse(PLONE_ROOT))):
-			return True
-	except Exception, e:
-		luci_log.debug_verbose('UA0: %s' % str(e)) 
-	return False
-
-def isAdmin(self):
-	try:
-		return getSecurityManager().getUser().has_role('Owner', self.restrictedTraverse(PLONE_ROOT))
-	except Exception, e:
-		luci_log.debug_verbose('IA0: %s' % str(e)) 
-	return False
-
-def userIsAdmin(self, userId):
-	try:
-		return self.portal_membership.getMemberById(userId).has_role('Owner', self.restrictedTraverse(PLONE_ROOT))
-	except Exception, e:
-		luci_log.debug_verbose('UIA0: %s: %s' % (userId, str(e)))
-	return False
-
 def homebaseControlPost(self, request):
 	if 'ACTUAL_URL' in request:
 		url = request['ACTUAL_URL']
@@ -932,7 +904,7 @@
 			request.SESSION.set('checkRet', {})
 		except:
 			pass
-		return homebasePortal(self, request, '.', '0')
+		return homebasePortal(self, '.', '0')
 
 	try:
 		validatorFn = formValidators[pagetype - 1]
@@ -941,7 +913,7 @@
 			request.SESSION.set('checkRet', {})
 		except:
 			pass
-		return homebasePortal(self, request, '.', '0')
+		return homebasePortal(self, '.', '0')
 
 	ret = validatorFn(self, request)
 	params = None
@@ -951,7 +923,7 @@
 			params = ret[1]['params']
 		request.SESSION.set('checkRet', ret[1])
 
-	return homebasePortal(self, request, url, pagetype, params)
+	return homebasePortal(self, url, pagetype, params)
 
 def homebaseControl(self, request):
 	if request.REQUEST_METHOD == 'POST':
@@ -972,9 +944,9 @@
 	else:
 		pagetype = '0'
 
-	return homebasePortal(self, request, url, pagetype)
+	return homebasePortal(self, url, pagetype)
 
-def homebasePortal(self, request=None, url=None, pagetype=None, params=None):
+def homebasePortal(self, url=None, pagetype=None, params=None):
 	ret = {}
 	temp = list()
 	index = 0
@@ -990,7 +962,7 @@
 		if havePermAddStorage(self):
 			addSystem = {}
 			addSystem['Title'] = 'Add a System'
-			addSystem['absolute_url'] = url + '?pagetype=' + HOMEBASE_ADD_SYSTEM
+			addSystem['absolute_url'] = '%s?pagetype=%s' % (url, HOMEBASE_ADD_SYSTEM)
 			addSystem['Description'] = 'Add a system to the Luci storage management interface.'
 			if pagetype == HOMEBASE_ADD_SYSTEM:
 				cur = addSystem
@@ -1007,7 +979,7 @@
 		if havePermAddCluster(self):
 			addCluster = {}
 			addCluster['Title'] = 'Add an Existing Cluster'
-			addCluster['absolute_url'] = url + '?pagetype=' + HOMEBASE_ADD_CLUSTER_INITIAL
+			addCluster['absolute_url'] = '%s?pagetype=%s' % (url, HOMEBASE_ADD_CLUSTER_INITIAL)
 			addCluster['Description'] = 'Add an existing cluster to the Luci cluster management interface.'
 			if pagetype == HOMEBASE_ADD_CLUSTER_INITIAL or pagetype == HOMEBASE_ADD_CLUSTER:
 				addCluster['currentItem'] = True
@@ -1027,7 +999,7 @@
 		if (havePermRemStorage(self) and havePermRemCluster(self) and (getStorage(self) or getClusters(self))):
 			remSystem = {}
 			remSystem['Title'] = 'Manage Systems'
-			remSystem['absolute_url'] = url + '?pagetype=' + HOMEBASE_DEL_SYSTEM
+			remSystem['absolute_url'] = '%s?pagetype=%s' % (url, HOMEBASE_DEL_SYSTEM)
 			remSystem['Description'] = 'Update or remove storage systems and clusters.'
 			if pagetype == HOMEBASE_DEL_SYSTEM:
 				remSystem['currentItem'] = True
@@ -1037,7 +1009,8 @@
 				remSystem['currentItem'] = False
 			index += 1
 			temp.append(remSystem)
-	except: pass
+	except:
+		pass
 
 #
 # Add a Luci user.
@@ -1047,7 +1020,7 @@
 		if havePermAddUser(self):
 			addUser = {}
 			addUser['Title'] = 'Add a User'
-			addUser['absolute_url'] = url + '?pagetype=' + HOMEBASE_ADD_USER
+			addUser['absolute_url'] = '%s?pagetype=%s' % (url, HOMEBASE_ADD_USER)
 			addUser['Description'] = 'Add a user to the Luci interface.'
 			if pagetype == HOMEBASE_ADD_USER:
 				addUser['currentItem'] = True
@@ -1057,7 +1030,8 @@
 				addUser['currentItem'] = False
 			index += 1
 			temp.append(addUser)
-	except: pass
+	except:
+		pass
 
 #
 # Delete a Luci user
@@ -1067,7 +1041,7 @@
 		if (self.portal_membership.listMembers() and havePermDelUser(self)):
 			delUser = {}
 			delUser['Title'] = 'Delete a User'
-			delUser['absolute_url'] = url + '?pagetype=' + HOMEBASE_DEL_USER
+			delUser['absolute_url'] = '%s?pagetype=%s' % (url, HOMEBASE_DEL_USER)
 			delUser['Description'] = 'Delete a Luci user.'
 			if pagetype == HOMEBASE_DEL_USER:
 				delUser['currentItem'] = True
@@ -1086,7 +1060,7 @@
 		if (havePermEditPerms(self) and self.portal_membership.listMembers() and (getStorage(self) or getClusters(self))):
 			userPerm = {}
 			userPerm['Title'] = 'User Permissions'
-			userPerm['absolute_url'] = url + '?pagetype=' + HOMEBASE_PERMS
+			userPerm['absolute_url'] = '%s?pagetype=%s' % (url, HOMEBASE_PERMS)
 			userPerm['Description'] = 'Set permissions for Luci users.'
 			if pagetype == HOMEBASE_PERMS:
 				userPerm['currentItem'] = True
@@ -1102,9 +1076,13 @@
 		ret['curIndex'] = 0
 
 	if cur and 'absolute_url' in cur and params:
+		import cgi
 		cur['base_url'] = cur['absolute_url']
+		param_list = list()
 		for i in params:
-			cur['absolute_url'] += '&' + cgi.escape(i) + '=' + cgi.escape(params[i])
+			param_list.append('&%s=%s' % (cgi.escape(i), cgi.escape(params[i])))
+		new_url = '%s%s' % (cur['absolute_url'], ''.join(param_list))
+		cur['absolute_url'] = new_url
 	elif cur and 'absolute_url' in cur:
 		cur['base_url'] = cur['absolute_url']
 	else:
@@ -1114,111 +1092,9 @@
 	ret['children'] = temp
 	return ret
 
-def getClusterSystems(self, clusterName):
-	if isAdmin(self):
-		try:
-			return self.restrictedTraverse(CLUSTER_FOLDER_PATH + clusterName + '/objectItems')('Folder')
-		except Exception, e:
-			luci_log.debug_verbose('GCSy0: %s: %s' % (clusterName, str(e)))
-			return None
-
-	try:
-		i = getSecurityManager().getUser()
-		if not i:
-			raise Exception, 'security manager says no user'
-	except Exception, e:
-		luci_log.debug_verbose('GCSy1: %s: %s' % (clusterName, str(e)))
-		return None
-
-	try:
-		csystems = self.restrictedTraverse(CLUSTER_FOLDER_PATH + clusterName + '/objectItems')('Folder')
-		if not csystems or len(csystems) < 1:
-			return None
-	except Exception, e:
-		luci_log.debug_verbose('GCSy2: %s: %s' % (clusterName, str(e)))
-		return None
-
-	allowedCSystems = list()
-	for c in csystems:
-		try:
-			if i.has_role('View', c[1]):
-				allowedCSystems.append(c)
-		except Exception, e:
-			luci_log.debug_verbose('GCSy3: %s: %s: %s' \
-				% (clusterName, c[0], str(e)))
-
-	return allowedCSystems
-
-def getClusters(self):
-	if isAdmin(self):
-		try:
-			return self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/objectItems')('Folder')
-		except Exception, e:
-			luci_log.debug_verbose('GC0: %s' % str(e))
-			return None
-	try:
-		i = getSecurityManager().getUser()
-		if not i:
-			raise Exception, 'GSMGU failed'
-	except Exception, e:
-		luci_log.debug_verbose('GC1: %s' % str(e))
-		return None
-
-	try:
-		clusters = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/objectItems')('Folder')
-		if not clusters or len(clusters) < 1:
-			return None
-	except Exception, e:
-		luci_log.debug_verbose('GC2: %s' % str(e))
-		return None
-
-	allowedClusters = list()
-	for c in clusters:
-		try:
-			if i.has_role('View', c[1]):
-				allowedClusters.append(c)
-		except Exception, e:
-			luci_log.debug_verbose('GC3: %s: %s' % (c[0], str(e)))
-
-	return allowedClusters
-
-def getStorage(self):
-	if isAdmin(self):
-		try:
-			return self.restrictedTraverse(PLONE_ROOT + '/systems/storage/objectItems')('Folder')
-		except Exception, e:
-			luci_log.debug_verbose('GS0: %s' % str(e))
-			return None
-
-	try:
-		i = getSecurityManager().getUser()
-		if not i:
-			raise Exception, 'GSMGU failed'
-	except Exception, e:
-		luci_log.debug_verbose('GS1: %s' % str(e))
-		return None
-
-	try:
-		storage = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/objectItems')('Folder')
-		if not storage or len(storage) < 1:
-			return None
-	except Exception, e:
-		luci_log.debug_verbose('GS2: %s' % str(e))
-		return None
-
-	allowedStorage = list()
-	for s in storage:
-		try:
-			if i.has_role('View', s[1]):
-				allowedStorage.append(s)
-		except Exception, e:
-			luci_log.debug_verbose('GS3: %s' % str(e))
-
-	return allowedStorage
-
 def createSystem(self, host, passwd):
 	try:
-		dummy = self.restrictedTraverse(str(STORAGE_FOLDER_PATH + host)).objectItems()
+		dummy = self.restrictedTraverse('%s%s' % (STORAGE_FOLDER_PATH, host)).objectItems()
 		luci_log.debug_verbose('CS0: %s already exists' % host)
 		return 'Storage system %s is already managed' % host
 	except:
@@ -1249,7 +1125,7 @@
 		return 'Authentication for storage system %s failed' % host
 
 	try:
-		dummy = self.restrictedTraverse(str(STORAGE_FOLDER_PATH + host)).objectItems()
+		dummy = self.restrictedTraverse('%s%s' % (STORAGE_FOLDER_PATH, host)).objectItems()
 		luci_log.debug_verbose('CS4 %s already exists' % host)
 		return 'Storage system %s is already managed' % host
 	except:
@@ -1263,7 +1139,7 @@
 
 	try:
 		ssystem.manage_addFolder(host, '__luci__:system')
-		newSystem = self.restrictedTraverse(STORAGE_FOLDER_PATH + host)
+		newSystem = self.restrictedTraverse('%s%s' % (STORAGE_FOLDER_PATH, host))
 	except Exception, e:
 		luci_log.debug_verbose('CS6 %s: %s' % (host, str(e)))
 		return 'Unable to create DB entry for storage system %s' % host
@@ -1277,283 +1153,6 @@
 
 	return None
 
-def abortManageCluster(self, request):
-	pass
-
-def manageCluster(self, clusterName, node_list, cluster_os):
-	clusterName = str(clusterName)
-
-	try:
-		clusters = self.restrictedTraverse(CLUSTER_FOLDER_PATH)
-		if not clusters:
-			raise Exception, 'cannot find the cluster entry in the DB'
-	except Exception, e:
-		luci_log.debug_verbose('MC0: %s: %s' % (clusterName, str(e)))
-		return 'Unable to create cluster %s: the cluster directory is missing.' % clusterName
-
-	try:
-		newCluster = self.restrictedTraverse(CLUSTER_FOLDER_PATH + clusterName)
-		if newCluster:
-			luci_log.debug_verbose('MC1: cluster %s: already exists' % clusterName)
-			return 'A cluster named %s is already managed by Luci' % clusterName
-	except:
-		pass
-
-	try:
-		clusters.manage_addFolder(clusterName, '__luci__:cluster')
-		newCluster = self.restrictedTraverse(CLUSTER_FOLDER_PATH + clusterName)
-		if not newCluster:
-			raise Exception, 'unable to create the cluster DB entry for %s' % clusterName
-	except Exception, e:
-		luci_log.debug_verbose('MC2: %s: %s' % (clusterName, str(e)))
-		return 'Unable to create cluster %s: %s' % (clusterName, str(e))
-
-	try:
-		newCluster.manage_acquiredPermissions([])
-		newCluster.manage_role('View', ['Access Contents Information', 'View'])
-	except Exception, e:
-		luci_log.debug_verbose('MC3: %s: %s' % (clusterName, str(e)))
-		try:
-			clusters.manage_delObjects([clusterName])
-		except Exception, e:
-			luci_log.debug_verbose('MC4: %s: %s' % (clusterName, str(e)))
-		return 'Unable to set permissions on new cluster: %s: %s' % (clusterName, str(e))
-
-	try:
-		newCluster.manage_addProperty('cluster_os', cluster_os, 'string')
-	except Exception, e:
-		luci_log.debug_verbose('MC5: %s: %s: %s' \
-			% (clusterName, cluster_os, str(e)))
-
-	for i in node_list:
-		host = node_list[i]['host']
-
-		try:
-			newCluster.manage_addFolder(host, '__luci__:csystem:' + clusterName)
-			newSystem = self.restrictedTraverse(str(CLUSTER_FOLDER_PATH + clusterName + '/' + host))
-			if not newSystem:
-				raise Exception, 'unable to create cluster system DB entry for node %s' % host
-			newSystem.manage_acquiredPermissions([])
-			newSystem.manage_role('View', [ 'Access contents information' , 'View' ])
-		except Exception, e:
-			try:
-				clusters.manage_delObjects([clusterName])
-			except Exception, e:
-				luci_log.debug_verbose('MC6: %s: %s: %s' \
-					% (clusterName, host, str(e)))
-
-			luci_log.debug_verbose('MC7: %s: %s: %s' \
-				% (clusterName, host, str(e)))
-			return 'Unable to create cluster node %s for cluster %s: %s' \
-				% (host, clusterName, str(e))
-
-	try:
-		ssystem = self.restrictedTraverse(STORAGE_FOLDER_PATH)
-		if not ssystem:
-			raise Exception, 'The storage DB entry is missing'
-	except Exception, e:
-		luci_log.debug_verbose('MC8: %s: %s: %s' % (clusterName, host, str(e)))
-		return 'Error adding storage node %s: %s' % (host, str(e))
-
-	# Only add storage systems if the cluster and cluster node DB
-	# objects were added successfully.
-	for i in node_list:
-		host = node_list[i]['host']
-
-		try:
-			# It's already there, as a storage system, no problem.
-			dummy = self.restrictedTraverse(str(STORAGE_FOLDER_PATH + host)).objectItems()
-			continue
-		except:
-			pass
-
-		try:
-			ssystem.manage_addFolder(host, '__luci__:system')
-			newSystem = self.restrictedTraverse(STORAGE_FOLDER_PATH + host)
-			newSystem.manage_acquiredPermissions([])
-			newSystem.manage_role('View', [ 'Access contents information' , 'View' ])
-		except Exception, e:
-			luci_log.debug_verbose('MC9: %s: %s: %s' % (clusterName, host, str(e)))
-
-def createClusterSystems(self, clusterName, node_list):
-	try:
-		clusterObj = self.restrictedTraverse(CLUSTER_FOLDER_PATH + clusterName)
-		if not clusterObj:
-			raise Exception, 'cluster %s DB entry is missing' % clusterName
-	except Exception, e:
-		luci_log.debug_verbose('CCS0: %s: %s' % (clusterName, str(e)))
-		return 'No cluster named \"%s\" is managed by Luci' % clusterName
-
-	for x in node_list:
-		i = node_list[x]
-		host = str(i['host'])
-
-		try:
-			clusterObj.manage_addFolder(host, '__luci__:csystem:' + clusterName)
-		except Exception, e:
-			luci_log.debug_verbose('CCS0a: %s: %s: %s' % (clusterName, host, str(e)))
-		try:
-			newSystem = self.restrictedTraverse(CLUSTER_FOLDER_PATH + clusterName + '/' + host)
-			if not newSystem:
-				raise Exception, 'cluster node DB entry for %s disappeared from under us' % host
-					
-			newSystem.manage_acquiredPermissions([])
-			newSystem.manage_role('View', [ 'Access contents information' , 'View' ])
-		except Exception, e:
-			luci_log.debug_verbose('CCS1: %s: %s: %s' % (clusterName, host, str(e)))
-			return 'Unable to create cluster node %s for cluster %s: %s' \
-				% (host, clusterName, str(e))
-
-	try:
-		ssystem = self.restrictedTraverse(STORAGE_FOLDER_PATH)
-		if not ssystem:
-			raise Exception, 'storage DB entry is missing'
-	except Exception, e:
-		# This shouldn't fail, but if it does, it's harmless right now
-		luci_log.debug_verbose('CCS2: %s: %s' % (clusterName, host, str(e)))
-		return None
-
-	# Only add storage systems if the and cluster node DB
-	# objects were added successfully.
-	for x in node_list:
-		i = node_list[x]
-		host = str(i['host'])
-
-		try:
-			# It's already there, as a storage system, no problem.
-			dummy = self.restrictedTraverse(str(STORAGE_FOLDER_PATH + host)).objectItems()
-			continue
-		except:
-			pass
-
-		try:
-			ssystem.manage_addFolder(host, '__luci__:system')
-			newSystem = self.restrictedTraverse(STORAGE_FOLDER_PATH + host)
-			newSystem.manage_acquiredPermissions([])
-			newSystem.manage_role('View', [ 'Access contents information' , 'View' ])
-		except Exception, e:
-			luci_log.debug_verbose('CCS3: %s: %s' % (clusterName, host, str(e)))
-
-def delSystem(self, systemName):
-	try:
-		ssystem = self.restrictedTraverse(STORAGE_FOLDER_PATH)
-		if not ssystem:
-			raise Exception, 'storage DB entry is missing'
-	except Exception, e:
-		luci_log.debug_verbose('delSystem0: %s: %s' % (systemName, str(e)))
-		return 'Unable to find storage system %s: %s' % (systemName, str(e))
-
-	try:
-		rc = RicciCommunicator(systemName, enforce_trust=False)
-		if rc is None:
-			raise Exception, 'rc is None'
-	except Exception, e:
-		try:
-			ssystem.manage_delObjects([ systemName ])
-		except Exception, e:
-			luci_log.debug_verbose('delSystem1: %s: %s' % (systemName, str(e)))
-			return 'Unable to delete the storage system %s' % systemName
-		luci_log.debug_verbose('delSystem2: %s: %s' % (systemName, str(e)))
-		return
-
-	# Only unauthenticate if the system isn't a member of
-	# a managed cluster.
-	cluster_info = rc.cluster_info()
-	if not cluster_info:
-		cluster_name = None
-	elif not cluster_info[0]:
-		cluster_name = cluster_info[1]
-	else:
-		cluster_name = cluster_info[0]
-
-	unauth = False
-	if not cluster_name:
-		# If it's a member of no cluster, unauthenticate
-		unauth = True
-	else:
-		try:
-			dummy = self.restrictedTraverse(str(CLUSTER_FOLDER_PATH + cluster_name + '/' + systemName)).objectItems()
-		except Exception, e:
-			# It's not a member of a managed cluster, so unauthenticate.
-			unauth = True
-
-	if unauth is True:
-		try:
-			rc.unauth()
-		except:
-			pass
-
-	try:
-		ssystem.manage_delObjects([ systemName ])
-	except Exception, e:
-		luci_log.debug_verbose('delSystem3: %s: %s' % (systemName, str(e)))
-		return 'Unable to delete storage system %s: %s' \
-			% (systemName, str(e))
-
-def delCluster(self, clusterName):
-	try:
-		clusters = self.restrictedTraverse(CLUSTER_FOLDER_PATH)
-		if not clusters:
-			raise Exception, 'clusters DB entry is missing'
-	except Exception, e:
-		luci_log.debug_verbose('delCluster0: %s' % str(e))
-		return 'Unable to find cluster %s' % clusterName
-
-	err = delClusterSystems(self, clusterName)
-	if err:
-		return err
-
-	try:
-		clusters.manage_delObjects([ clusterName ])
-	except Exception, e:
-		luci_log.debug_verbose('delCluster1: %s' % str(e))
-		return 'Unable to delete cluster %s' % clusterName
-
-def delClusterSystem(self, cluster, systemName):
-	try:
-		dummy = self.restrictedTraverse(str(STORAGE_FOLDER_PATH + systemName)).objectItems()
-	except:
-		# It's not a storage system, so unauthenticate.
-		try:
-			rc = RicciCommunicator(systemName, enforce_trust=False)
-			rc.unauth()
-		except Exception, e:
-			luci_log.debug_verbose('delClusterSystem0: ricci error for %s: %s' \
-				% (systemName, str(e)))
-
-	try:
-		cluster.manage_delObjects([ systemName ])
-	except Exception, e:
-		err_str = 'Error deleting cluster object %s: %s' % (systemName, str(e))
-		luci_log.debug_verbose('delClusterSystem1: %s' % err_str)
-		return err_str
-
-def delClusterSystems(self, clusterName):
-	try:
-		cluster = self.restrictedTraverse(CLUSTER_FOLDER_PATH + clusterName)
-		if not cluster:
-			raise Exception, 'cluster DB entry is missing'
-
-		try:
-			csystems = getClusterSystems(self, clusterName)
-			if not csystems or len(csystems) < 1:
-				return None
-		except Exception, e:
-			luci_log.debug_verbose('delCluSystems0: %s' % str(e))
-			return None
-	except Exception, er:
-		luci_log.debug_verbose('delCluSystems1: error for %s: %s' \
-			% (clusterName, str(er)))
-		return str(er)
-
-	errors = ''
-	for i in csystems:
-		err = delClusterSystem(self, cluster, i[0])
-		if err:
-			errors += 'Unable to delete the cluster system %s: %s\n' % (i[0], err)
-			luci_log.debug_verbose('delCluSystems2: %s' % err)
-	return errors
-
 def getDefaultUser(self, request):
 	try:
 		user = request.form['userList']
@@ -1595,8 +1194,8 @@
 		perms[userName]['storage'] = {}
 
 		try:
-			clusters = self.restrictedTraverse(PLONE_ROOT + '/systems/cluster/objectItems')('Folder')
-			storage = self.restrictedTraverse(PLONE_ROOT + '/systems/storage/objectItems')('Folder')
+			clusters = self.restrictedTraverse('%s/systems/cluster/objectItems' % PLONE_ROOT)('Folder')
+			storage = self.restrictedTraverse('%s/systems/storage/objectItems' % PLONE_ROOT)('Folder')
 		except Exception, e:
 			luci_log.debug_verbose('getUserPerms1: user %s: %s' % (userName, str(e)))
 			continue
@@ -1607,7 +1206,6 @@
 			except Exception, e:
 				luci_log.debug_verbose('getUserPerms2: user %s, obj %s: %s' \
 					% (userName, c[0], str(e)))
-				continue
 				
 		for s in storage:
 			try:
@@ -1615,125 +1213,8 @@
 			except Exception, e:
 				luci_log.debug_verbose('getUserPerms2: user %s, obj %s: %s' \
 					% (userName, s[0], str(e)))
-				continue
-
 	return perms
 
-# In case we want to give access to non-admin users in the future
-
-def havePermCreateCluster(self):
-	return isAdmin(self)
-
-def havePermAddStorage(self):
-	return isAdmin(self)
-
-def havePermAddCluster(self):
-	return isAdmin(self)
-
-def havePermAddUser(self):
-	return isAdmin(self)
-
-def havePermDelUser(self):
-	return isAdmin(self)
-
-def havePermRemStorage(self):
-	return isAdmin(self) 
-
-def havePermRemCluster(self):
-	return isAdmin(self) 
-
-def havePermEditPerms(self):
-	return isAdmin(self) 
-
-def getClusterConfNodes(clusterConfDom):
-	cur = clusterConfDom
-	clusterNodes = list()
-
-	for i in cur.childNodes:
-		cur = i
-		if i.nodeName == 'clusternodes':
-			for i in cur.childNodes:
-				if i.nodeName == 'clusternode':
-					clusterNodes.append(i.getAttribute('name'))
-			return clusterNodes
-	return clusterNodes
-
-def getSystems(self):
-	storage = getStorage(self)
-	clusters = getClusters(self)
-	storageList = list()
-	ret = [{}, [], {}]
-
-	need_auth_hash = {}
-	for i in storage:
-		storageList.append(i[0])
-		if testNodeFlag(i[1], CLUSTER_NODE_NEED_AUTH) != False:
-			need_auth_hash[i[0]] = i[1]
-
-	chash = {}
-	for i in clusters:
-		csystems = getClusterSystems(self, i[0])
-		cslist = list()
-		for c in csystems:
-			if testNodeFlag(c[1], CLUSTER_NODE_NEED_AUTH) != False:
-				need_auth_hash[c[0]] = c[1]
-			cslist.append(c[0])
-		chash[i[0]] = cslist
-
-	ret[0] = chash
-	ret[1] = storageList
-	ret[2] = need_auth_hash
-	return ret
-
-def getClusterNode(self, nodename, clustername):
-	try:
-		cluster_node = self.restrictedTraverse(CLUSTER_FOLDER_PATH + str(clustername) + '/' + str(nodename))
-		if not cluster_node:
-			raise Exception, 'cluster node is none'
-		return cluster_node
-	except Exception, e:
-		luci_log.debug_verbose('getClusterNode0: %s %s: %s' \
-			% (nodename, clustername, str(e)))
-		return None
-
-def getStorageNode(self, nodename):
-	try:
-		storage_node = self.restrictedTraverse(STORAGE_FOLDER_PATH + str(nodename))
-		if not storage_node:
-			raise Exception, 'storage node is none'
-		return storage_node
-	except Exception, e:
-		luci_log.debug_verbose('getStorageNode0: %s: %s' % (nodename, str(e)))
-		return None
-
-def testNodeFlag(node, flag_mask):
-	try:
-		flags = node.getProperty('flags')
-		if flags is None:
-			return False
-		return flags & flag_mask != 0
-	except Exception, e:
-		luci_log.debug_verbose('testNodeFlag0: %s' % str(e))
-	return False
-
-def setNodeFlag(node, flag_mask):
-	try:
-		flags = node.getProperty('flags')
-		if flags is None:
-			flags = 0
-		node.manage_changeProperties({ 'flags': flags | flag_mask })
-	except:
-		try:
-			node.manage_addProperty('flags', flag_mask, 'int')
-		except Exception, e:
-			luci_log.debug_verbose('setNodeFlag0: %s' % str(e))
-
-def delNodeFlag(node, flag_mask):
-	try:
-		flags = node.getProperty('flags')
-		if flags is None:
-			return
-		if flags & flag_mask != 0:
-			node.manage_changeProperties({ 'flags': flags & ~flag_mask })
-	except Exception, e:
-		luci_log.debug_verbose('delNodeFlag0: %s' % str(e))
+def getClusterConfNodes(conf_dom):
+	cluster_nodes = conf_dom.getElementsByTagName('clusternode')
+	return map(lambda x: str(x.getAttribute('name')), cluster_nodes)
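
A quick standalone exercise of getClusterConfNodes above (the cluster.conf
fragment is invented for illustration):

	from xml.dom import minidom

	conf_dom = minidom.parseString(
		'<cluster name="demo" config_version="1">'
		'<clusternodes>'
		'<clusternode name="node1.example.com" nodeid="1"/>'
		'<clusternode name="node2.example.com" nodeid="2"/>'
		'</clusternodes>'
		'</cluster>')

	# getElementsByTagName() searches all descendants, so nesting depth
	# does not matter here.
	names = getClusterConfNodes(conf_dom)
	# names == ['node1.example.com', 'node2.example.com']
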
--- conga/luci/site/luci/Extensions/ricci_communicator.py	2007/02/12 20:24:28	1.25
+++ conga/luci/site/luci/Extensions/ricci_communicator.py	2007/05/03 20:16:38	1.25.2.1
@@ -24,8 +24,8 @@
         self.__timeout_short = 6
         self.__timeout_long  = 600
         
-        self.__privkey_file = CERTS_DIR_PATH + 'privkey.pem'
-        self.__cert_file = CERTS_DIR_PATH + 'cacert.pem'
+        self.__privkey_file = '%sprivkey.pem' % CERTS_DIR_PATH
+        self.__cert_file = '%scacert.pem' % CERTS_DIR_PATH
         
         try:
             self.ss = SSLSocket(self.__hostname,
@@ -152,7 +152,7 @@
         except:
             errstr = 'Error authenticating to host %s: %s' \
                         % (self.__hostname, str(ret))
-            luci_log.debug_verbose('RC:unauth2:' + errstr)
+            luci_log.debug_verbose('RC:unauth2: %s' % errstr)
             raise RicciError, errstr
         return True
 
@@ -212,31 +212,28 @@
                     batch_node = node.cloneNode(True)
         if batch_node == None:
             luci_log.debug_verbose('RC:PB4: batch node missing <batch/>')
-            raise RicciError, 'missing <batch/> in ricci\'s response from %s' \
+            raise RicciError, 'missing <batch/> in ricci\'s response from "%s"' \
                     % self.__hostname
 
         return batch_node
     
     def batch_run(self, batch_str, async=True):
         try:
-            batch_xml_str = '<?xml version="1.0" ?><batch>' + batch_str + '</batch>'
-            luci_log.debug_verbose('RC:BRun0: attempting batch \"%s\" for host %s' \
-                % (batch_xml_str, self.__hostname))
+            batch_xml_str = '<?xml version="1.0" ?><batch>%s</batch>' % batch_str
+            luci_log.debug_verbose('RC:BRun0: attempting batch "%s" for host "%s"' % (batch_xml_str, self.__hostname))
             batch_xml = minidom.parseString(batch_xml_str).firstChild
         except Exception, e:
-            luci_log.debug_verbose('RC:BRun1: received invalid batch XML for %s: \"%s\": %s' \
-                % (self.__hostname, batch_xml_str, str(e)))
+            luci_log.debug_verbose('RC:BRun1: received invalid batch XML for %s: "%s": "%s"' % (self.__hostname, batch_xml_str, str(e)))
             raise RicciError, 'batch XML is malformed'
 
         try:
             ricci_xml = self.process_batch(batch_xml, async)
             try:
-                luci_log.debug_verbose('RC:BRun2: received XML \"%s\" from host %s in response to batch command.' \
-                    % (ricci_xml.toxml(), self.__hostname))
+                luci_log.debug_verbose('RC:BRun2: received XML "%s" from host %s in response to batch command.' % (ricci_xml.toxml(), self.__hostname))
             except:
                 pass
         except:
-            luci_log.debug_verbose('RC:BRun3: An error occurred while trying to process the batch job: \"%s\"' % batch_xml_str)
+            luci_log.debug_verbose('RC:BRun3: An error occurred while trying to process the batch job: "%s"' % batch_xml_str)
             return None
 
         doc = minidom.Document()
@@ -244,8 +241,7 @@
         return doc
 
     def batch_report(self, batch_id):
-        luci_log.debug_verbose('RC:BRep0: [auth=%d] asking for batchid# %s for host %s' \
-            % (self.__authed, batch_id, self.__hostname))
+        luci_log.debug_verbose('RC:BRep0: [auth=%d] asking for batchid# %s for host %s' % (self.__authed, batch_id, self.__hostname))
 
         if not self.authed():
             raise RicciError, 'Not authenticated to host %s' % self.__hostname
@@ -282,19 +278,18 @@
     
     
     def __send(self, xml_doc, timeout):
-        buff = xml_doc.toxml() + '\n'
+        buff = '%s\n' % xml_doc.toxml()
         try:
             self.ss.send(buff, timeout)
         except Exception, e:
-            luci_log.debug_verbose('RC:send0: Error sending XML \"%s\" to %s: %s' \
-                                   % (buff, self.__hostname, str(e)))
+            luci_log.debug_verbose('RC:send0: Error sending XML "%s" to %s: %s' % (buff, self.__hostname, str(e)))
             raise RicciError, 'write error while sending XML to host %s' \
                   % self.__hostname
         except:
             raise RicciError, 'write error while sending XML to host %s' \
                   % self.__hostname
         try:
-            luci_log.debug_verbose('RC:send1: Sent XML \"%s\" to host %s' \
+            luci_log.debug_verbose('RC:send1: Sent XML "%s" to host %s' \
                 % (xml_doc.toxml(), self.__hostname))
         except:
             pass
@@ -311,14 +306,14 @@
             raise RicciError, 'Error reading data from host %s' % self.__hostname
         except:
             raise RicciError, 'Error reading data from host %s' % self.__hostname
-        luci_log.debug_verbose('RC:recv1: Received XML \"%s\" from host %s' \
+        luci_log.debug_verbose('RC:recv1: Received XML "%s" from host %s' \
             % (xml_in, self.__hostname))
 
         try:
             if doc == None:
                 doc = minidom.parseString(xml_in)
         except Exception, e:
-            luci_log.debug_verbose('RC:recv2: Error parsing XML \"%s" from %s' \
+            luci_log.debug_verbose('RC:recv2: Error parsing XML "%s": %s' \
                 % (xml_in, str(e)))
             raise RicciError, 'Error parsing XML from host %s: %s' \
                     % (self.__hostname, str(e))
@@ -330,9 +325,8 @@
         
         try:        
             if doc.firstChild.nodeName != 'ricci':
-                luci_log.debug_verbose('RC:recv3: Expecting \"ricci\" got XML \"%s\" from %s' %
-                    (xml_in, self.__hostname))
-                raise Exception, 'Expecting first XML child node to be \"ricci\"'
+                luci_log.debug_verbose('RC:recv3: Expecting "ricci" got XML "%s" from %s' % (xml_in, self.__hostname))
+                raise Exception, 'Expecting first XML child node to be "ricci"'
         except Exception, e:
             raise RicciError, 'Invalid XML ricci response from host %s' \
                     % self.__hostname
@@ -348,8 +342,7 @@
     try:
         return RicciCommunicator(hostname)
     except Exception, e:
-        luci_log.debug_verbose('RC:GRC0: Error creating a ricci connection to %s: %s' \
-            % (hostname, str(e)))
+        luci_log.debug_verbose('RC:GRC0: Error creating a ricci connection to %s: %s' % (hostname, str(e)))
         return None
     pass
 
@@ -418,8 +411,7 @@
                     last = last + 1
                     last = last - 2 * last
     try:
-        luci_log.debug_verbose('RC:BS1: Returning (%d, %d) for batch_status(\"%s\")' \
-            % (last, total, batch_xml.toxml()))
+        luci_log.debug_verbose('RC:BS1: Returning (%d, %d) for batch_status("%s")' % (last, total, batch_xml.toxml()))
     except:
         luci_log.debug_verbose('RC:BS2: Returning last, total')
 
@@ -447,7 +439,7 @@
 # * error_msg:  error message
 def extract_module_status(batch_xml, module_num=1):
     if batch_xml.nodeName != 'batch':
-        luci_log.debug_verbose('RC:EMS0: Expecting \"batch\" got \"%s\"' % batch_xml.toxml())
+        luci_log.debug_verbose('RC:EMS0: Expecting "batch" got "%s"' % batch_xml.toxml())
         raise RicciError, 'Invalid XML node; expecting a batch node'
 
     c = 0
@@ -491,5 +483,5 @@
                     elif status == '5':
                         return -103, 'module removed from schedule'
     
-    raise RicciError, str('no ' + str(module_num) + 'th module in the batch, or malformed response')
+    raise RicciError, 'no %dth module in the batch, or malformed response' % module_num
 
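The batch helpers above are typically used together: batch_run() wraps
ricci's <batch/> response in a new document, batch_status() reduces it
to a (last, total) pair (the sign flip above suggests a negative 'last'
marks a failed module), and extract_module_status() pulls out a single
module's code and message. A minimal polling sketch; the host name and
batch body are illustrative assumptions, not part of this change:

    from ricci_communicator import RicciCommunicator, batch_status

    rc = RicciCommunicator('node1.example.com')    # hypothetical host
    doc = rc.batch_run('<module name="reboot"/>', async=False)
    if doc is not None:
        last, total = batch_status(doc.firstChild)
        if last < 0:
            print 'module %d of %d failed' % (-last, total)
        elif last == total:
            print 'batch complete: %d of %d modules done' % (last, total)
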
--- conga/luci/site/luci/Extensions/ricci_defines.py	2006/05/30 20:17:21	1.1
+++ conga/luci/site/luci/Extensions/ricci_defines.py	2007/05/03 20:16:38	1.1.8.1
@@ -1,14 +1,11 @@
+REQUEST_TAG   = 'request'
+RESPONSE_TAG  = 'response'
 
+FUNC_CALL_TAG = 'function_call'
+FUNC_RESP_TAG = 'function_response'
+SEQUENCE_TAG  = 'sequence'
 
-REQUEST_TAG   ='request'
-RESPONSE_TAG  ='response'
-
-FUNC_CALL_TAG ="function_call"
-FUNC_RESP_TAG ="function_response"
-SEQUENCE_TAG  ='sequence'
-
-
-VARIABLE_TAG  ='var'
+VARIABLE_TAG  = 'var'
 
 VARIABLE_TYPE_INT        = 'int'
 VARIABLE_TYPE_INT_SEL    = 'int_select'
@@ -21,50 +18,46 @@
 VARIABLE_TYPE_LIST_STR   = 'list_str'
 VARIABLE_TYPE_LIST_XML   = 'list_xml'
 
-
 VARIABLE_TYPE_LISTENTRY  = 'listentry'
 VARIABLE_TYPE_FLOAT      = 'float'
 
 
-
-
-BD_TYPE = "block_device"
-BD_HD_TYPE = "hard_drive"
-BD_LV_TYPE = "logical_volume"
-BD_PARTITION_TYPE = "partition"
+BD_TYPE = 'block_device'
+BD_HD_TYPE = 'hard_drive'
+BD_LV_TYPE = 'logical_volume'
+BD_PARTITION_TYPE = 'partition'
 
 BD_TEMPLATE = 'block_device_template'
 
 
-
-MAPPER_TYPE           = "mapper"
-MAPPER_SYS_TYPE       = "hard_drives"
-MAPPER_VG_TYPE        = "volume_group"
-MAPPER_PT_TYPE        = "partition_table"
-MAPPER_MDRAID_TYPE    = "mdraid"
-MAPPER_ATARAID_TYPE   = "ataraid"
-MAPPER_MULTIPATH_TYPE = "multipath"
-MAPPER_CRYPTO_TYPE    = "crypto"
-MAPPER_iSCSI_TYPE     = "iSCSI"
-
-
-SYSTEM_PREFIX = MAPPER_SYS_TYPE + ":"
-VG_PREFIX     = MAPPER_VG_TYPE + ":"
-PT_PREFIX     = MAPPER_PT_TYPE + ":"
-MDRAID_PREFIX = MAPPER_MDRAID_TYPE + ':'
-
-
-MAPPER_SOURCES_TAG = "sources"
-MAPPER_TARGETS_TAG = "targets"
-MAPPER_MAPPINGS_TAG = "mappings"
-MAPPER_NEW_SOURCES_TAG = "new_sources"
-MAPPER_NEW_TARGETS_TAG = "new_targets"
+MAPPER_TYPE           = 'mapper'
+MAPPER_SYS_TYPE       = 'hard_drives'
+MAPPER_VG_TYPE        = 'volume_group'
+MAPPER_PT_TYPE        = 'partition_table'
+MAPPER_MDRAID_TYPE    = 'mdraid'
+MAPPER_ATARAID_TYPE   = 'ataraid'
+MAPPER_MULTIPATH_TYPE = 'multipath'
+MAPPER_CRYPTO_TYPE    = 'crypto'
+MAPPER_iSCSI_TYPE     = 'iSCSI'
+
+
+SYSTEM_PREFIX = '%s:' % MAPPER_SYS_TYPE
+VG_PREFIX     = '%s:' % MAPPER_VG_TYPE
+PT_PREFIX     = '%s:' % MAPPER_PT_TYPE
+MDRAID_PREFIX = '%s:' % MAPPER_MDRAID_TYPE
+
+
+MAPPER_SOURCES_TAG = 'sources'
+MAPPER_TARGETS_TAG = 'targets'
+MAPPER_MAPPINGS_TAG = 'mappings'
+MAPPER_NEW_SOURCES_TAG = 'new_sources'
+MAPPER_NEW_TARGETS_TAG = 'new_targets'
 
 
 
-CONTENT_TYPE = "content"
-CONTENT_FS_TYPE = "filesystem"
-CONTENT_NONE_TYPE = "none"
+CONTENT_TYPE = 'content'
+CONTENT_FS_TYPE = 'filesystem'
+CONTENT_NONE_TYPE = 'none'
 CONTENT_MS_TYPE = 'mapper_source'
 CONTENT_HIDDEN_TYPE = 'hidden'
 
@@ -75,7 +68,4 @@
 
 
 
-PROPS_TAG = "properties"
-
-
-
+PROPS_TAG = 'properties'
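
The *_PREFIX constants must keep the colon after the mapper type,
because mapper_id attributes take the form '<mapper type>:<path>' (see
the sr_id.replace() call in storage_adapters.py below). A quick sketch
with a hypothetical volume group id:

    MAPPER_VG_TYPE = 'volume_group'
    VG_PREFIX = '%s:' % MAPPER_VG_TYPE       # 'volume_group:'

    sr_id = 'volume_group:/dev/vg0'          # hypothetical mapper_id
    srname = sr_id.replace(VG_PREFIX, '').replace('/dev/', '')
    assert srname == 'vg0'
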
--- conga/luci/site/luci/Extensions/storage_adapters.py	2006/12/06 22:34:09	1.9
+++ conga/luci/site/luci/Extensions/storage_adapters.py	2007/05/03 20:16:38	1.9.4.1
@@ -44,7 +44,7 @@
   sdata = {}
   sdata['Title'] = "System List"
   sdata['cfg_type'] = "storages"
-  sdata['absolute_url'] = url + "?pagetype=" + STORAGESYS
+  sdata['absolute_url'] = "%s?pagetype=%s" % (url, STORAGESYS)
   sdata['Description'] = "Systems available for storage configuration"
   if pagetype == STORAGESYS or pagetype == '0':
     sdata['currentItem'] = True
@@ -56,7 +56,7 @@
     sdata['show_children'] = False
   
   
-  syslist= list()
+  syslist = list()
   if sdata['show_children']:
     #display_clusters = True
     display_clusters = False
@@ -97,11 +97,12 @@
   if 'nodes' in system_data:
     title = system_data['name']
     if system_data['alias'] != '':
-      title = system_data['alias'] + ' (' + title + ')'
-    ssys['Title'] = 'Cluster ' + title
+      title = '%s (%s)' % (system_data['alias'], title)
+    ssys['Title'] = 'Cluster %s' % title
     ssys['cfg_type'] = "storage"
-    ssys['absolute_url'] = url + '?' + PAGETYPE + '=' + CLUSTER_STORAGE + "&" + CLUNAME + "=" + system_data['name']
-    ssys['Description'] = "Configure shared storage of cluster " + title
+    ssys['absolute_url'] = '%s?%s=%s&%s=%s' \
+      % (url, PAGETYPE, CLUSTER_STORAGE, CLUNAME, system_data['name'])
+    ssys['Description'] = "Configure shared storage of cluster %s" % title
     ssys['currentItem'] = False
     ssys['show_children'] = True
     kids = []
@@ -117,8 +118,9 @@
       return
     ssys['Title'] = system_data['hostname']
     ssys['cfg_type'] = "storage"
-    ssys['absolute_url'] = url + '?' + PAGETYPE + '=' + STORAGE + "&" + STONAME + "=" + system_data['hostname']
-    ssys['Description'] = "Configure storage on " + system_data['hostname']
+    ssys['absolute_url'] = '%s?%s=%s&%s=%s' \
+      % (url, PAGETYPE, STORAGE, STONAME, system_data['hostname'])
+    ssys['Description'] = "Configure storage on %s" % system_data['hostname']
     
     if pagetype == STORAGE:
       if stoname == system_data['hostname']:
@@ -167,18 +169,18 @@
   
   
   buff, dummy1, dummy2 = get_pretty_mapper_info(mapper_type)
-  pretty_names = buff + 's'
-  pretty_names_desc = 'Manage ' + buff + 's'
-  pretty_name = buff
-  pretty_name_desc = 'Manage ' + buff
-  pretty_new_name = 'New ' + buff
-  pretty_new_name_desc = 'Create New ' + buff
+  pretty_names = '%ss' % buff
+  pretty_names_desc = 'Manage %ss' % buff
+  pretty_name_desc = 'Manage %s' % buff
+  pretty_new_name = 'New %s' % buff
+  pretty_new_name_desc = 'Create New %s' % buff
   
   
   srs_p = {}
   srs_p['Title'] = pretty_names
   srs_p['cfg_type'] = "nodes"
-  srs_p['absolute_url'] = url + '?' + PAGETYPE + '=' + VIEW_MAPPERS + '&' + STONAME + '=' + hostname + '&' + PT_MAPPER_TYPE + '=' + mapper_type
+  srs_p['absolute_url'] = '%s?%s=%s&%s=%s&%s=%s' \
+    % (url, PAGETYPE, VIEW_MAPPERS, STONAME, hostname, PT_MAPPER_TYPE, mapper_type)
   srs_p['Description'] = pretty_names_desc
   if (pagetype_req == VIEW_MAPPERS or pagetype_req == VIEW_MAPPER or pagetype_req == ADD_SOURCES or pagetype_req == CREATE_MAPPER or pagetype_req == VIEW_BD) and mapper_type_req == mapper_type:
     srs_p['show_children'] = True
@@ -196,7 +198,8 @@
     sr = {}
     sr['Title'] = pretty_new_name
     sr['cfg_type'] = "nodes"
-    sr['absolute_url'] = url + '?' + PAGETYPE + '=' + CREATE_MAPPER + '&' + STONAME + '=' + hostname + '&' + PT_MAPPER_TYPE + '=' + mapper_type
+    sr['absolute_url'] = '%s?%s=%s&%s=%s&%s=%s' \
+      % (url, PAGETYPE, CREATE_MAPPER, STONAME, hostname, PT_MAPPER_TYPE, mapper_type)
     sr['Description'] = pretty_new_name_desc
     sr['show_children'] = False
     
@@ -210,7 +213,7 @@
   # existing mappers
   for sr_xml in mapper_list:
     sr_id = sr_xml.getAttribute('mapper_id')
-    srname = sr_id.replace(mapper_type + ':', '').replace('/dev/', '')
+    srname = sr_id.replace('%s:' % mapper_type, '').replace('/dev/', '')
     
     if srname == '' and mapper_type == MAPPER_VG_TYPE and sr_id == VG_PREFIX:
       #srname = 'Uninitialized PVs'
@@ -219,7 +222,8 @@
     sr = {}
     sr['Title'] = srname
     sr['cfg_type'] = "nodes"
-    sr['absolute_url'] = url + '?' + PAGETYPE + '=' + VIEW_MAPPER + '&' + STONAME + '=' + hostname + '&' + PT_MAPPER_TYPE + '=' + mapper_type + '&' + PT_MAPPER_ID + '=' + sr_id
+    sr['absolute_url'] = '%s?%s=%s&%s=%s&%s=%s&%s=%s' \
+      % (url, PAGETYPE, VIEW_MAPPER, STONAME, hostname, PT_MAPPER_TYPE, mapper_type, PT_MAPPER_ID, sr_id)
     sr['Description'] = pretty_name_desc
     
     if (pagetype_req == VIEW_MAPPER or pagetype_req == ADD_SOURCES or pagetype_req == VIEW_BD) and mapper_id_req == sr_id:
@@ -238,7 +242,8 @@
       tg = {}
       tg['Title'] = tgname
       tg['cfg_type'] = "nodes"
-      tg['absolute_url'] = url + '?' + PAGETYPE + '=' + VIEW_BD + '&' + STONAME + '=' + hostname + '&' + PT_MAPPER_TYPE + '=' + mapper_type + '&' + PT_MAPPER_ID + '=' + sr_id + '&' + PT_PATH + '=' + tg_path
+      tg['absolute_url'] = '%s?%s=%s&%s=%s&%s=%s&%s=%s&%s=%s' \
+        % (url, PAGETYPE, VIEW_BD, STONAME, hostname, PT_MAPPER_TYPE, mapper_type, PT_MAPPER_ID, sr_id, PT_PATH, tg_path)
       tg['Description'] = tgname
       tg['show_children'] = False
       
@@ -294,8 +299,9 @@
   hds_p = {}
   hds_p['Title'] = hds_pretty_name
   hds_p['cfg_type'] = "nodes"
-  hds_p['absolute_url'] = url + '?' + PAGETYPE + '=' + VIEW_BDS + '&' + STONAME + '=' + hostname + '&' + PT_MAPPER_TYPE + '=' + MAPPER_SYS_TYPE + '&' + PT_MAPPER_ID + '=' + SYSTEM_PREFIX
-  hds_p['Description'] = "Manage " + hds_pretty_name
+  hds_p['absolute_url'] = '%s?%s=%s&%s=%s&%s=%s&%s=%s' \
+    % (url, PAGETYPE, VIEW_BDS, STONAME, hostname, PT_MAPPER_TYPE, MAPPER_SYS_TYPE, PT_MAPPER_ID, SYSTEM_PREFIX)
+  hds_p['Description'] = "Manage %s" % hds_pretty_name
   if (pagetype == VIEW_BDS or pagetype == VIEW_BD) and mapper_type == MAPPER_SYS_TYPE:
     hds_p['show_children'] = True
   else:
@@ -315,8 +321,9 @@
     hd = {}
     hd['Title'] = hd_path.replace('/dev/', '')
     hd['cfg_type'] = "nodes"
-    hd['absolute_url'] = url + '?' + PAGETYPE + '=' + VIEW_BD + '&' + STONAME + '=' + hostname + '&' + PT_MAPPER_TYPE + '=' + MAPPER_SYS_TYPE + '&' + PT_MAPPER_ID + '=' + sys_id + '&' + PT_PATH + '=' + hd_path
-    hd['Description'] = 'Manage ' + hd_pretty_name
+    hd['absolute_url'] = '%s?%s=%s&%s=%s&%s=%s&%s=%s&%s=%s' \
+      % (url, PAGETYPE, VIEW_BD, STONAME, hostname, PT_MAPPER_TYPE, MAPPER_SYS_TYPE, PT_MAPPER_ID, sys_id, PT_PATH, hd_path)
+    hd['Description'] = 'Manage %s' % hd_pretty_name
     hd['show_children'] = False
     
     if pagetype == VIEW_BD and mapper_id == sys_id and path == hd_path:
@@ -337,18 +344,18 @@
   mappers_dir = storage_report.get_mappers_dir()
   mapper_templs_dir = storage_report.get_mapper_temps_dir()
   glo_dir = {}
-  for type in mappers_dir:
-    glo_dir[type] = [mappers_dir[type], []]
-  for type in mapper_templs_dir:
-    if type not in glo_dir:
-      glo_dir[type] = [[], mapper_templs_dir[type]]
+  for cur_type in mappers_dir:
+    glo_dir[cur_type] = [mappers_dir[cur_type], []]
+  for cur_type in mapper_templs_dir:
+    if cur_type not in glo_dir:
+      glo_dir[cur_type] = [[], mapper_templs_dir[cur_type]]
     else:
-      glo_dir[type][1] = mapper_templs_dir[type]
+      glo_dir[cur_type][1] = mapper_templs_dir[cur_type]
   
-  for type in glo_dir:
-    if type == MAPPER_SYS_TYPE:
+  for cur_type in glo_dir:
+    if cur_type == MAPPER_SYS_TYPE:
       continue
-    item = create_mapper_subitem(storage_report, request, glo_dir[type][0], glo_dir[type][1])
+    item = create_mapper_subitem(storage_report, request, glo_dir[cur_type][0], glo_dir[cur_type][1])
     if item == None:
       continue
     else:
@@ -362,11 +369,7 @@
 def getStorageURL(self, request, hostname):
   # return URL to manage this storage system
   try:
-    url = request['URL']
+    baseurl = request['URL']
   except KeyError, e:
-    url = "."
-  
-  url += '?' + PAGETYPE + '=' + str(STORAGE)
-  url += '&' + STONAME + '=' + hostname
-  return url
-
+    baseurl = "."
+  return '%s?%s=%s&%s=%s' % (baseurl, PAGETYPE, str(STORAGE), STONAME, hostname)
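
getStorageURL() now assembles the entire query string in one formatting
pass instead of chained concatenations. A sketch of the resulting URL;
the constant values here are illustrative assumptions, the real ones
live in the conga constants modules:

    PAGETYPE, STONAME = 'pagetype', 'storagename'   # hypothetical values
    STORAGE = 44                                    # hypothetical value

    baseurl = '/luci/storage'
    url = '%s?%s=%s&%s=%s' % (baseurl, PAGETYPE, str(STORAGE),
                              STONAME, 'node1.example.com')
    # url == '/luci/storage?pagetype=44&storagename=node1.example.com'
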
--- conga/luci/site/luci/Extensions/system_adapters.py	2007/02/24 07:02:42	1.2
+++ conga/luci/site/luci/Extensions/system_adapters.py	2007/05/03 20:16:38	1.2.2.1
@@ -1,5 +1,5 @@
 from ricci_communicator import RicciCommunicator
-from ricci_bridge import list_services, updateServices, svc_manage
+from RicciQueries import list_services, updateServices, svc_manage
 from LuciSyslog import LuciSyslog
 from xml.dom import minidom
 
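The only change system_adapters.py needs is its import line: the
helpers formerly imported from ricci_bridge now live in the new
RicciQueries module, so call sites stay as they were. A sketch,
assuming list_services kept its signature:

    from ricci_communicator import RicciCommunicator
    from RicciQueries import list_services

    rc = RicciCommunicator('node1.example.com')    # hypothetical host
    svc_xml = list_services(rc)                    # same call as before
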




More information about the Cluster-devel mailing list