extras-buildsys-temp/automation2 ChangeLog, NONE, 1.1 README, NONE, 1.1

Daniel Williams (dcbw) fedora-extras-commits at redhat.com
Thu May 12 02:31:39 UTC 2005


Author: dcbw

Update of /cvs/fedora/extras-buildsys-temp/automation2
In directory cvs-int.fedora.redhat.com:/tmp/cvs-serv8563

Added Files:
	ChangeLog README 
Log Message:
2005-05-11  Dan Williams  <dcbw at redhat.com>

    * Add README file explaining stuff about build system architecture




--- NEW FILE ChangeLog ---
2005-05-11  Dan Williams  <dcbw at redhat.com>

    * Add README file explaining stuff about build system architecture


--- NEW FILE README ---
Fedora Extras Build System

The build system is composed of a single build server and multiple build clients.  Both the clients and the server must be on the same LAN, or at least have access to the same shared storage.  Clients run an XMLRPC server to which the build server delivers build jobs.  The build server runs an XMLRPC server of its own to allow submission of jobs, and to allow retrieval of basic status information about both the clients and the build system as a whole.
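
For illustration, submitting a job over XMLRPC might look something like the sketch below.  The host, port, and method names here are assumptions, not the real interface; see bm_server.py for that.

    import xmlrpclib

    # connect to the build server's XMLRPC server (URL is an assumption)
    server = xmlrpclib.ServerProxy("http://buildserver.example.com:8887")

    # hypothetical enqueue method: user, package, CVS tag
    job_id = server.enqueue("dcbw", "foo", "FC-4")

    # hypothetical status method
    print server.status(job_id)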

The Build Client (ArchWelder):
------------------------------------------

usage: archwelder.py <address> <architectures>
e.g. : archwelder.py localhost sparc sparcv9 sparcv8

Currently, ArchWelders are limited to building one job at a time; this limitation may be removed in the future.  They do not queue pending jobs, but instead reject build requests while something is already building.  For now, the build server is expected to queue and manage jobs, and to serialize requests to build clients, though this too may change.
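
A minimal sketch of that policy follows; the class name matches the description above, but the method and helper names are invented:

    class XMLRPCArchWelderServer:
        def __init__(self):
            self._cur_job = None    # at most one job at a time

        def start_job(self, uniqid, srpm_url, target_arch):
            if self._cur_job is not None:
                return False        # busy; the build server must retry later
            # make_arch_job() stands in for whatever creates i386Arch,
            # PPCArch, etc. for the requested architecture
            self._cur_job = make_arch_job(target_arch, uniqid, srpm_url)
            return True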

main()
  `- Creates: XMLRPCArchWelderServer
               `- Creates: i386Arch, x86_64Arch, PPCArch, etc

The client creates an XMLRPC server object (XMLRPCArchWelderServer), and then processes requests in an infinite loop.  Every so often (currently every 5 seconds) the server allows each build job that is still in progress to update its status and perform work.  The XMLRPCArchWelderServer keeps a list of local, architecture-specific build jobs, and forwards requests and commands to the appropriate job, keyed off a unique id.
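
That loop might be shaped roughly like this; the xmlserver object and its jobs_by_uniqid map are stand-ins for whatever archwelder.py actually uses:

    import select

    while True:
        # answer any pending XMLRPC requests, waiting at most 5 seconds
        ready, _, _ = select.select([xmlserver.socket], [], [], 5.0)
        if ready:
            xmlserver.handle_request()
        # let every in-progress build job update its status and do work
        for job in xmlserver.jobs_by_uniqid.values():
            job.process()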

Each build job (ArchWelderMach and its architecture-specific subclasses like i386Arch) has a number of states that directly correspond to the actions 'mach' must take to build the package.  Each time the job is given time to process (by calling ArchWelderMach.process(), which in turn is called from XMLRPCArchWelderServer._process()), it checks its state and advances to the next state when the previous one is complete.  Communication with mach, and retrieval of status from it, are done with popen2.Popen4() so that mach does not block the XMLRPC server from talking to the build server.
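
As a rough sketch of that pattern (the state names and the mach command line are made up for illustration; only the popen2.Popen4()/poll() usage is taken from the description above):

    import popen2

    class ArchWelderMach:
        def process(self):
            if self._state == "building" and self._pobj is None:
                # start mach without blocking the XMLRPC server
                cmd = "mach -r %s rebuild %s" % (self._root, self._srpm)
                self._pobj = popen2.Popen4(cmd)
            elif self._pobj is not None:
                status = self._pobj.poll()
                if status != -1:        # -1 means mach is still running
                    self._pobj = None
                    self._advance_state(status)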

The Build Server:
------------------------------------------

usage: bm_server.py

The build server runs two threads.  The first, the XMLRPC server (XMLRPCBuildMaster class), accepts requests to enqueue jobs for building and stores them in an sqlite database containing all job details.  The second, the Build Master (BuildMaster class), pulls 'waiting' jobs from the database and builds them.  A third top-level object, the ArchWelderManager, runs in the same thread as the Build Master and keeps track of build clients (ArchWelders) and their status.
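
The enqueue path might look something like this sketch, written against Python's sqlite3 module for illustration; the table and column names are assumptions:

    import sqlite3

    class XMLRPCBuildMaster:
        def __init__(self, dbpath):
            self._dbcx = sqlite3.connect(dbpath)

        def enqueue(self, username, package, cvs_tag):
            curs = self._dbcx.cursor()
            curs.execute("INSERT INTO jobs (username, package, cvs_tag,"
                         " status) VALUES (?, ?, ?, 'waiting')",
                         (username, package, cvs_tag))
            self._dbcx.commit()
            return curs.lastrowid    # job id handed back to the submitter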

main()
  |- Creates: XMLRPCBuildMaster
  |- Creates: ArchWelderManager
  |            `- Creates: ArchWelderInstance (one for each arch on each ArchWelder)
  |                          `- Creates: ArchWelderJob (one for each build job on each arch)
  `- Creates: BuildMaster
                `- Creates: BuildJob (one for each build job)

The ArchWelderManager object serves as a central location for all tracking and status information about each build job on each arch.  It creates an ArchWelderInstance for each supported architecture of each build client (i.e., each ArchWelder).  The ArchWelderInstance keeps track of the specific jobs building on that single architecture of that single build client.  It also serves as the XMLRPC client of the ArchWelder on the remote build client, proxying status information from it.

BuildJobs must request that the ArchWelderManager create a new ArchWelderJob for each architecture the BuildJob needs to build on.  If an ArchWelder is available (ArchWelders only build one job at a time across all the arches they support), the ArchWelderManager passes the request to the arch-specific ArchWelderInstance, which creates the new arch-specific ArchWelderJob and passes it back through the ArchWelderManager to the parent BuildJob.  If no ArchWelder is available for the request, the BuildJob must periodically re-issue the build request to the ArchWelderManager.
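
A sketch of that retry pattern (the method names are assumptions):

    class BuildJob:
        def _start_arch_jobs(self):
            for arch in self._archs:
                if arch in self._arch_jobs:
                    continue            # already building on this arch
                # returns None when no ArchWelder is free for this arch
                job = self._manager.request_job(self._uniqid, self._srpm, arch)
                if job is not None:
                    self._arch_jobs[arch] = job
                # otherwise, try again on the next processing pass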

ArchWelderManager has a periodic processing routine that is called from the BuildMaster thread.  This routine calls ArchWelderInstance.process() on each ArchWelderInstance, which in turn updates its view of the remote build client's (ArchWelder's) status.  Thus the ArchWelderManager, through each ArchWelderInstance, knows the status of, and the job currently building on, each remote build client.
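
In sketch form (the port number and the remote status method are assumptions):

    import xmlrpclib

    class ArchWelderInstance:
        def __init__(self, address, arch):
            self._arch = arch
            self._proxy = xmlrpclib.ServerProxy("http://%s:8888" % address)
            self._status = None

        def process(self):
            # one XMLRPC round trip refreshes our view of the remote client
            self._status = self._proxy.status()

    class ArchWelderManager:
        def process(self):
            for instance in self._instances:
                instance.process()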

BuildJobs track a single SRPM build through the entire build system.  They are created from the BuildMaster thread whenever the BuildMaster finds a job entry in the sqlite database with the status of 'waiting'.  BuildJobs proceed through a number of states: "initialize", "checkout", "make_srpm", "prep", "building", "finished", "cleanup", "failed", and "needsign".

Flow goes like this:

initialize => checkout
checkout => make_srpm
make_srpm => prep
prep => building
building
    - All build jobs finished or failed? => finished
    - otherwise => building
finished => cleanup
cleanup
    - failed jobs? => failed
    - otherwise => needsign

The BuildJob updates its status when it is periodically told to do so by the BuildMaster.  At that point it advances to the next state, or spawns build jobs that have not yet started if ArchWelders for those architectures have become available.  It stays in the "building" state until all per-arch jobs have been spawned and each has either completed or failed.
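
A condensed sketch of that state machine (the helper methods are invented, but the transitions mirror the flow above):

    class BuildJob:
        def process(self):
            linear = {"initialize": "checkout", "checkout": "make_srpm",
                      "make_srpm": "prep", "prep": "building"}
            if self._state in linear:
                self._do_stage(self._state)      # cvs checkout, make srpm, ...
                self._state = linear[self._state]
            elif self._state == "building":
                self._start_arch_jobs()          # spawn any unstarted archs
                if self._all_jobs_done():
                    self._state = "finished"
            elif self._state == "finished":
                self._state = "cleanup"
            elif self._state == "cleanup":
                self._do_cleanup()
                if self._any_failed():
                    self._state = "failed"
                else:
                    self._state = "needsign"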





