[RFC, PATCH] Build multiple srpms

Michael_E_Brown at Dell.com
Wed May 10 21:32:58 UTC 2006


Thanks for the comments. Responses below.

> -----Original Message-----
> From: fedora-buildsys-list-bounces at redhat.com 
> [mailto:fedora-buildsys-list-bounces at redhat.com] On Behalf Of 
> Clark Williams
> Sent: Wednesday, May 10, 2006 3:59 PM
> To: Discussion of Fedora build system
> Subject: Re: [RFC, PATCH] Build multiple srpms
> 
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
> 
> Michael_E_Brown at Dell.com wrote:
> > [RFC, PATCH] Build multiple srpms
> >
> > Here is a patch for review. I have made the "obvious" changes in
> > order to be able to do the following:
> >         mock -r CFG  srpm1 srpm2 srpm3 srpm4
> >
> 
> I like the patch.  It doesn't change the currently expected 
> behavior with one srpm, just adds the ability to build 
> multiple srpms from one chroot.

Exactly my intent.
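For context, the shape of the change is just a loop over the SRPM
arguments. A minimal sketch, with hypothetical function names rather
than the actual mock code:

```python
import sys

def do_build(config, srpm):
    """Hypothetical stand-in for mock's per-SRPM build step.

    Returns 0 on success, non-zero on failure."""
    print("building %s in the %s chroot" % (srpm, config))
    return 0

def main(config, srpms):
    # Prep the chroot once, then build each SRPM in turn; the first
    # failure stops the whole run (the patch's current behavior).
    for srpm in srpms:
        rc = do_build(config, srpm)
        if rc != 0:
            sys.exit(rc)

main("CFG", ["srpm1", "srpm2", "srpm3", "srpm4"])
```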

> 
> Have you tried it with plague? If not, I'll put your version 
> on my local plague server and see if it causes any grief 
> (don't expect it to).

I did not see an *easy* way to set up a plague server, so we do not have
one yet.

> >
> > Limitations:
> > - There are no checks for dependencies between the srpms, so they 
> > should be independent. (no re-ordering of SRPMS)
> > - Building one SRPM can leave the buildroot in an inconsistent
> > state for the next SRPM. (no clean of chroot between builds)
> >
> I don't have a problem with this. I'm not sure I buy the 
> argument that we need to do a clean of the chroot every time. 
> Partially that's because I do a lot of cross-tools stuff 
> which requires that I keep a chroot around for multiple 
> builds. But even discounting that, I don't see what building 
> an srpm in a chroot can do that will corrupt the chroot so 
> that a subsequent build will fail or be incorrect. Mostly 
> you're in there because you want a particular set of binaries 
> (programs and libraries). Once those are installed, who cares 
> if the rpm database gets trashed or the passwd file has some 
> crufty entries in it?


My thoughts as well. :)


> 
> > - Failure to build one SRPM stops the whole process.
> >
> 
> I'm not sure that I would consider the "failure stops 
> everything" a limitation, since it saves you having to dig 
> through tons of log file entries to find where the failure 
> occurred (I never liked that make option anyway :)). You 
> could probably get away with removing the
> sys.exit() in the for loop, but then you'd have to remember 
> the exit status, etc.


Same here. 
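That said, the bookkeeping Clark mentions is small. A hedged sketch of
a continue-on-failure variant, where `builder` is a hypothetical
callable standing in for the real build step:

```python
def build_all(config, srpms, builder):
    """Build every SRPM even when one fails, remembering the exit status.

    `builder` is a hypothetical callable returning 0 on success."""
    exit_status = 0
    failed = []
    for srpm in srpms:
        rc = builder(config, srpm)
        if rc != 0:
            # Remember the failure instead of calling sys.exit() here.
            exit_status = rc
            failed.append(srpm)
    return exit_status, failed

# Example: the second SRPM fails, but the third still gets built.
rc, failed = build_all(
    "CFG",
    ["a.src.rpm", "b.src.rpm", "c.src.rpm"],
    lambda cfg, srpm: 1 if srpm == "b.src.rpm" else 0,
)
```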


> 
> > - Resulting RPMs are not installed into the build environment for
> > use by subsequent SRPMS.
> >
> > Reasoning:
> >         One problem I have been having with mock is speed. We are
> > using mock to build for 16 distinct configurations. Doing prep on
> > each configuration was taking a minute and a half to two minutes.
> > As an optimization, our mock wrapper script was doing a single prep
> > per chroot and then using --no-clean, but this complicates our
> > wrapper script. I am building a set of related RPMS, so I don't
> > have much concern about cross-pollution. This simple patch
> > simplifies my wrapper script considerably.
> >
> 
> What sort of speed improvement are you seeing now?

None, yet. Right now, this patch really just serves to make the rest of
my scripting simpler. I believe the speed boost will come in stage 3.
This patch is stage 1, and the patch I just sent is stage 2. In stage 3,
I would like to overlap prep() of one build root with building in
another.
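The stage-3 idea can be sketched as a producer/consumer pair: one
thread preps the next buildroot while the main thread builds in the
current one. Here `prep` and `build` are hypothetical callables, not
mock's real API:

```python
import queue
import threading

def pipeline(configs, prep, build):
    """Overlap prep of the next buildroot with the build in the current one.

    `prep` and `build` are hypothetical callables taking a config name."""
    ready = queue.Queue(maxsize=1)  # at most one prepped root waiting

    def prepper():
        for cfg in configs:
            prep(cfg)        # expensive: populate the chroot
            ready.put(cfg)   # hand the prepped root to the builder
        ready.put(None)      # sentinel: no more roots coming

    t = threading.Thread(target=prepper)
    t.start()
    built = []
    while True:
        cfg = ready.get()
        if cfg is None:
            break
        build(cfg)           # runs while the next prep proceeds in parallel
        built.append(cfg)
    t.join()
    return built
```

With 16 configurations and a prep cost of one to two minutes each, this
overlap is where most of the hoped-for savings would come from.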

> 
> > Future:
> >         One future direction I'd like to take this is a parallel
> > mock that can prep/build multiple configurations at the same time
> > to try to amortize the cost of prep stage by running prep of one
> > environment in parallel with build of a different configuration.
> > Currently, it is taking me an hour to build 16 sets of RPMS (5
> > SRPMS per set), and I am hoping to get this down. We have already
> > implemented squid and some other measures to try to speed things up.
> >
> You could probably achieve the same result by using plague 
> and invoking multiple plague-client builds from your wrapper 
> script. You wouldn't be able to "pipeline" the builds (i.e. 
> wait for build1 to reach build state, then start build2), but 
> this will be simpler.
> Certainly simpler than threading mock.

Plague looked a bit complicated to set up when I last looked at it. I
will probably look at it again soon.
--
Michael



