From skvidal at phy.duke.edu Mon May 2 08:41:01 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Mon, 02 May 2005 04:41:01 -0400 Subject: new automation stuff Message-ID: <1115023261.3781.15.camel@cutter> Hey folks, I worked a fair bit on the build system automation this weekend. It's not complete but it's better than it was: http://linux.duke.edu/~skvidal/misc/buildsys/ it's two files now - archwelder.py and buildingsucks.py archwelder keeps all the classes for building via mach for any given arch, the xml-rpc communication class for the above and the xmlrpc server side to run on the build hosts. buildingsucks contains the 'tobuild' monitoring class and the queuer for the builds. I've tested the xmlrpc communication for building using the python shell as the 'client'. It looks like it works. I'm sure there are some unfun bugs I've not noticed but it's an interesting place to work from. Structure: buildingsucks sets up a monitor instance which checks out and parses extras/common/tobuild from cvs. Then it starts up a queuer instance for each package to be built. The queuer checks the package out of cvs, makes the srpm and starts up jobs for building. It uses the archwelder classes to do this. The queuer doesn't care what archwelder instance it has or where the build is run as long as it can access the same methods for any archwelder class. The archwelder classes expect things to be in a known location as does the queuer b/c it moves the files around. packages are handled in stages: active = being built/handled needsign = built successfully but need to be signed failed = did not succeed to build on any/all arches success = succeeded and signed. - we actually don't care about this one The code is not the most heinous thing in the world but there is a lot of room for configuration to be extracted from the code (hah) and improvement. Known Bugs: I'm having trouble getting the log output back across the xmlrpc connection. I think it has something to do with some goofball characters in the log output that the handler is retching at. That'll get sorted soon enough. -sv From sopwith at redhat.com Mon May 2 16:14:31 2005 From: sopwith at redhat.com (Elliot Lee) Date: Mon, 2 May 2005 12:14:31 -0400 (EDT) Subject: new automation stuff In-Reply-To: <1115023261.3781.15.camel@cutter> References: <1115023261.3781.15.camel@cutter> Message-ID: On Mon, 2 May 2005, seth vidal wrote: > I worked a fair bit on the build system automation this weekend. It's > not complete but it's better than it was: > http://linux.duke.edu/~skvidal/misc/buildsys/ Rock on! > The code is not the most heinous thing in the world but there is a lot > of room for configuration to be extracted from the code (hah) and > improvement. > > Known Bugs: > I'm having trouble getting the log output back across the xmlrpc > connection. I think it has something to do with some goofball characters > in the log output that the handler is retching at. That'll get sorted > soon enough. binhex the log messages before sending them? -- Elliot From skvidal at phy.duke.edu Wed May 4 04:40:58 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Wed, 04 May 2005 00:40:58 -0400 Subject: buildsystem stuff Message-ID: <1115181658.15831.45.camel@cutter> Hey folks, I've been doing a lot of tests today and I have some good news to report. 1. the xml-rpc communication is working pretty well. We can spawn builds out to different hosts than the queuer and get feedback on what broke in the build and/or why. 2. 
I've cleaned up the code and on my set of 2 systems (x86_64 and ppc) I can build for all 3 architectures w/o having to manually run anything. You can see the code here: http://linux.duke.edu/~skvidal/misc/buildsys/ Gist of how it is used: The queuer runs, figures out what needs to be built by getting the list from the /cvs/extras/common/tobuild file. It preps for the build by: - checking out the tag from cvs - making all the necessary dirs (name/v-r/a) - making the srpm Then it dispatches the build to the right archwelder classes. These classes build the packages and put the files into the right places if they succeed; or output the logs if they fail. After the build runs the queuer will notify the person requesting the build of success or failure. In the event of a failure on any one architecture the other builds are stopped and logs are reported. That's the short version of how it works - a list of todos are at the top of the two important files. I've got a few more tests to do and then I'll probably make the 'tobuild' file open to anyone with cvs commit access so you can run 'make build' to request your own builds. let me know what you think, even if I've wasted my time. -sv From skvidal at phy.duke.edu Thu May 5 08:32:23 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Thu, 05 May 2005 04:32:23 -0400 Subject: build system glue scripts and requirements In-Reply-To: <1111763049.25887.47.camel@bree.local.net> References: <1111128752.30200.39.camel@cutter> <1111376479.5620.34.camel@bree.local.net> <1111745377.20715.44.camel@cutter> <1111763049.25887.47.camel@bree.local.net> Message-ID: <1115281943.24970.4.camel@cutter> > That's the easiest way to do it. A file gets dropped in specific > location (BUILD-SUCCESS, BUILD-FAIL) that can just be watched for. We > don't have to have instantaneous knowledge of build success. Right now it's just if the package name/v-r dir shows up in the right 'stages' dir. See: http://extras64.linux.duke.edu/needsign/development for examples. > It just sounds like query is mostly needed. And for the nice, easy to > use build target from the makefile, open bug/add comments. Those should > both be present. I've not heard the latest status on the package tracking interface in bugzilla. Does it sound like it is near or far? Near == 2 weeks, Far == 2 months. > > 2. Users cannot directly kick off builds, they have to wait (waaaaaaah) > > So long as the polling is done frequently enough, this isn't a huge > deal. right now it's every 5 minutes. > > 4. Ordering of builds based on dependency > > FIFO. You need to request your builds in dep order. Sucks a little, > but is easy to implement :) > agreed. > > Also, another thing I've thought about (and it's run through my head a > few times, this is just the first time I've remembered to type it out). > One thing that would be nice would be able to have multiple roots for > the same release in mach easily (ie, without having to make copies of > the config files with tweaked paths :-). At the same time, that's > probably not something that matters in the short term as it could easily > be added as an optimization later. I thought about that some. We'd need something that tracks what root it is in and make sure to request that same one again. For mach we'd need to open it up to the possibility that the root name is not the root location. I'm not sure how hard/easy that will be w/o looking more closely. 
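Just to make that last bit concrete, the sort of thing I'm picturing is below -- none of these names or paths exist anywhere yet, it's purely a sketch of "root name != root location":

    root_locations = {
        'fedora-development-i386':   '/var/lib/mach/roots/development-i386-a',
        'fedora-development-i386-2': '/var/lib/mach/roots/development-i386-b',
    }
    job_roots = {}   # jobid -> root name, so a job keeps going back to the same root

    def claim_root(jobid, prefix):
        # hand out a free root whose name starts with the requested target
        busy = job_roots.values()
        for name, path in root_locations.items():
            if name.startswith(prefix) and name not in busy:
                job_roots[jobid] = name
                return path
        return None   # every root for this target is already in use

i.e. the name mach gets asked for stays stable, and the actual location gets looked up behind it.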
-sv From ville.skytta at iki.fi Thu May 5 12:59:09 2005 From: ville.skytta at iki.fi (Ville Skyttä) Date: Thu, 05 May 2005 15:59:09 +0300 Subject: Making mach quiet Message-ID: <1115297949.17931.9.camel@bobcat.mine.nu> Any objections against applying the attached patch? It makes mach quieter and gets rid of the spinner crap that ends up in current build logs. This junk appears to make browsers (at least Firefox) not open the build log but ask for an app to handle it even though it's sent as text/plain. For example: http://extras64.linux.duke.edu/failed/development/uudeview/0.5.20-6/x86_64/uudeview-0.5.20-6.failure.log -------------- next part -------------- A non-text attachment was scrubbed... Name: mach-quiet.patch Type: text/x-patch Size: 2017 bytes Desc: not available URL: From skvidal at phy.duke.edu Thu May 5 13:04:05 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Thu, 05 May 2005 09:04:05 -0400 Subject: Making mach quiet In-Reply-To: <1115297949.17931.9.camel@bobcat.mine.nu> References: <1115297949.17931.9.camel@bobcat.mine.nu> Message-ID: <1115298245.24970.14.camel@cutter> On Thu, 2005-05-05 at 15:59 +0300, Ville Skyttä wrote: > Any objections against applying the attached patch? It makes mach > quieter and gets rid of the spinner crap that ends up in current build > logs. > > This junk appears to make browsers (at least Firefox) not open the > build log but ask for an app to handle it even though it's sent as > text/plain. For example: > http://extras64.linux.duke.edu/failed/development/uudeview/0.5.20-6/x86_64/uudeview-0.5.20-6.failure.log I'd much rather kill all the spinner crap from mach entirely and still get other info back from mach (ie: w/o -q). I don't much see the point of the spinner stuff for what we're doing, and while I acknowledge it is cute, it's not useful. -sv From dcbw at redhat.com Thu May 5 13:37:46 2005 From: dcbw at redhat.com (Dan Williams) Date: Thu, 05 May 2005 09:37:46 -0400 Subject: buildsystem stuff In-Reply-To: <1115181658.15831.45.camel@cutter> References: <1115181658.15831.45.camel@cutter> Message-ID: <1115300266.16211.19.camel@dcbw.boston.redhat.com> On Wed, 2005-05-04 at 00:40 -0400, seth vidal wrote: > Hey folks, > I've been doing a lot of tests today and I have some good news to > report. > > 1. the xml-rpc communication is working pretty well. We can spawn builds > out to different hosts than the queuer and get feedback on what broke in > the build and/or why. > > 2. I've cleaned up the code and on my set of 2 systems (x86_64 and ppc) > I can build for all 3 architectures w/o having to manually run anything. A couple of comments, some of which I'd even be willing to do the work for :) 1) CVS/tobuild - Could this code be separated into an XML-RPC client? So you'd have the Queuer be an XMLRPC server, and clients would connect to it and feed it build jobs. For Aurora at least, we're probably not going to have CVS of this type, and "tobuild" seems somewhat cumbersome. I'd like to build a client that parses the output of Beehive emails and queues a build (so that whenever something gets built internally at Red Hat, it could get built into an Aurora build system). There would be some type of authentication so the Queuer might only accept connections from a certain host[s], eventually by SSL certificate or something once that code gets done. 2) Config info - Man that's spread around a lot... Anyway, what's the suggested configuration tool to use? 
I don't really like ConfigParser all that much, but I don't want to do something else that people don't like either. How about a simple "config.py" file that contains the configuration as dict entries or something, and then the source would use "config.get('localarch')"? Fairly simple, but suggestions accepted. 3) Could you explain a bit about the flow? The client/server separation isn't extremely clear currently, such that the server includes the builder code as well and calls some of it. Which scripts do you launch on which machines? With which arguments? Could we separate the server/queuer code from the XMLRPC/builder bits explicitly? > let me know what you think, even if I've wasted my time. Certainly not wasted... Cheers, Dan From katzj at redhat.com Thu May 5 16:04:43 2005 From: katzj at redhat.com (Jeremy Katz) Date: Thu, 05 May 2005 12:04:43 -0400 Subject: buildsystem stuff In-Reply-To: <1115300266.16211.19.camel@dcbw.boston.redhat.com> References: <1115181658.15831.45.camel@cutter> <1115300266.16211.19.camel@dcbw.boston.redhat.com> Message-ID: <1115309084.16366.70.camel@bree.local.net> On Thu, 2005-05-05 at 09:37 -0400, Dan Williams wrote: > A couple of comments, some of which I'd even be willing to do the work > for :) > > 1) CVS/tobuild - Could this code be separated into an XML-RPC client? > So you'd have the Queuer be an XMLRPC server, and clients would connect > to it and feed it build jobs. For Aurora at least, we're probably not > going to have CVS of this type, and "tobuild" seems somewhat cumbersome. This was definitely intended as the eventual direction. But we wanted to get something going quickly, and write a file was far simpler. :-) Especially since at first, there wasn't going to be the need for XML-RPC -- that came later as a need for kicking off the ppc builds. Any help down this path would definitely be appreciated, otherwise it's at least on my todo list. The key is to keep the client as simple as possible so that we don't have a big list of deps like you end up with for the beehive client. Basic python + included modules should be fairly sane, though. And using the client certs for the fedora account system stuff seems like the obvious auth mechanism then. Jeremy From dcbw at redhat.com Thu May 5 16:31:14 2005 From: dcbw at redhat.com (Dan Williams) Date: Thu, 05 May 2005 12:31:14 -0400 Subject: buildsystem stuff In-Reply-To: <1115309084.16366.70.camel@bree.local.net> References: <1115181658.15831.45.camel@cutter> <1115300266.16211.19.camel@dcbw.boston.redhat.com> <1115309084.16366.70.camel@bree.local.net> Message-ID: <1115310674.16211.35.camel@dcbw.boston.redhat.com> On Thu, 2005-05-05 at 12:04 -0400, Jeremy Katz wrote: > On Thu, 2005-05-05 at 09:37 -0400, Dan Williams wrote: > > A couple of comments, some of which I'd even be willing to do the work > > for :) > > > > 1) CVS/tobuild - Could this code be separated into an XML-RPC client? > > So you'd have the Queuer be an XMLRPC server, and clients would connect > > to it and feed it build jobs. For Aurora at least, we're probably not > > going to have CVS of this type, and "tobuild" seems somewhat cumbersome. > > This was definitely intended as the eventual direction. But we wanted > to get something going quickly, and write a file was far simpler. :-) > Especially since at first, there wasn't going to be the need for XML-RPC > -- that came later as a need for kicking off the ppc builds. > > Any help down this path would definitely be appreciated, otherwise it's > at least on my todo list. 
The key is to keep the client as simple as > possible so that we don't have a big list of deps like you end up with > for the beehive client. Basic python + included modules should be > fairly sane, though. And using the client certs for the fedora account > system stuff seems like the obvious auth mechanism then. Ok, I'll try to do this in the next couple days then. Is there CVS somewhere for the scripts? I don't want to do the work and then do manual merges really. Dan From skvidal at phy.duke.edu Thu May 5 17:53:36 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Thu, 05 May 2005 13:53:36 -0400 Subject: buildsystem stuff In-Reply-To: <1115310674.16211.35.camel@dcbw.boston.redhat.com> References: <1115181658.15831.45.camel@cutter> <1115300266.16211.19.camel@dcbw.boston.redhat.com> <1115309084.16366.70.camel@bree.local.net> <1115310674.16211.35.camel@dcbw.boston.redhat.com> Message-ID: <1115315616.3669.8.camel@cutter> On Thu, 2005-05-05 at 12:31 -0400, Dan Williams wrote: > On Thu, 2005-05-05 at 12:04 -0400, Jeremy Katz wrote: > > On Thu, 2005-05-05 at 09:37 -0400, Dan Williams wrote: > > > A couple of comments, some of which I'd even be willing to do the work > > > for :) > > > > > > 1) CVS/tobuild - Could this code be separated into an XML-RPC client? > > > So you'd have the Queuer be an XMLRPC server, and clients would connect > > > to it and feed it build jobs. For Aurora at least, we're probably not > > > going to have CVS of this type, and "tobuild" seems somewhat cumbersome. > > > > This was definitely intended as the eventual direction. But we wanted > > to get something going quickly, and write a file was far simpler. :-) > > Especially since at first, there wasn't going to be the need for XML-RPC > > -- that came later as a need for kicking off the ppc builds. > > > > Any help down this path would definitely be appreciated, otherwise it's > > at least on my todo list. The key is to keep the client as simple as > > possible so that we don't have a big list of deps like you end up with > > for the beehive client. Basic python + included modules should be > > fairly sane, though. And using the client certs for the fedora account > > system stuff seems like the obvious auth mechanism then. > > Ok, I'll try to do this in the next couple days then. Is there CVS > somewhere for the scripts? I don't want to do the work and then do > manual merges really. > code is in extras-buildsys-temp in /cvs/fedora look in the dir 'automation' in that module. If you want the code that does the 2-way auth via ssl certs you should look at m2crypto. CCing this to icon so he can tell you where to look for the code. -sv From skvidal at phy.duke.edu Fri May 6 20:58:27 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Fri, 06 May 2005 16:58:27 -0400 Subject: buildsys info updates Message-ID: <1115413107.6805.72.camel@cutter> Hi, Had a short conference call with Dan and Jeremy today. Gist was working on the bits I mentioned to this list a few days ago. Dan is going to work on the config bits and changing around the classes a bit so as to make it more clear what bits do what. We also 'decided' that it would be worthwhile for the queuer to run an xmlrpc server for 'make build' to communicate with It will store the list of things to be built in a db of some kind. The process would be something like: - user runs 'make build TARGET=development' this runs an xml-rpc client program which uses the ~/.fedora.cert to connect and auth to the queuer. 
It submits the build request and exits - the queuer takes the list of packages to build and farms them out to the buildhosts. Right now the archwelders are xml-rpc servers that the queuer connects to to tell them what to do (run, die, logs, status, etc). Dan mentioned some interest in making them polling daemons instead - discussion of this is welcome. - the archwelder finishes, the queuer gets notified of the results and: - updates the queuer db/list with the status - sends notices/info to the user who requested the build - moves the files around like it needs to for the repositories That's most of what we talked about. Ideas: - making it so the buildhosts and the queuer don't have to have immediate access to the same file space (currently via nfs) - (along with the above) making it so the archwelders/buildhosts can be anywhere in the world for building packages. Dan, Jeremy, feel free to fill in anything I missed or said wrong. -sv From ivazquez at ivazquez.net Tue May 10 19:12:12 2005 From: ivazquez at ivazquez.net (Ignacio Vazquez-Abrams) Date: Tue, 10 May 2005 15:12:12 -0400 Subject: mach and disttag Message-ID: <1115752332.21984.67.camel@ignacio.ignacio.lan> So the CVS stuff handles %dist properly, which is good. Unfortunately the build system doesn't, which is not-so-good. Rather than leave the job half-done, I've come up with a patch that should fix it, which I've attached. It can be removed when the disttag changes go into redhat-rpm- config, but until then there's this. -- Ignacio Vazquez-Abrams http://fedora.ivazquez.net/ gpg --keyserver hkp://subkeys.pgp.net --recv-key 38028b72 -------------- next part -------------- A non-text attachment was scrubbed... Name: mach-disttag.patch Type: text/x-patch Size: 2000 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: This is a digitally signed message part URL: From skvidal at phy.duke.edu Tue May 10 20:34:16 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Tue, 10 May 2005 16:34:16 -0400 Subject: mach and disttag In-Reply-To: <1115752332.21984.67.camel@ignacio.ignacio.lan> References: <1115752332.21984.67.camel@ignacio.ignacio.lan> Message-ID: <1115757256.10234.104.camel@cutter> On Tue, 2005-05-10 at 15:12 -0400, Ignacio Vazquez-Abrams wrote: > So the CVS stuff handles %dist properly, which is good. Unfortunately > the build system doesn't, which is not-so-good. Rather than leave the > job half-done, I've come up with a patch that should fix it, which I've > attached. It can be removed when the disttag changes go into redhat-rpm- > config, but until then there's this. > Why not have a buildsys-macros package we put in the buildgroups repository that puts this file in place in /etc? that would mean we could update it w/o updating the buildsystem itself. -sv From ivazquez at ivazquez.net Tue May 10 21:15:42 2005 From: ivazquez at ivazquez.net (Ignacio Vazquez-Abrams) Date: Tue, 10 May 2005 17:15:42 -0400 Subject: mach and disttag In-Reply-To: <1115757256.10234.104.camel@cutter> References: <1115752332.21984.67.camel@ignacio.ignacio.lan> <1115757256.10234.104.camel@cutter> Message-ID: <1115759742.21984.71.camel@ignacio.ignacio.lan> On Tue, 2005-05-10 at 16:34 -0400, seth vidal wrote: > On Tue, 2005-05-10 at 15:12 -0400, Ignacio Vazquez-Abrams wrote: > > So the CVS stuff handles %dist properly, which is good. Unfortunately > > the build system doesn't, which is not-so-good. 
Rather than leave the > > job half-done, I've come up with a patch that should fix it, which I've > > attached. It can be removed when the disttag changes go into redhat-rpm- > > config, but until then there's this. > > > > Why not have a buildsys-macros package we put in the buildgroups > repository that puts this file in place in /etc? > > that would mean we could update it w/o updating the buildsystem itself. Sounds like a plan. Gimme a mo... -- Ignacio Vazquez-Abrams http://fedora.ivazquez.net/ gpg --keyserver hkp://subkeys.pgp.net --recv-key 38028b72 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: This is a digitally signed message part URL: From ivazquez at ivazquez.net Tue May 10 21:38:06 2005 From: ivazquez at ivazquez.net (Ignacio Vazquez-Abrams) Date: Tue, 10 May 2005 17:38:06 -0400 Subject: mach and disttag In-Reply-To: <1115759742.21984.71.camel@ignacio.ignacio.lan> References: <1115752332.21984.67.camel@ignacio.ignacio.lan> <1115757256.10234.104.camel@cutter> <1115759742.21984.71.camel@ignacio.ignacio.lan> Message-ID: <1115761086.21984.73.camel@ignacio.ignacio.lan> On Tue, 2005-05-10 at 17:15 -0400, Ignacio Vazquez-Abrams wrote: > On Tue, 2005-05-10 at 16:34 -0400, seth vidal wrote: > > On Tue, 2005-05-10 at 15:12 -0400, Ignacio Vazquez-Abrams wrote: > > > So the CVS stuff handles %dist properly, which is good. Unfortunately > > > the build system doesn't, which is not-so-good. Rather than leave the > > > job half-done, I've come up with a patch that should fix it, which I've > > > attached. It can be removed when the disttag changes go into redhat-rpm- > > > config, but until then there's this. > > > > > > > Why not have a buildsys-macros package we put in the buildgroups > > repository that puts this file in place in /etc? > > > > that would mean we could update it w/o updating the buildsystem itself. > > Sounds like a plan. Gimme a mo... http://fedora.ivazquez.net/files/buildsys-macros-1.0-1.src.rpm -- Ignacio Vazquez-Abrams http://fedora.ivazquez.net/ gpg --keyserver hkp://subkeys.pgp.net --recv-key 38028b72 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: This is a digitally signed message part URL: From skvidal at phy.duke.edu Wed May 11 14:30:33 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Wed, 11 May 2005 10:30:33 -0400 Subject: mach and disttag In-Reply-To: <1115761086.21984.73.camel@ignacio.ignacio.lan> References: <1115752332.21984.67.camel@ignacio.ignacio.lan> <1115757256.10234.104.camel@cutter> <1115759742.21984.71.camel@ignacio.ignacio.lan> <1115761086.21984.73.camel@ignacio.ignacio.lan> Message-ID: <1115821833.10234.183.camel@cutter> > > Sounds like a plan. Gimme a mo... > > http://fedora.ivazquez.net/files/buildsys-macros-1.0-1.src.rpm > I ended up using the buildsys-macros that spot sent. However, they didn't seem to do anything. I defined the system version for the package, built the package and I've checked to make sure they're in the chroots when the rpms get built but it doesn't seem to having the desired affect. Something I should be looking for to debug this? 
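(For anyone wanting to reproduce this, something along these lines should show what the chroot actually sees -- the root path and the macros file name below are just examples, not necessarily where mach puts things:)

    import os

    root = '/var/lib/mach/roots/fedora-development-i386'   # example path only

    # is the macros file really inside the chroot?
    print os.path.exists(root + '/etc/rpm/macros.disttag')   # file name is a guess

    # and what does rpm *inside* the chroot expand the macros to?
    print os.popen("chroot %s rpm --eval '%%{?fedora} %%{?dist}'" % root).read()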
thanks, -sv From ivazquez at ivazquez.net Wed May 11 15:38:48 2005 From: ivazquez at ivazquez.net (Ignacio Vazquez-Abrams) Date: Wed, 11 May 2005 11:38:48 -0400 Subject: mach and disttag In-Reply-To: <1115821833.10234.183.camel@cutter> References: <1115752332.21984.67.camel@ignacio.ignacio.lan> <1115757256.10234.104.camel@cutter> <1115759742.21984.71.camel@ignacio.ignacio.lan> <1115761086.21984.73.camel@ignacio.ignacio.lan> <1115821833.10234.183.camel@cutter> Message-ID: <1115825928.21984.88.camel@ignacio.ignacio.lan> On Wed, 2005-05-11 at 10:30 -0400, seth vidal wrote: > I ended up using the buildsys-macros that spot sent. However, they > didn't seem to do anything. I defined the system version for the > package, built the package and I've checked to make sure they're in the > chroots when the rpms get built but it doesn't seem to having the > desired affect. Something I should be looking for to debug this? I just updated to the latest mach and did some testing, and it seems that the file in /etc/mach/dist.d is pointing to buildgroups/$arch when it should be pointing to buildgroups/$version/$arch. -- Ignacio Vazquez-Abrams http://fedora.ivazquez.net/ gpg --keyserver hkp://subkeys.pgp.net --recv-key 38028b72 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: This is a digitally signed message part URL: From skvidal at phy.duke.edu Wed May 11 17:51:04 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Wed, 11 May 2005 13:51:04 -0400 Subject: mach and disttag In-Reply-To: <1115825928.21984.88.camel@ignacio.ignacio.lan> References: <1115752332.21984.67.camel@ignacio.ignacio.lan> <1115757256.10234.104.camel@cutter> <1115759742.21984.71.camel@ignacio.ignacio.lan> <1115761086.21984.73.camel@ignacio.ignacio.lan> <1115821833.10234.183.camel@cutter> <1115825928.21984.88.camel@ignacio.ignacio.lan> Message-ID: <1115833864.10234.214.camel@cutter> On Wed, 2005-05-11 at 11:38 -0400, Ignacio Vazquez-Abrams wrote: > On Wed, 2005-05-11 at 10:30 -0400, seth vidal wrote: > > I ended up using the buildsys-macros that spot sent. However, they > > didn't seem to do anything. I defined the system version for the > > package, built the package and I've checked to make sure they're in the > > chroots when the rpms get built but it doesn't seem to having the > > desired affect. Something I should be looking for to debug this? > > I just updated to the latest mach and did some testing, and it seems > that the file in /etc/mach/dist.d is pointing to buildgroups/$arch when > it should be pointing to buildgroups/$version/$arch. > no, that's not the problem. I had already fixed that on the build system's mach configuration. The buildsys-macro package IS getting installed - it's just not having any affect. 
-sv From ivazquez at ivazquez.net Wed May 11 18:12:46 2005 From: ivazquez at ivazquez.net (Ignacio Vazquez-Abrams) Date: Wed, 11 May 2005 14:12:46 -0400 Subject: mach and disttag In-Reply-To: <1115833864.10234.214.camel@cutter> References: <1115752332.21984.67.camel@ignacio.ignacio.lan> <1115757256.10234.104.camel@cutter> <1115759742.21984.71.camel@ignacio.ignacio.lan> <1115761086.21984.73.camel@ignacio.ignacio.lan> <1115821833.10234.183.camel@cutter> <1115825928.21984.88.camel@ignacio.ignacio.lan> <1115833864.10234.214.camel@cutter> Message-ID: <1115835166.21984.94.camel@ignacio.ignacio.lan> On Wed, 2005-05-11 at 13:51 -0400, seth vidal wrote: > On Wed, 2005-05-11 at 11:38 -0400, Ignacio Vazquez-Abrams wrote: > > On Wed, 2005-05-11 at 10:30 -0400, seth vidal wrote: > > > I ended up using the buildsys-macros that spot sent. However, they > > > didn't seem to do anything. I defined the system version for the > > > package, built the package and I've checked to make sure they're in the > > > chroots when the rpms get built but it doesn't seem to having the > > > desired affect. Something I should be looking for to debug this? > > > > I just updated to the latest mach and did some testing, and it seems > > that the file in /etc/mach/dist.d is pointing to buildgroups/$arch when > > it should be pointing to buildgroups/$version/$arch. > > > > no, that's not the problem. I had already fixed that on the build > system's mach configuration. The buildsys-macro package IS getting > installed - it's just not having any affect. Found the problem. The macros should be defined as: %fedora $VERSION %dist .fc$VERSION -- Ignacio Vazquez-Abrams http://fedora.ivazquez.net/ gpg --keyserver hkp://subkeys.pgp.net --recv-key 38028b72 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: This is a digitally signed message part URL: From skvidal at phy.duke.edu Thu May 12 05:00:41 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Thu, 12 May 2005 01:00:41 -0400 Subject: spm and buildreqs Message-ID: <1115874041.4604.10.camel@cutter> Going through this a few more times as I work on some bits inside the buildsystem. We're given an srpm - we don't know where it was made, on what arch, nothing - so we cannot trust the buildreqs it provides. If we're inside the chroot and on the arch we want to build on then running: rpm -Uvh /path/to/our/srpm rpmbuild -bs --nodeps /path/to/the/generated/spec should result in a srpm for us that will have valid build reqs. So that if we grab the requires from that srpm we'll have a pretty good idea of what we'll need to install to build the package. is that correct/accurate/etc? -sv From laroche at redhat.com Thu May 12 05:39:55 2005 From: laroche at redhat.com (Florian La Roche) Date: Thu, 12 May 2005 07:39:55 +0200 Subject: spm and buildreqs In-Reply-To: <1115874041.4604.10.camel@cutter> References: <1115874041.4604.10.camel@cutter> Message-ID: <20050512053955.GA3070@dudweiler.stuttgart.redhat.com> On Thu, May 12, 2005 at 01:00:41AM -0400, seth vidal wrote: > Going through this a few more times as I work on some bits inside the > buildsystem. > > We're given an srpm - we don't know where it was made, on what arch, > nothing - so we cannot trust the buildreqs it provides. 
> > If we're inside the chroot and on the arch we want to build on then > running: > rpm -Uvh /path/to/our/srpm > rpmbuild -bs --nodeps /path/to/the/generated/spec > > should result in a srpm for us that will have valid build reqs. > So that if we grab the requires from that srpm we'll have a pretty good > idea of what we'll need to install to build the package. > > is that correct/accurate/etc? Yes, that should work. Nice idea... greetings, Florian La Roche From enrico.scholz at informatik.tu-chemnitz.de Thu May 12 05:40:16 2005 From: enrico.scholz at informatik.tu-chemnitz.de (Enrico Scholz) Date: Thu, 12 May 2005 07:40:16 +0200 Subject: spm and buildreqs In-Reply-To: <1115874041.4604.10.camel@cutter> (seth vidal's message of "Thu, 12 May 2005 01:00:41 -0400") References: <1115874041.4604.10.camel@cutter> Message-ID: <87fywtxarj.fsf@kosh.bigo.ensc.de> skvidal at phy.duke.edu (seth vidal) writes: > We're given an srpm - we don't know where it was made, on what arch, > nothing - so we cannot trust the buildreqs it provides. > > If we're inside the chroot and on the arch we want to build on then > running: > rpm -Uvh /path/to/our/srpm > rpmbuild -bs --nodeps /path/to/the/generated/spec > > should result in a srpm for us that will have valid build reqs. To be correct, the used algorithm should be: deps = '' do { old-deps = deps rpm -Uvh --nodeps ....src.rpm rpmbuild -bs --nodeps --force ....spec * calculate-deps install deps } while deps != old-deps Else, wrong results will be produced for cases like | BuildRequires: foo | %macrofoo whereas the macro is defined in /etc/rpm/macros.foo (shipped by package 'foo') and expands to | BuildRequires: bar Enrico -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 480 bytes Desc: not available URL: From skvidal at phy.duke.edu Sun May 15 21:41:36 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Sun, 15 May 2005 17:41:36 -0400 Subject: Mock/Mach In-Reply-To: <42850519.3060808@six-by-nine.com.au> References: <4283D09A.4090106@six-by-nine.com.au> <42843405.2040101@redhat.com> <4284365B.8030706@six-by-nine.com.au> <42850519.3060808@six-by-nine.com.au> Message-ID: <1116193297.12777.37.camel@cutter> In order to make the world a bit more simple I've been working on a fork of mach that simplifies the feature set dramatically. Right now it does the things we need and only those things: it does: 1. makes a chroot 2. installs and remakes the srpm from the srpm to get the buildreqs right. 3. installs the build reqs 4. rebuilds the srpm into binary rpms 5. returns logs and what not intelligently. 6. does all this as quickly as possible. It doesn't do any of the spec file parsing or build order sorting that mach does. It only deals with srpms to build from and it only deals with them one at a time. Right now I'm calling it 'mock' b/c it's a fake or lesser version of mach. You can see the packages and what not I've got so far here: http://linux.duke.edu/~skvidal/mock/ Steps to run it: 1. make sure you're a member of the 'mock' group. 2. mock -r name-of-chroot(look in /etc/mock for names) /path/to/srpm thats it - it should tell you where to look for the resulting packages or the logs. I'll be checking it into fedora cvs shortly and then working on integrating it with the new buildsystem code that dcbw has been putting together. If all goes as I hope then we'll no longer need the common nfs share for writing out resulting packages/logs. 
We can just ship the packages from the build host back over the wire to the queuing host via the xml-rpc connection already in place. If that all works then we'll be able to have buildhosts virtually anywhere. (w/i reason of course) Everything seems to 'work' in my tests - I'm sure there are bugs but I'm equally sure that y'all will tell me all about them. -sv From skvidal at phy.duke.edu Mon May 16 03:08:50 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Sun, 15 May 2005 23:08:50 -0400 Subject: mock and other plans Message-ID: <1116212930.12777.63.camel@cutter> I checked mock into fedora cvs /cvs/extras. Here's what I think we should work on doing: 1. move the automation2 directory from extras-buildsys-stemp and into it's own module. 2. remove the extras-buildsys-temp dirs from cvs 3. setup makefiles and specfiles for the automation2 code dcbw has been doing 4. get mock and the automation2 code together and work out the rest of the bits 5. deploy it for building and figure out where the bugs lay. :) What Dan and I have been discussing is to try to make it easier for the main queuing agent and the build hosts to be on very separate networks and still work. There's still ground to cover but it's getting closer. I think the items left to look at are: 1. thread out the archwelder servers so they don't stall 2. xmlrpc ssl auth using .fedora.cert files 3. xmlrpc client from the make build side 4. download of resulting packages/logs from the archwelder servers to the queuing agent. 5. monitoring and status information from the queuer for updates/notices/etc. anyone else want to pitch in? -sv -sv From jkeating at j2solutions.net Mon May 16 17:39:15 2005 From: jkeating at j2solutions.net (Jesse Keating) Date: Mon, 16 May 2005 10:39:15 -0700 Subject: Mock/Mach In-Reply-To: <1116193297.12777.37.camel@cutter> References: <4283D09A.4090106@six-by-nine.com.au> <42843405.2040101@redhat.com> <4284365B.8030706@six-by-nine.com.au> <42850519.3060808@six-by-nine.com.au> <1116193297.12777.37.camel@cutter> Message-ID: <1116265156.5928.21.camel@jkeating2.hq.pogolinux.com> On Sun, 2005-05-15 at 17:41 -0400, seth vidal wrote: > > I'll be checking it into fedora cvs shortly and then working on > integrating it with the new buildsystem code that dcbw has been > putting > together. > > If all goes as I hope then we'll no longer need the common nfs share > for > writing out resulting packages/logs. We can just ship the packages > from > the build host back over the wire to the queuing host via the xml-rpc > connection already in place. If that all works then we'll be able to > have buildhosts virtually anywhere. (w/i reason of course) > > Everything seems to 'work' in my tests - I'm sure there are bugs but > I'm > equally sure that y'all will tell me all about them. Oooh! This looks better for Legacy needs than full mach. I hope to start testing this soon (hopefully on a certain x86_64 box...) -- Jesse Keating RHCE (geek.j2solutions.net) Fedora Legacy Team (www.fedoralegacy.org) GPG Public Key (geek.j2solutions.net/jkeating.j2solutions.pub) Was I helpful? 
Let others know: http://svcs.affero.net/rm.php?r=jkeating From thomas at apestaart.org Thu May 19 16:10:44 2005 From: thomas at apestaart.org (Thomas Vander Stichele) Date: Thu, 19 May 2005 18:10:44 +0200 Subject: spm and buildreqs In-Reply-To: <1115874041.4604.10.camel@cutter> References: <1115874041.4604.10.camel@cutter> Message-ID: <1116519044.16226.1.camel@otto.amantes> On Thu, 2005-05-12 at 01:00 -0400, seth vidal wrote: > Going through this a few more times as I work on some bits inside the > buildsystem. > > We're given an srpm - we don't know where it was made, on what arch, > nothing - so we cannot trust the buildreqs it provides. > > If we're inside the chroot and on the arch we want to build on then > running: > rpm -Uvh /path/to/our/srpm > rpmbuild -bs --nodeps /path/to/the/generated/spec > > should result in a srpm for us that will have valid build reqs. > So that if we grab the requires from that srpm we'll have a pretty good > idea of what we'll need to install to build the package. > > is that correct/accurate/etc? It will fail for specs that express buildrequires using a macro that gets its result from a program that ought to be installed. Think "python --version" and then buildrequiring the correct version by package name. Thomas Dave/Dina : future TV today ! - http://www.davedina.org/ <-*- thomas (dot) apestaart (dot) org -*-> Don't change your name keep it the same for fear I may lose you again I know you won't it's just that I am unorganized and I want to find you when Something good happens <-*- thomas (at) apestaart (dot) org -*-> URGent, best radio on the net - 24/7 ! - http://urgent.fm/ From skvidal at phy.duke.edu Thu May 19 17:19:04 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Thu, 19 May 2005 13:19:04 -0400 Subject: spm and buildreqs In-Reply-To: <1116519044.16226.1.camel@otto.amantes> References: <1115874041.4604.10.camel@cutter> <1116519044.16226.1.camel@otto.amantes> Message-ID: <1116523144.13979.23.camel@cutter> On Thu, 2005-05-19 at 18:10 +0200, Thomas Vander Stichele wrote: > On Thu, 2005-05-12 at 01:00 -0400, seth vidal wrote: > > Going through this a few more times as I work on some bits inside the > > buildsystem. > > > > We're given an srpm - we don't know where it was made, on what arch, > > nothing - so we cannot trust the buildreqs it provides. > > > > If we're inside the chroot and on the arch we want to build on then > > running: > > rpm -Uvh /path/to/our/srpm > > rpmbuild -bs --nodeps /path/to/the/generated/spec > > > > should result in a srpm for us that will have valid build reqs. > > So that if we grab the requires from that srpm we'll have a pretty good > > idea of what we'll need to install to build the package. > > > > is that correct/accurate/etc? > > It will fail for specs that express buildrequires using a macro that > gets its result from a program that ought to be installed. > > Think "python --version" and then buildrequiring the correct version by > package name. > Right and I think we agreed that doing that could be banned from packages. 
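Mechanically, "banned" could be something as dumb as a check like the one below in the queuer -- only a sketch, and it obviously wouldn't catch a %(...) hidden behind a macro defined somewhere else:

    def dynamic_buildreqs(specfile):
        # flag BuildRequires lines computed by a %(...) shell expansion
        bad = []
        for line in open(specfile):
            stripped = line.strip()
            if stripped.lower().startswith('buildrequires') and '%(' in stripped:
                bad.append(stripped)
        return bad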
-sv From enrico.scholz at informatik.tu-chemnitz.de Thu May 19 17:22:56 2005 From: enrico.scholz at informatik.tu-chemnitz.de (Enrico Scholz) Date: Thu, 19 May 2005 19:22:56 +0200 Subject: spm and buildreqs In-Reply-To: <1116523144.13979.23.camel@cutter> (seth vidal's message of "Thu, 19 May 2005 13:19:04 -0400") References: <1115874041.4604.10.camel@cutter> <1116519044.16226.1.camel@otto.amantes> <1116523144.13979.23.camel@cutter> Message-ID: <87sm0ji0zz.fsf@kosh.bigo.ensc.de> skvidal at phy.duke.edu (seth vidal) writes: >> > If we're inside the chroot and on the arch we want to build on then >> > running: >> > rpm -Uvh /path/to/our/srpm >> > rpmbuild -bs --nodeps /path/to/the/generated/spec >> > >> > should result in a srpm for us that will have valid build reqs. >> .... >> It will fail for specs that express buildrequires using a macro that >> gets its result from a program that ought to be installed. >> >> Think "python --version" and then buildrequiring the correct version by >> package name. >> > > Right and I think we agreed that doing that could be banned from > packages. ... or be solved by the buildsystem... Enrico From dcbw at redhat.com Thu May 19 17:55:28 2005 From: dcbw at redhat.com (Dan Williams) Date: Thu, 19 May 2005 13:55:28 -0400 Subject: spm and buildreqs In-Reply-To: <1116519044.16226.1.camel@otto.amantes> References: <1115874041.4604.10.camel@cutter> <1116519044.16226.1.camel@otto.amantes> Message-ID: <1116525328.20671.23.camel@dcbw.boston.redhat.com> On Thu, 2005-05-19 at 18:10 +0200, Thomas Vander Stichele wrote: > On Thu, 2005-05-12 at 01:00 -0400, seth vidal wrote: > > Going through this a few more times as I work on some bits inside the > > buildsystem. > > > > We're given an srpm - we don't know where it was made, on what arch, > > nothing - so we cannot trust the buildreqs it provides. > > > > If we're inside the chroot and on the arch we want to build on then > > running: > > rpm -Uvh /path/to/our/srpm > > rpmbuild -bs --nodeps /path/to/the/generated/spec > > > > should result in a srpm for us that will have valid build reqs. > > So that if we grab the requires from that srpm we'll have a pretty good > > idea of what we'll need to install to build the package. > > > > is that correct/accurate/etc? > > It will fail for specs that express buildrequires using a macro that > gets its result from a program that ought to be installed. > > Think "python --version" and then buildrequiring the correct version by > package name. What's a case where this would be used? Ie, you want to tie the package you're building to a _specific_ version of python? Wouldn't that be better done by actually just hardcoding the python version? If this is the usecase, that sounds lazy to me. But if not, what are some valid ones? Dan From skvidal at phy.duke.edu Thu May 19 18:09:10 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Thu, 19 May 2005 14:09:10 -0400 Subject: spm and buildreqs In-Reply-To: <1116523144.13979.23.camel@cutter> References: <1115874041.4604.10.camel@cutter> <1116519044.16226.1.camel@otto.amantes> <1116523144.13979.23.camel@cutter> Message-ID: <1116526150.13979.25.camel@cutter> > > Think "python --version" and then buildrequiring the correct version by > > package name. > > > > Right and I think we agreed that doing that could be banned from > packages. > If we take Enrico's suggestion of doing the srpm/builddep items in a loop until the buildeps are the same then we would get them all. 
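Roughly like this -- run(), newest_srpm() and install_into_chroot() are just stand-ins for whatever the chroot layer (mach/mock) really provides, the /builddir paths are illustrative, and it glosses over version comparisons:

    def resolve_buildreqs(srpm_path):
        # keep regenerating the srpm inside the chroot and installing its
        # buildreqs until nothing new shows up (Enrico's fixed point)
        installed = set()
        while True:
            run("rpm -Uvh --nodeps %s" % srpm_path)
            run("rpmbuild -bs --nodeps --force /builddir/build/SPECS/*.spec")
            srpm_path = newest_srpm("/builddir/build/SRPMS")
            out = run("rpm -qp --requires %s" % srpm_path)
            reqs = set([r.strip() for r in out.split("\n") if r.strip()])
            new = reqs - installed
            if not new:               # same buildreqs as the last pass -> done
                return srpm_path
            install_into_chroot(new)  # e.g. yum install the missing bits into the root
            installed |= reqs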
-sv From thomas at apestaart.org Sat May 21 14:24:20 2005 From: thomas at apestaart.org (Thomas Vander Stichele) Date: Sat, 21 May 2005 16:24:20 +0200 Subject: spm and buildreqs In-Reply-To: <1116525328.20671.23.camel@dcbw.boston.redhat.com> References: <1115874041.4604.10.camel@cutter> <1116519044.16226.1.camel@otto.amantes> <1116525328.20671.23.camel@dcbw.boston.redhat.com> Message-ID: <1116685460.1458.16.camel@otto.amantes> Hi, > What's a case where this would be used? Ie, you want to tie the package > you're building to a _specific_ version of python? No, you want your spec to build on dists *without* knowing what the version is, ie without tieing it to a specific version. IIRC that's what pyvault does in a lot of spec files. It's also done in other spec files that were thrown at mach for f.us and livna.org - xmms plugins are an example IIRC, but I can't be sure atm. Thomas Dave/Dina : future TV today ! - http://www.davedina.org/ <-*- thomas (dot) apestaart (dot) org -*-> I know someday you'll have a beautiful life I know you'll be a star in someone else's sky but why oh why oh why why can't it be mine ? <-*- thomas (at) apestaart (dot) org -*-> URGent, best radio on the net - 24/7 ! - http://urgent.fm/ From mwaite at redhat.com Sun May 22 17:40:58 2005 From: mwaite at redhat.com (Michael Waite) Date: Sun, 22 May 2005 13:40:58 -0400 Subject: Netgear WG311 card Message-ID: <1116783658.4109.2.camel@localhost.localdomain> anyone got an rpm for this card? I am trying to get FC3 (rawhide) online for a summer intern that is here this summer. Or, is there a card that is known to work with rawhide? Thanks. ------Mike -- Michael Waite 978-943-9042 mwaite at redhat.com 10 Technology Park Drive Westford, Ma 01876 Learn, Network and Experience Open Source. Red Hat Summit, New Orleans 2005 http://www.redhat.com/promo/summit/ From wtogami at redhat.com Mon May 23 04:22:09 2005 From: wtogami at redhat.com (Warren Togami) Date: Sun, 22 May 2005 18:22:09 -1000 Subject: Netgear WG311 card In-Reply-To: <1116783658.4109.2.camel@localhost.localdomain> References: <1116783658.4109.2.camel@localhost.localdomain> Message-ID: <42915A71.4060201@redhat.com> Michael Waite wrote: > anyone got an rpm for this card? > I am trying to get FC3 (rawhide) online for a summer intern that is here > this summer. > > Or, is there a card that is known to work with rawhide? > > Thanks. > > > ------Mike > > Why are you asking about a network card driver on the build system development list? Warren Togami wtogami at redhat.com From wtogami at redhat.com Mon May 23 04:42:45 2005 From: wtogami at redhat.com (Warren Togami) Date: Sun, 22 May 2005 18:42:45 -1000 Subject: spm and buildreqs In-Reply-To: <1116523144.13979.23.camel@cutter> References: <1115874041.4604.10.camel@cutter> <1116519044.16226.1.camel@otto.amantes> <1116523144.13979.23.camel@cutter> Message-ID: <42915F45.9060104@redhat.com> seth vidal wrote: >>Think "python --version" and then buildrequiring the correct version by >>package name. >> > > > Right and I think we agreed that doing that could be banned from > packages. > There is no agreement here. Core uses hacks in several packages that do just this. It has never been a problem, and it makes it easier to maintain those spec files in the long term because you avoid changes and the behavior is well understood. https://www.redhat.com/archives/fedora-buildsys-list/2005-May/msg00021.html It is no problem if you use this algorithm. 
Warren Togami wtogami at redhat.com From pjones at redhat.com Mon May 23 17:13:46 2005 From: pjones at redhat.com (Peter Jones) Date: Mon, 23 May 2005 13:13:46 -0400 Subject: spm and buildreqs In-Reply-To: <42915F45.9060104@redhat.com> References: <1115874041.4604.10.camel@cutter> <1116519044.16226.1.camel@otto.amantes> <1116523144.13979.23.camel@cutter> <42915F45.9060104@redhat.com> Message-ID: <1116868427.9419.5.camel@localhost.localdomain> On Sun, 2005-05-22 at 18:42 -1000, Warren Togami wrote: > seth vidal wrote: > >>Think "python --version" and then buildrequiring the correct version by > >>package name. > >> > > > > > > Right and I think we agreed that doing that could be banned from > > packages. > > > > There is no agreement here. > > Core uses hacks in several packages that do just this. It has never > been a problem, and it makes it easier to maintain those spec files in > the long term because you avoid changes and the behavior is well understood. You don't happen to have a list of which packages those are, do you? -- Peter From pjones at redhat.com Mon May 23 17:17:57 2005 From: pjones at redhat.com (Peter Jones) Date: Mon, 23 May 2005 13:17:57 -0400 Subject: spm and buildreqs In-Reply-To: <1116685460.1458.16.camel@otto.amantes> References: <1115874041.4604.10.camel@cutter> <1116519044.16226.1.camel@otto.amantes> <1116525328.20671.23.camel@dcbw.boston.redhat.com> <1116685460.1458.16.camel@otto.amantes> Message-ID: <1116868678.9419.10.camel@localhost.localdomain> On Sat, 2005-05-21 at 16:24 +0200, Thomas Vander Stichele wrote: > Hi, > > > > What's a case where this would be used? Ie, you want to tie the package > > you're building to a _specific_ version of python? > > No, you want your spec to build on dists *without* knowing what the > version is, ie without tieing it to a specific version. What does this solve? The build requirement (as written to the src rpm) then becomes "if by some coincidence this srpm built with this other package installed, dissalow rebuilds except against the current version of that package". And even that doesn't really guarantee anything, because it was probably built with "-bs", so it didn't even see if the build *worked* to get that far. Every time this gets brought up, people use this example to justify it. I agree that there are packages that _have_ this stuff in them (obviously), but I still haven't seen a good explanation of how the example isn't completely contrived. -- Peter From ivazquez at ivazquez.net Tue May 24 18:39:48 2005 From: ivazquez at ivazquez.net (Ignacio Vazquez-Abrams) Date: Tue, 24 May 2005 14:39:48 -0400 Subject: spm and buildreqs In-Reply-To: <1115874041.4604.10.camel@cutter> References: <1115874041.4604.10.camel@cutter> Message-ID: <1116959988.14947.7.camel@ignacio.ignacio.lan> On Thu, 2005-05-12 at 01:00 -0400, seth vidal wrote: > Going through this a few more times as I work on some bits inside the > buildsystem. > > We're given an srpm - we don't know where it was made, on what arch, > nothing - so we cannot trust the buildreqs it provides. > > If we're inside the chroot and on the arch we want to build on then > running: > rpm -Uvh /path/to/our/srpm > rpmbuild -bs --nodeps /path/to/the/generated/spec > > should result in a srpm for us that will have valid build reqs. > So that if we grab the requires from that srpm we'll have a pretty good > idea of what we'll need to install to build the package. > > is that correct/accurate/etc? 
You can actually glean that info from the spec without having to build an SRPM: rpmbuild foo.spec 2>&1 | awk 'BEGIN { FS = "[ \t]" } $1 !~ /^error:$/ {print $2}' -- Ignacio Vazquez-Abrams http://fedora.ivazquez.net/ gpg --keyserver hkp://subkeys.pgp.net --recv-key 38028b72 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: This is a digitally signed message part URL: From skvidal at phy.duke.edu Sat May 28 17:29:37 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Sat, 28 May 2005 13:29:37 -0400 Subject: mock 0.2 Message-ID: <1117301377.19135.55.camel@cutter> Hey all, I've put up a new mock release - mock 0.2. A number of bugs have been fixed and the config format has changed. Most importantly, it should now work nicely for a user with any uid or gid, as long as they're in the mock group. No longer does it require uid 500, gid 500. http://linux.duke.edu/~skvidal/mock/ A binary rpm is built for rawhide i386. let me know what breaks for you. -sv
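P.S. the rough shape of how mock could eventually hand results back to the queuer over the existing xml-rpc channel instead of the shared nfs space -- upload_result() and the result directory below are invented for the sketch, none of this is the real archwelder API:

    import os, xmlrpclib

    def build_and_return(srpm, cfg, queuer_url):
        rc = os.system("mock -r %s %s" % (cfg, srpm))
        resultdir = "/var/lib/mock/%s/result" % cfg      # guess at where mock leaves output
        queuer = xmlrpclib.ServerProxy(queuer_url)
        for name in os.listdir(resultdir):
            data = open(os.path.join(resultdir, name), "rb").read()
            # shipping logs as Binary() also sidesteps the goofball-characters problem
            queuer.upload_result(name, xmlrpclib.Binary(data))
        return rc == 0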