From katzj at redhat.com Tue Mar 1 01:37:16 2005 From: katzj at redhat.com (Jeremy Katz) Date: Mon, 28 Feb 2005 20:37:16 -0500 Subject: yum+mach2 for fedora-development tree pseudo-release In-Reply-To: <1109631793.21503.62.camel@cutter> References: <1109631793.21503.62.camel@cutter> Message-ID: <1109641036.2899.43.camel@bree.local.net> On Mon, 2005-02-28 at 18:03 -0500, seth vidal wrote: > this mostly works like it should. I tested it on an fc3 system and a >rawhide system with some success. Oooh, pretty. And a step in the (right) direction of getting to a working buildsystem. >mach setup >mach rebuild /my/favorite/srpm > >and read the output. > >see if it works for you. Seems to work okay in some quick testing on a rawhide i386 box. anaconda built fine. On the other hand, the mach package is missing a buildrequires on libselinux-devel ;-) If I'm not snowed in tomorrow, I'll try to do an install on an x86_64 box and see how things go there. Jeremy From skvidal at phy.duke.edu Tue Mar 1 07:04:12 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Tue, 01 Mar 2005 02:04:12 -0500 Subject: yum+mach2 for fedora-development tree pseudo-release In-Reply-To: <1109641036.2899.43.camel@bree.local.net> References: <1109631793.21503.62.camel@cutter> <1109641036.2899.43.camel@bree.local.net> Message-ID: <1109660652.23615.26.camel@cutter> >Seems to work okay in some quick testing on a rawhide i386 box. >anaconda built fine. On the other hand, the mach package is missing a >buildrequires on libselinux-devel ;-) > >If I'm not snowed in tomorrow, I'll try to do an install on an x86_64 >box and see how things go there. I installed on an x86_64 box. FC3 but made an installroot of rawhide. The installroot works fine - but rpm commands flake out when you run them from the outside of the chroot. Oddly yum --installroot commands work just fine from the outside. anyway - if we run on the same rpm version we're building for it shouldn't be a problem. 
I'll install the other opteron tomorrow and see if I can get a mach x86 chroot running on x86_64 w/setarch reasonably sanely. If we can get there then maybe a mass rebuild can be:

for pkg in *; do
    if [ -d $pkg/devel ]; then
        (cd $pkg/devel && make srpm)
    fi
done
find ./ -name \*.src.rpm | xargs mach rebuild

with a lot of disk space available :)

-sv From skvidal at phy.duke.edu Tue Mar 1 07:45:05 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Tue, 01 Mar 2005 02:45:05 -0500 Subject: yum+mach2 for fedora-development tree pseudo-release In-Reply-To: <1109641036.2899.43.camel@bree.local.net> References: <1109631793.21503.62.camel@cutter> <1109641036.2899.43.camel@bree.local.net> Message-ID: <1109663105.23615.36.camel@cutter> On Mon, 2005-02-28 at 20:37 -0500, Jeremy Katz wrote: >On Mon, 2005-02-28 at 18:03 -0500, seth vidal wrote: >> this mostly works like it should. I tested it on an fc3 system and a >>rawhide system with some success. > >Oooh, pretty. And a step in the (right) direction of getting to a >working buildsystem. > A couple of minor issues: yum will install file deps by: yum install /path/to/file/you/need but it won't do virtual provides in that syntax so 'yum install foo' when foo is not a package name doesn't do anything. I compromised on this syntax by adding resolvedep. So you can do: 'yum resolvedep foo' which spits back 1 package that provides foo. Then you can pass that output to yum install and you're on your way again. That's going to require a bit more hacking I think to make it all work. There are a fair number of virtual provides in packages that make some of this not work. I could easily add virtual provides support into 'yum install' in cvs-HEAD and this problem goes away - I'm just wondering whether it would make more sense to hack yum to make it comply with mach2 or to hack mach2 to make it work with yum.
-sv From pmatilai at welho.com Tue Mar 1 08:11:40 2005 From: pmatilai at welho.com (Panu Matilainen) Date: Tue, 1 Mar 2005 10:11:40 +0200 (EET) Subject: yum+mach2 for fedora-development tree pseudo-release In-Reply-To: <1109663105.23615.36.camel@cutter> References: <1109631793.21503.62.camel@cutter> <1109641036.2899.43.camel@bree.local.net> <1109663105.23615.36.camel@cutter> Message-ID: On Tue, 1 Mar 2005, seth vidal wrote: > A couple of minor issues: > > yum will install file deps by: > yum install /path/to/file/you/need > > but it won't do virtual provides in that syntax > > so 'yum install foo' when foo is not a package name doesn't do anything. > I compromised on this syntax by adding resolvedep. So you can do: > 'yum resolvedep foo' which spits back 1 package that provides foo. Then > you can pass that output to yum install and you're on your way again. > That's going to require a bit more hacking I think to make it all work. > There are a fair number of virtual provides in packages that make some > of this not work. Heh, oh the memories this reminds me of... Apt had/has a seriously tough time trying to deal with all the virtual provides in the ways people are using it, especially as buildrequires and the build-dep operation's logic was something I never could really digest. :) > > I could easily add virtual provides support into 'yum install' in > cvs-HEAD and this problem goes away - I'm just wondering would it make > more sense to hack yum to make it comply with mach2 or to hack mach2 to > make it work with yum. You mean "hack mach to make it work with yum" as in "import yum..." kind of thing? If so, I think that's a far better approach than invoking it through the command line with all the overhead of re-re-re-reading in metadata etc - it'd not only improve speed but give far more control over things. Regardless of that, supporting virtual provides in 'yum install' would be a nice little addition to yum anyway.
- Panu - From fedora at leemhuis.info Tue Mar 1 08:32:14 2005 From: fedora at leemhuis.info (Thorsten Leemhuis) Date: Tue, 01 Mar 2005 09:32:14 +0100 Subject: yum+mach2 for fedora-development tree pseudo-release In-Reply-To: <1109663105.23615.36.camel@cutter> References: <1109631793.21503.62.camel@cutter> <1109641036.2899.43.camel@bree.local.net> <1109663105.23615.36.camel@cutter> Message-ID: <1109665934.7576.9.camel@thl.ct.heise.de> On Tuesday, 01.03.2005, at 02:45 -0500, seth vidal wrote: > On Mon, 2005-02-28 at 20:37 -0500, Jeremy Katz wrote: > >On Mon, 2005-02-28 at 18:03 -0500, seth vidal wrote: > > > > > A couple of minor issues: > > yum will install file deps by: > yum install /path/to/file/you/need > > but it won't do virtual provides in that syntax [...] > I could easily add virtual provides support into 'yum install' in > cvs-HEAD and this problem goes away - I'm just wondering would it make > more sense to hack yum to make it comply with mach2 or to hack mach2 to > make it work with yum. IMHO it should be done in yum. I often ran into this problem already during normal yum usage (e.g. built tests outside a build system / directly from CVS). Typing $ yum install XFree86-devel is easier and a lot faster than $ yum provides XFree86-devel $ yum install xorg-x11-devel CU thl From skvidal at phy.duke.edu Wed Mar 2 06:29:38 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Wed, 02 Mar 2005 01:29:38 -0500 Subject: yum+mach2 for fedora-development tree pseudo-release In-Reply-To: References: <1109631793.21503.62.camel@cutter> <1109641036.2899.43.camel@bree.local.net> <1109663105.23615.36.camel@cutter> Message-ID: <1109744978.5543.38.camel@cutter> >> >> I could easily add virtual provides support into 'yum install' in >> cvs-HEAD and this problem goes away - I'm just wondering would it make >> more sense to hack yum to make it comply with mach2 or to hack mach2 to >> make it work with yum.
> >You mean "hack mach to make it work with yum" as in "import yum..." kind >of thing? If so, I think that's a far better approach than invoking it >through the command line with all the overhead of re-re-re-reading in metadata >etc - it'd not only improve speed but give far more control over things. >Regardless of that, supporting virtual provides in 'yum install' would be >a nice little addition to yum anyway. > okay, I added in the virtual provides to the yum install options. It's just sorta hacked in right now; it needs to be abstracted and cleaned up a bit, but it does what you'd expect and takes all sorts of whacked out things like:

yum install "foo > 1.1"
yum install "foo == 1.1"
yum install "foo = 1.1" (same as above)
yum install "foo < 1.1"

<= and >= are also taken. != is not accepted, and don't make me slap anyone who asks. :) etc etc etc. It's only an option to install atm, though, not to update, for obvious reasons. So yum 2.3.1 should work just fine with mach for what it wants to do. -sv From skvidal at phy.duke.edu Wed Mar 2 06:30:35 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Wed, 02 Mar 2005 01:30:35 -0500 Subject: yum+mach2 for fedora-development tree pseudo-release In-Reply-To: <1109665934.7576.9.camel@thl.ct.heise.de> References: <1109631793.21503.62.camel@cutter> <1109641036.2899.43.camel@bree.local.net> <1109663105.23615.36.camel@cutter> <1109665934.7576.9.camel@thl.ct.heise.de> Message-ID: <1109745035.5543.40.camel@cutter> >> I could easily add virtual provides support into 'yum install' in >> cvs-HEAD and this problem goes away - I'm just wondering would it make >> more sense to hack yum to make it comply with mach2 or to hack mach2 to >> make it work with yum. > >IMHO it should be done in yum. I often ran into this problem already >during normal yum usage (e.g. built tests outside a build system / >directly from CVS).
>Typing
>$ yum install XFree86-devel
>is easier and a lot faster than
>$ yum provides XFree86-devel
>
>$ yum install xorg-x11-devel
>
you could have done this all along, though, with resolvedep. However, it's been implemented. The major reason I don't like it in 'install' is that it further complicates the things a user could put there; it's just more code to sift through when someone breaks something. -sv From skvidal at phy.duke.edu Wed Mar 2 22:38:14 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Wed, 02 Mar 2005 17:38:14 -0500 Subject: Build system ideas/requirements Message-ID: <1109803094.16819.25.camel@cutter> Items for thought:

1. build system using comps.xml for chroot install definitions (base, build, minimal) - it would make sense and we could leverage the groupinstall/update/remove mechanism in yum.

2. I talked to Jeremy some about this and I think if we do all rpmdb transactions from OUTSIDE of the chroot and only build in the chroot then we should be able to safely juggle multiple rpmdb versions from host to chroot systems.

3. there's no reason to not develop a specialized script that uses the yum modules that can be run by something like mach-helper for making chroots reasonably correctly.

4. we're going to run into problems with contention for the rpm transaction lock on the host system b/c rpm likes to lock the rpmdb on the host system even when operating on the chroot system. A queuing mechanism for access to that lock so we know what else is left in the process is not a bad idea.

From what I can think, breaking up the build system into:
- something that watches cvs for things to be built
- something that makes/handles/cleans up the chroots
- something that spawns the builds
- something that deals with the results

Is it reasonable to focus on these as modules to be developed?
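The four pieces listed above might be sketched as a minimal pipeline skeleton. Everything here is a hypothetical illustration for discussion - none of these class names or interfaces exist in mach or yum:

```python
# Hypothetical skeleton of the four buildsystem modules listed above.
# All names and interfaces are invented for illustration; the real
# pieces would wrap cvs, yum --installroot, and rpmbuild.

class CvsWatcher:
    """Yields package names that have been tagged for building."""
    def __init__(self, queued):
        self.queued = list(queued)

    def pending(self):
        return list(self.queued)

class ChrootManager:
    """Makes, hands out, and cleans up buildroots."""
    def make_chroot(self, target):
        return {"target": target, "clean": True}

    def destroy(self, chroot):
        chroot["clean"] = False

class BuildSpawner:
    """Runs one build inside a chroot (stubbed: always 'succeeds')."""
    def build(self, chroot, srpm):
        return {"srpm": srpm, "status": "ok", "target": chroot["target"]}

class ResultHandler:
    """Collects finished builds, e.g. for pushing into a repository."""
    def __init__(self):
        self.results = []

    def record(self, result):
        self.results.append(result)

def run_queue(srpms, target="fedora-development-i386-core"):
    watcher = CvsWatcher(srpms)
    roots = ChrootManager()
    spawner = BuildSpawner()
    out = ResultHandler()
    for srpm in watcher.pending():
        chroot = roots.make_chroot(target)   # fresh buildroot per build
        out.record(spawner.build(chroot, srpm))
        roots.destroy(chroot)                # dispose after the build
    return out.results
```

The point of the sketch is only the separation of concerns: the watcher and the result handler never touch a chroot, so they could live in a separate end-user package later.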
-sv From skvidal at phy.duke.edu Wed Mar 2 22:41:17 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Wed, 02 Mar 2005 17:41:17 -0500 Subject: Build system ideas/requirements In-Reply-To: <1109803094.16819.25.camel@cutter> References: <1109803094.16819.25.camel@cutter> Message-ID: <1109803277.16819.27.camel@cutter> > > - something that watches cvs for things to be built > - something that makes/handles/cleans up the chroots > - something that spawns the builds > - something that deals with the results > >Is it reasonable to focus on these as modules to be developed? One more addendum - having the last 3 be a separate package end users could interact with would be very useful I suspect. -sv From katzj at redhat.com Wed Mar 2 22:48:03 2005 From: katzj at redhat.com (Jeremy Katz) Date: Wed, 02 Mar 2005 17:48:03 -0500 Subject: Build system ideas/requirements In-Reply-To: <1109803094.16819.25.camel@cutter> References: <1109803094.16819.25.camel@cutter> Message-ID: <1109803683.3333.6.camel@bree.local.net> On Wed, 2005-03-02 at 17:38 -0500, seth vidal wrote: > 4. we're going to run into problems with contention for the rpm > transaction lock on the host system b/c rpm likes to lock the rpmdb on > the host system even when operating on the chroot system. A queuing > mechanism for access to the that lock so we know what else is left in > the process is not a bad idea. Frankly, we should probably consider this a bug and get it fixed. I _think_ it's actually fixed in rpm-4_4, so we should be able to backport it and fix this. Actually, it's apparently even fixed in 4.3.3 (move global /var/lock/rpm/transaction to dbpath in CHANGES) > >From what I can think breaking up the build system into: > - something that watches cvs for things to be built One thing that comes to my mind is that you probably don't want to be watching CVS. Having it be an explicit "request a build now" makes more sense (which can then be integrated as a makefile target eventually, etc). 
I just tend to prefer having "do a build" be an explicit action rather than a side effect. > - something that makes/handles/cleans up the chroots Yes. > - something that spawns the builds > - something that deals with the results These two are likely to be fairly related. Perhaps even the same thing. > Is it reasonable to focus on these as modules to be developed? The two big things are probably the "handle chroots" piece and "spawn builds". Especially if we want to go the route of a new chroot for every build. So I'd mostly focus on those two first and I think the other stuff will mostly fall out on its own. Jeremy From katzj at redhat.com Wed Mar 2 22:49:39 2005 From: katzj at redhat.com (Jeremy Katz) Date: Wed, 02 Mar 2005 17:49:39 -0500 Subject: Build system ideas/requirements In-Reply-To: <1109803277.16819.27.camel@cutter> References: <1109803094.16819.25.camel@cutter> <1109803277.16819.27.camel@cutter> Message-ID: <1109803779.3333.9.camel@bree.local.net> On Wed, 2005-03-02 at 17:41 -0500, seth vidal wrote: > > - something that watches cvs for things to be built > > - something that makes/handles/cleans up the chroots > > - something that spawns the builds > > - something that deals with the results > > > >Is it reasonable to focus on these as modules to be developed? > > One more addendum - having the last 3 be a separate package end users > could interact with would be very useful I suspect. Definitely the second and some part of the third. Having the other two in the set of "stuff we let you install and have your own personal copy of the buildsystem" may be overkill. Although all of it should be available, it's just what we advertise makes sense for people to do as their setup. 
Jeremy From thias at spam.spam.spam.spam.spam.spam.spam.egg.and.spam.freshrpms.net Wed Mar 2 23:01:41 2005 From: thias at spam.spam.spam.spam.spam.spam.spam.egg.and.spam.freshrpms.net (Matthias Saou) Date: Thu, 3 Mar 2005 00:01:41 +0100 Subject: Build system ideas/requirements In-Reply-To: <1109803683.3333.6.camel@bree.local.net> References: <1109803094.16819.25.camel@cutter> <1109803683.3333.6.camel@bree.local.net> Message-ID: <20050303000141.543dd281@python2> Jeremy Katz wrote : > > - something that spawns the builds > > - something that deals with the results > > These two are likely to be fairly related. Perhaps even the same thing. Yes and no : Since we're probably going to want to support different archs coming from different build machines some day, the build spawning should be per build host, whereas the dealing with the results will be partly per build host (the part that you consider fairly related) but also partly in a central location which will gather everything, right? Other than that, I agree with Jeremy regarding the fact that builds should be explicitly requested and not automatic for every CVS commit, or even every commit matching certain criteria (i.e. a change in version and/or release).
Matthias -- Clean custom Red Hat Linux rpm packages : http://freshrpms.net/ Fedora Core release 3 (Heidelberg) - Linux kernel 2.6.10-1.770_FC3 Load : 0.17 0.29 0.65 From skvidal at phy.duke.edu Wed Mar 2 23:08:52 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Wed, 02 Mar 2005 18:08:52 -0500 Subject: Build system ideas/requirements In-Reply-To: <20050303000141.543dd281@python2> References: <1109803094.16819.25.camel@cutter> <1109803683.3333.6.camel@bree.local.net> <20050303000141.543dd281@python2> Message-ID: <1109804932.16819.43.camel@cutter> >Yes and no : Since we're probably going to want to support different archs >coming from different build machines some day, the build spawning should be >per build host, whereas the dealing with the results will be partly per >build host (the part that you consider fairly related) but also partly in a >central location which will gather everything, right? well from a queuing standpoint - submitting build reqs to a central system should be able to deal with the 'for which archs' and 'where do you go for those archs' questions. >Other than that, I agree with Jeremy regarding the fact the builds should >be explicitly requested and not automatic for every CVS commit, or even >every commit matching certain criterias (i.e. a change in version and/or >release). sure - I guess I was thinking of what gafton had mentioned before. Some way for someone to tag a release as 'buildmeplease' in cvs and have it just do it. -sv
From gafton at redhat.com Wed Mar 2 23:11:50 2005 From: gafton at redhat.com (Cristian Gafton) Date: Wed, 2 Mar 2005 18:11:50 -0500 (EST) Subject: Build system ideas/requirements In-Reply-To: <1109803683.3333.6.camel@bree.local.net> References: <1109803094.16819.25.camel@cutter> <1109803683.3333.6.camel@bree.local.net> Message-ID: On Wed, 2 Mar 2005, Jeremy Katz wrote: >>> From what I can think breaking up the build system into: >> - something that watches cvs for things to be built > > One thing that comes to my mind is that you probably don't want to be > watching CVS. Having it be an explicit "request a build now" makes more > sense (which can then be integrated as a makefile target eventually, > etc). I just tend to prefer having "do a build" be an explicit action > rather than a side effect. I agree, I would rather have a "cvs tag build" or "cvs tag build-test" or something like that. That will queue a build request and provide some sort of URL where one could watch the status. > The two big things are probably the "handle chroots" piece and "spawn > builds". Especially if we want to go the route of a new chroot for > every build. So I'd mostly focus on those two first and I think the > other stuff will mostly fall out on its own. But wait, there is more! Ok, so we have chroots, we're spawning builds, what do we do with the resulting packages? What is their path through the process? So far we have:

A. Buildroot provisioning
- yum-based scriptlet
- users can run it themselves and create their own trees
- for speed reasons, can we assume that buildroots are generic (ie, have the devel stuff installed, but are not customized for the needs of any particular src.rpm build)

B. Spawning builds
- assuming a queue of some sort of things that need building
- do we have a master controller for builds or do we let all buildhosts fight to empty out the build queue?
- once a buildroot is chosen:
  - we customize it according to the src.rpm's buildrequires
  - launch the "chroot ... rpmbuild --rebuild ..." job
  - stdout and stderr go to a log accessible online in real time?
  - extract the binary packages and drop them somewhere
- after the build is done
  - dispose of the buildroot?
  - set up a new buildroot again (async?)

C. Package management
- we have a bunch of new packages built for a particular tree
- what is the qualification process?
  - QA?
  - pushing stuff out?

Anybody else have any other big components we need to concentrate on?

Cristian -- ---------------------------------------------------------------------- Cristian Gafton -- gafton at redhat.com -- Red Hat, Inc. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ "Linux is a leprosy; and is having a deleterious effect on the U.S. IT industry because it is steadily depreciating the value of the software industry sector." -- Kenneth Brown, President, Alexis de Tocqueville Institution From roland at redhat.com Wed Mar 2 23:16:20 2005 From: roland at redhat.com (Roland McGrath) Date: Wed, 2 Mar 2005 15:16:20 -0800 Subject: Build system ideas/requirements In-Reply-To: Jeremy Katz's message of Wednesday, 2 March 2005 17:49:39 -0500 <1109803779.3333.9.camel@bree.local.net> Message-ID: <200503022316.j22NGKkV024582@magilla.sf.frob.com> > On Wed, 2005-03-02 at 17:41 -0500, seth vidal wrote: > > > - something that watches cvs for things to be built > > > - something that makes/handles/cleans up the chroots > > > - something that spawns the builds > > > - something that deals with the results > > > > > >Is it reasonable to focus on these as modules to be developed? > > > > One more addendum - having the last 3 be a separate package end users > > could interact with would be very useful I suspect. > > Definitely the second and some part of the third.
> Having the other two > in the set of "stuff we let you install and have your own personal copy > of the buildsystem" may be overkill. Although all of it should be > available, it's just what we advertise makes sense for people to do as > their setup. All of it is useful. We want people to be able to set up their own build engines running their favorite not-really-supported platform, including their option to drive it automagically from our repositories, and to produce results web pages that look just like the canonical ones. From roland at redhat.com Wed Mar 2 23:31:43 2005 From: roland at redhat.com (Roland McGrath) Date: Wed, 2 Mar 2005 15:31:43 -0800 Subject: Build system ideas/requirements In-Reply-To: Cristian Gafton's message of Wednesday, 2 March 2005 18:11:50 -0500 Message-ID: <200503022331.j22NVhcB024608@magilla.sf.frob.com> > I agree, I would rather have a "cvs tag build" or "cvs tag build-test" or > something like that. That will queue a build request and provide some sort > of URL where one could watch the status. Any reason this should be a weird cvs hook like this? Why not just "curl http://build-it.fedora.redhat.com/pkgname#cvstag" (that is of course done by "make build")? > - assuming a queue of some sort of things that need building > - do we have a master controller for builds or do we let all buildhosts > fight to empty out the build queue? MCR does something in between about this. There may be some wisdom there to be had from experience with central queueing and driving disparate and sometimes flaky build iron. (Internally, contact testing at redhat.com for the hackers who know MCR.) > - stdout and stderr go to a log accessible online in real time? Oh please yes. There is cruft around from tinderbox-like hacks to htmlify build logs and give good highlighting and easy navigation for finding the error messages, which is a lot quicker for developers than grovelling plain logs like beehive users do today.
> - after the build is done > - dispose of the buildroot? In the case of failed builds, it would be a nice improvement to have the loser state sit around for a brief time so a developer can go investigate why the buildsystem barfed. Perhaps move it aside, and asynchronously nuke LRU buildroots triggered by free disk space checks. OTOH, with developers always able to do the chroot builds themselves first, perhaps this will not come up nearly so often as it does with beehive. From sopwith at redhat.com Wed Mar 2 23:33:50 2005 From: sopwith at redhat.com (Elliot Lee) Date: Wed, 2 Mar 2005 18:33:50 -0500 (EST) Subject: Build system ideas/requirements In-Reply-To: <200503022331.j22NVhcB024608@magilla.sf.frob.com> References: <200503022331.j22NVhcB024608@magilla.sf.frob.com> Message-ID: On Wed, 2 Mar 2005, Roland McGrath wrote: > > - assuming a queue of some sort of things that need building > > - do we have a master controller for builds or do we let all buildhosts > > fight to empty out the build queue? You probably want a queue manager and scheduler to handle things like prioritization and failover. -- Elliot From enrico.scholz at informatik.tu-chemnitz.de Thu Mar 3 00:40:33 2005 From: enrico.scholz at informatik.tu-chemnitz.de (Enrico Scholz) Date: Thu, 03 Mar 2005 01:40:33 +0100 Subject: Build system ideas/requirements In-Reply-To: <1109803094.16819.25.camel@cutter> (seth vidal's message of "Wed, 02 Mar 2005 17:38:14 -0500") References: <1109803094.16819.25.camel@cutter> Message-ID: <87bra11rzy.fsf@kosh.ultra.csn.tu-chemnitz.de> skvidal at phy.duke.edu (seth vidal) writes: > 4. we're going to run into problems with contention for the rpm > transaction lock on the host system b/c rpm likes to lock the rpmdb on > the host system even when operating on the chroot system. Should not be a problem. Just create a new namespace, mount the rpm database both into the host and the chroot system and execute rpm then. 
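Enrico's namespace suggestion above could look something like the following dry-run sketch. All paths are illustrative, and the sketch uses today's util-linux unshare(1); in 2005 this would have been a small C wrapper doing clone(CLONE_NEWNS). The commands are only echoed, since really executing them needs root:

```shell
# Dry-run sketch of the namespace idea: give rpm a private mount
# namespace, bind the host rpmdb into the chroot, and run rpm there,
# so host and chroot see one database. CHROOT is a made-up path.
CHROOT=/var/lib/mach/roots/fedora-development-i386

run() { echo "+ $*"; }   # drop the echo to really execute, as root

run unshare --mount /bin/sh -c "
  mount --bind /var/lib/rpm $CHROOT/var/lib/rpm &&
  chroot $CHROOT rpm -qa
"
```

Because the bind mount exists only inside that mount namespace, the host's view of the chroot is untouched once the command exits.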
Enrico From thomas at apestaart.org Thu Mar 3 08:36:59 2005 From: thomas at apestaart.org (Thomas Vander Stichele) Date: Thu, 03 Mar 2005 09:36:59 +0100 Subject: Build system ideas/requirements In-Reply-To: <1109803094.16819.25.camel@cutter> References: <1109803094.16819.25.camel@cutter> Message-ID: <1109839019.5529.7.camel@otto.amantes> Hi, > 1. build system using comps.xml for chroot install definitions (base, > build, minimal) - it would make sense and we could leverage the > groupinstall/update/remove mechanism in yum. Not sure what this would achieve ? In mach, these three "target" names mean the following:
- minimal: a minimal set of packages that allows you to chroot into it and run bash
- base: the same set of packages, but with all packages needed to make the rpm db consistent added
- build: a bunch of additional packages that rpmbuild likes to have (patch, gcc, ...)

Not sure what (some magic link to comps.xml) would bring more. > 2. I talked to Jeremy some about this and I think if we do all rpmdb > transactions from OUTSIDE of the chroot and only build in the chroot then > we should be able to safely juggle multiple rpmdb versions from host to > chroot systems. Yep - that's how mach 2 has always done it after long discussions with jbj. The alternative is to put specially compiled rpms in the root. > 3. there's no reason to not develop a specialized script that uses the > yum modules that can be run by something like mach-helper for making > chroots reasonably correctly. What does "reasonably correctly" mean here ? I mean, is there anything wrong with the chroots that currently can be created with rpm --root .../apt-get .../yum --installroot=... ? > 4. we're going to run into problems with contention for the rpm > transaction lock on the host system b/c rpm likes to lock the rpmdb on > the host system even when operating on the chroot system. A queuing > mechanism for access to that lock so we know what else is left in > the process is not a bad idea.
Yeah, that'd be nice to get fixed. mach still has a global file lock for this reason. Thomas Dave/Dina : future TV today ! - http://www.davedina.org/ <-*- thomas (dot) apestaart (dot) org -*-> he strokes your hair to keep you down will you fight ? let's see you fight <-*- thomas (at) apestaart (dot) org -*-> URGent, best radio on the net - 24/7 ! - http://urgent.fm/ From katzj at redhat.com Thu Mar 3 14:33:04 2005 From: katzj at redhat.com (Jeremy Katz) Date: Thu, 03 Mar 2005 09:33:04 -0500 Subject: Build system ideas/requirements In-Reply-To: <1109839019.5529.7.camel@otto.amantes> References: <1109803094.16819.25.camel@cutter> <1109839019.5529.7.camel@otto.amantes> Message-ID: <1109860384.3333.19.camel@bree.local.net> On Thu, 2005-03-03 at 09:36 +0100, Thomas Vander Stichele wrote: > > 1. build system using comps.xml for chroot install definitions (base, > > build, minimal) - it would make sense and we could leverage the > > groupinstall/update/remove mechanism in yum. > > Not sure what this would achieve ? In mach, these three "target" names > mean the following: > - minimal: a minimal set of packages that allows you to chroot into it > and run bash > - base: the same set of packages, but with all packages needed to make > the rpm db consistent added > - build: a bunch of additional packages that rpmbuild likes to have > (patch, gcc, ...) > > Not sure what (some magic link to comps.xml) would bring more. The big thing you gain (imho) is the easy and obvious answer of "what do these targets mean". Instead of having it in mach specific config files somewhere. Jeremy From skvidal at phy.duke.edu Fri Mar 4 04:37:43 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Thu, 03 Mar 2005 23:37:43 -0500 Subject: Build system ideas/requirements In-Reply-To: <1109839019.5529.7.camel@otto.amantes> References: <1109803094.16819.25.camel@cutter> <1109839019.5529.7.camel@otto.amantes> Message-ID: <1109911063.23585.33.camel@cutter> > Not sure what this would achieve ? 
> In mach, these three "target" names > mean the following: > - minimal: a minimal set of packages that allows you to chroot into it > and run bash > - base: the same set of packages, but with all packages needed to make > the rpm db consistent added > - build: a bunch of additional packages that rpmbuild likes to have > (patch, gcc, ...) > > Not sure what (some magic link to comps.xml) would bring more. but instead of having to discern the list w/no relationship information from this:

# Fedora Core Development
packages['fedora-development-i386-core'] = {
    'dir': 'fedoracore-development-i386',
    'minimal': 'bash glibc yum python createrepo',
    'base': 'coreutils findutils openssh-server',
    'build': 'dev rpm-build make gcc tar gzip patch ' +
             'unzip bzip2 diffutils cpio elfutils',
}

we can use an xml format that various folks are already very familiar with. > > 2. I talked to Jeremy some about this and I think if we do all rpmdb > > transactions from OUTSIDE of the chroot and only build in the chroot then > > we should be able to safely juggle multiple rpmdb versions from host to > > chroot systems. > > Yep - that's how mach 2 has always done it after long discussions with > jbj. The alternative is to put specially compiled rpms in the root. I'd also like to have no calls to the rpm cli binary in any buildroot system. we should never be making the buildroot with --nodeps or --force so I don't see a motive to use rpm for erasures or additions. > > 3. there's no reason to not develop a specialized script that uses the > > yum modules that can be run by something like mach-helper for making > > chroots reasonably correctly. > > What does "reasonably correctly" mean here ? I mean, is there anything > wrong with the chroots that currently can be created with rpm --root > .../apt-get .../yum --installroot=... ? okay so what do we get out of making the buildsystem capable of using yum/apt-get/rpm --aid/whatever for doing the installs? what's the perk?
If we're building this for fedora why not just make a script that imports the yum modules and works out of the available infrastructure? Is there something that's needed in the yum modules to make this work? -sv From thomas at apestaart.org Fri Mar 4 14:17:30 2005 From: thomas at apestaart.org (Thomas Vander Stichele) Date: Fri, 04 Mar 2005 15:17:30 +0100 Subject: Build system ideas/requirements In-Reply-To: <1109911063.23585.33.camel@cutter> References: <1109803094.16819.25.camel@cutter> <1109839019.5529.7.camel@otto.amantes> <1109911063.23585.33.camel@cutter> Message-ID: <1109945850.23220.1.camel@otto.amantes> Hi, On Thu, 2005-03-03 at 23:37 -0500, seth vidal wrote: > > Not sure what this would achieve ? In mach, these three "target" names > > mean the following: > > - minimal: a minimal set of packages that allows you to chroot into it > > and run bash > > - base: the same set of packages, but with all packages needed to make > > the rpm db consistent added > > - build: a bunch of additional packages that rpmbuild likes to have > > (patch, gcc, ...) > > > > Not sure what (some magic link to comps.xml) would bring more. > > but instead of having to discern the list w/no relationship information > from this: well, this passes by the fact that a) comps.xml doesn't have this concept of minimal b) comps.xml IIRC has a completely different understanding of "base" than what I just said (the minimum self-consistent set of packages that give you bash) c) there are distros mach is used for that do not have comps.xml files. So, sure, I can use comps.xml. It's just that it doesn't give me exactly what I need in these cases. Thomas Dave/Dina : future TV today ! - http://www.davedina.org/ <-*- thomas (dot) apestaart (dot) org -*-> Come on baby take a walk with me honey Tell me who do you love Who do you love <-*- thomas (at) apestaart (dot) org -*-> URGent, best radio on the net - 24/7 ! 
- http://urgent.fm/ From thomas at apestaart.org Fri Mar 4 14:21:19 2005 From: thomas at apestaart.org (Thomas Vander Stichele) Date: Fri, 04 Mar 2005 15:21:19 +0100 Subject: yum+mach2 for fedora-development tree pseudo-release In-Reply-To: References: <1109631793.21503.62.camel@cutter> <1109641036.2899.43.camel@bree.local.net> <1109663105.23615.36.camel@cutter> Message-ID: <1109946079.23220.5.camel@otto.amantes> Hi, > You mean "hack mach to make it work with yum" as in "import yum..." kind > of thing? If so, I think that's far better approach than invoking it from > through command line with all the overhead of re-re-re-reading in metadata > etc - it'd not only improve speed but give far more control over things. > Regardless of that, supporting virtual provides in 'yum install' would be > a nice little addition to yum anyway. There's a big conceptual problem with that approach that I still don't have a satisfying answer for. Mach is meant to be run as user - I know way too little about security to be trusted to write perfectly safe python code. That's the biggest reason why mach-helper exists, and people tell me that this is indeed the smartest route to take. Of course it'd be easier for me as a programmer to just do everything in python. But if we did, then we'd need a good way of gaining and then dropping privileges for these operations, and I'd still feel very insecure about having written something potentially very harmful. I've looked for other projects that have similar security issues, but haven't found any of them tackling this particular problem. Suggestions ? Thomas Dave/Dina : future TV today ! - http://www.davedina.org/ <-*- thomas (dot) apestaart (dot) org -*-> I will play you like a shark And I'll clutch at your heart I'll come flying like a spark To enflame you <-*- thomas (at) apestaart (dot) org -*-> URGent, best radio on the net - 24/7 ! 
- http://urgent.fm/ From skvidal at phy.duke.edu Fri Mar 4 15:04:48 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Fri, 04 Mar 2005 10:04:48 -0500 Subject: Build system ideas/requirements In-Reply-To: <1109945850.23220.1.camel@otto.amantes> References: <1109803094.16819.25.camel@cutter> <1109839019.5529.7.camel@otto.amantes> <1109911063.23585.33.camel@cutter> <1109945850.23220.1.camel@otto.amantes> Message-ID: <1109948688.23585.77.camel@cutter> > well, this passes by the fact that > a) comps.xml doesn't have this concept of minimal why not? Just make a new group, call it minimal. > b) comps.xml IIRC has a completely different understanding of "base" > than what I just said (the minimum self-consistent set of packages that > give you bash) again - call it chroot-base - but the point is the same. > c) there are distros mach is used for that do not have comps.xml files. I'm not talking about using the distro provided comps.xml - I'm talking about using that format for specifying the packages installed in the chroots. -sv From enrico.scholz at informatik.tu-chemnitz.de Fri Mar 4 15:26:18 2005 From: enrico.scholz at informatik.tu-chemnitz.de (Enrico Scholz) Date: Fri, 04 Mar 2005 16:26:18 +0100 Subject: Build system ideas/requirements In-Reply-To: <1109948688.23585.77.camel@cutter> (seth vidal's message of "Fri, 04 Mar 2005 10:04:48 -0500") References: <1109803094.16819.25.camel@cutter> <1109839019.5529.7.camel@otto.amantes> <1109911063.23585.33.camel@cutter> <1109945850.23220.1.camel@otto.amantes> <1109948688.23585.77.camel@cutter> Message-ID: <876507zb39.fsf@kosh.ultra.csn.tu-chemnitz.de> skvidal at phy.duke.edu (seth vidal) writes: >> c) there are distros mach is used for that do not have comps.xml files. > > I'm not talking about using the distro provided comps.xml - I'm talking > about using that format for specifying the packages installed in the > chroots. What would be the advantage of this? 
You will have to maintain yet another configuration file with an ugly format (XML), and it will not work with other depsolvers like apt or smartpm. Enrico From skvidal at phy.duke.edu Fri Mar 4 15:33:14 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Fri, 04 Mar 2005 10:33:14 -0500 Subject: Build system ideas/requirements In-Reply-To: <876507zb39.fsf@kosh.ultra.csn.tu-chemnitz.de> References: <1109803094.16819.25.camel@cutter> <1109839019.5529.7.camel@otto.amantes> <1109911063.23585.33.camel@cutter> <1109945850.23220.1.camel@otto.amantes> <1109948688.23585.77.camel@cutter> <876507zb39.fsf@kosh.ultra.csn.tu-chemnitz.de> Message-ID: <1109950394.23585.80.camel@cutter> > >> c) there are distros mach is used for that do not have comps.xml files. > > > > I'm not talking about using the distro provided comps.xml - I'm talking > > about using that format for specifying the packages installed in the > > chroots. > > What would be the advantage of this? You will have to maintain yet > another configuration file with an ugly format (XML), and it will not > work with other depsolvers like apt or smartpm. Right, I'm having trouble trying to figure out why we're bothering with support for other depsolvers at this time. We just need to build now. 
-sv From enrico.scholz at informatik.tu-chemnitz.de Fri Mar 4 15:52:46 2005 From: enrico.scholz at informatik.tu-chemnitz.de (Enrico Scholz) Date: Fri, 04 Mar 2005 16:52:46 +0100 Subject: Build system ideas/requirements In-Reply-To: <1109950394.23585.80.camel@cutter> (seth vidal's message of "Fri, 04 Mar 2005 10:33:14 -0500") References: <1109803094.16819.25.camel@cutter> <1109839019.5529.7.camel@otto.amantes> <1109911063.23585.33.camel@cutter> <1109945850.23220.1.camel@otto.amantes> <1109948688.23585.77.camel@cutter> <876507zb39.fsf@kosh.ultra.csn.tu-chemnitz.de> <1109950394.23585.80.camel@cutter> Message-ID: <871xavz9v5.fsf@kosh.ultra.csn.tu-chemnitz.de> skvidal at phy.duke.edu (seth vidal) writes: >> > I'm not talking about using the distro provided comps.xml - I'm >> > talking about using that format for specifying the packages >> > installed in the chroots. >> >> What would be the advantage of this? You will have to maintain yet >> another configuration file with an ugly format (XML), and it will not >> work with other depsolvers like apt or smartpm. > > Right, I'm having trouble trying to figure out why we're bothering with > support for other depsolvers at this time. We just need to build now. When we want to build now, I do not understand why code for new technology (comps.xml) shall be added, while existing technology (manual package-lists) is already working... 
Enrico From skvidal at phy.duke.edu Fri Mar 4 16:01:18 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Fri, 04 Mar 2005 11:01:18 -0500 Subject: Build system ideas/requirements In-Reply-To: <871xavz9v5.fsf@kosh.ultra.csn.tu-chemnitz.de> References: <1109803094.16819.25.camel@cutter> <1109839019.5529.7.camel@otto.amantes> <1109911063.23585.33.camel@cutter> <1109945850.23220.1.camel@otto.amantes> <1109948688.23585.77.camel@cutter> <876507zb39.fsf@kosh.ultra.csn.tu-chemnitz.de> <1109950394.23585.80.camel@cutter> <871xavz9v5.fsf@kosh.ultra.csn.tu-chemnitz.de> Message-ID: <1109952078.23585.82.camel@cutter> On Fri, 2005-03-04 at 16:52 +0100, Enrico Scholz wrote: > skvidal at phy.duke.edu (seth vidal) writes: > > >> > I'm not talking about using the distro provided comps.xml - I'm > >> > talking about using that format for specifying the packages > >> > installed in the chroots. > >> > >> What would be the advantage of this? You will have to maintain yet > >> another configuration file with an ugly format (XML), and it will not > >> work with other depsolvers like apt or smartpm. > > > > Right, I'm having trouble trying to figure out why we're bothering with > > support for other depsolvers at this time. We just need to build now. > > When we want to build now, I do not understand why code for new > technology (comps.xml) shall be added, while existing technology > (manual package-lists) is already working... > > comps.xml isn't new technology. All the support is there. No waiting. Zero-day. 
-sv From enrico.scholz at informatik.tu-chemnitz.de Fri Mar 4 16:53:39 2005 From: enrico.scholz at informatik.tu-chemnitz.de (Enrico Scholz) Date: Fri, 04 Mar 2005 17:53:39 +0100 Subject: Build system ideas/requirements In-Reply-To: <1109952078.23585.82.camel@cutter> (seth vidal's message of "Fri, 04 Mar 2005 11:01:18 -0500") References: <1109803094.16819.25.camel@cutter> <1109839019.5529.7.camel@otto.amantes> <1109911063.23585.33.camel@cutter> <1109945850.23220.1.camel@otto.amantes> <1109948688.23585.77.camel@cutter> <876507zb39.fsf@kosh.ultra.csn.tu-chemnitz.de> <1109950394.23585.80.camel@cutter> <871xavz9v5.fsf@kosh.ultra.csn.tu-chemnitz.de> <1109952078.23585.82.camel@cutter> Message-ID: <87wtsnxsh8.fsf@kosh.ultra.csn.tu-chemnitz.de> skvidal at phy.duke.edu (seth vidal) writes: >> >> > I'm not talking about using the distro provided comps.xml - I'm >> >> > talking about using that format for specifying the packages >> >> > installed in the chroots. >> >> >> >> What would be the advantage of this? You will have to maintain yet >> >> another configuration file with an ugly format (XML), and it will not >> >> work with other depsolvers like apt or smartpm. >> > >> > Right, I'm having trouble trying to figure out why we're bothering with >> > support for other depsolvers at this time. We just need to build now. >> >> When we want to build now, I do not understand why code for new >> technology (comps.xml) shall be added, while existing technology >> (manual package-lists) is already working... > > comps.xml isn't new technology. All the support is there. mach2 has already support for specifying the location of the comps.xml file? And the comps.xml files for the buildroots exist already? 
Enrico From skvidal at phy.duke.edu Fri Mar 4 17:10:22 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Fri, 04 Mar 2005 12:10:22 -0500 Subject: Build system ideas/requirements In-Reply-To: <87wtsnxsh8.fsf@kosh.ultra.csn.tu-chemnitz.de> References: <1109803094.16819.25.camel@cutter> <1109839019.5529.7.camel@otto.amantes> <1109911063.23585.33.camel@cutter> <1109945850.23220.1.camel@otto.amantes> <1109948688.23585.77.camel@cutter> <876507zb39.fsf@kosh.ultra.csn.tu-chemnitz.de> <1109950394.23585.80.camel@cutter> <871xavz9v5.fsf@kosh.ultra.csn.tu-chemnitz.de> <1109952078.23585.82.camel@cutter> <87wtsnxsh8.fsf@kosh.ultra.csn.tu-chemnitz.de> Message-ID: <1109956222.23585.101.camel@cutter> > > > > comps.xml isn't new technology. All the support is there. > > mach2 has already support for specifying the location of the comps.xml > file? And the comps.xml files for the buildroots exist already? yep, in yum. it just has to use groupinstall :) -sv From enrico.scholz at informatik.tu-chemnitz.de Fri Mar 4 17:37:25 2005 From: enrico.scholz at informatik.tu-chemnitz.de (Enrico Scholz) Date: Fri, 04 Mar 2005 18:37:25 +0100 Subject: Build system ideas/requirements In-Reply-To: <1109956222.23585.101.camel@cutter> (seth vidal's message of "Fri, 04 Mar 2005 12:10:22 -0500") References: <1109803094.16819.25.camel@cutter> <1109839019.5529.7.camel@otto.amantes> <1109911063.23585.33.camel@cutter> <1109945850.23220.1.camel@otto.amantes> <1109948688.23585.77.camel@cutter> <876507zb39.fsf@kosh.ultra.csn.tu-chemnitz.de> <1109950394.23585.80.camel@cutter> <871xavz9v5.fsf@kosh.ultra.csn.tu-chemnitz.de> <1109952078.23585.82.camel@cutter> <87wtsnxsh8.fsf@kosh.ultra.csn.tu-chemnitz.de> <1109956222.23585.101.camel@cutter> Message-ID: <87psyfxqga.fsf@kosh.ultra.csn.tu-chemnitz.de> skvidal at phy.duke.edu (seth vidal) writes: >> > >> > comps.xml isn't new technology. All the support is there. >> >> mach2 has already support for specifying the location of the comps.xml >> file? 
And the comps.xml files for the buildroots exist already? > > yep, in yum. > > it just has to use groupinstall :) Where can the location of the comps.xml files be specified? E.g. so that builds for FC3 use comps-A.xml while builds for devel use comps-B.xml? This works really out-of-the-box without adding code to mach2? Enrico From skvidal at phy.duke.edu Fri Mar 4 17:57:05 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Fri, 04 Mar 2005 12:57:05 -0500 Subject: Build system ideas/requirements In-Reply-To: <87psyfxqga.fsf@kosh.ultra.csn.tu-chemnitz.de> References: <1109803094.16819.25.camel@cutter> <1109839019.5529.7.camel@otto.amantes> <1109911063.23585.33.camel@cutter> <1109945850.23220.1.camel@otto.amantes> <1109948688.23585.77.camel@cutter> <876507zb39.fsf@kosh.ultra.csn.tu-chemnitz.de> <1109950394.23585.80.camel@cutter> <871xavz9v5.fsf@kosh.ultra.csn.tu-chemnitz.de> <1109952078.23585.82.camel@cutter> <87wtsnxsh8.fsf@kosh.ultra.csn.tu-chemnitz.de> <1109956222.23585.101.camel@cutter> <87psyfxqga.fsf@kosh.ultra.csn.tu-chemnitz.de> Message-ID: <1109959025.23585.122.camel@cutter> On Fri, 2005-03-04 at 18:37 +0100, Enrico Scholz wrote: > skvidal at phy.duke.edu (seth vidal) writes: > > >> > > >> > comps.xml isn't new technology. All the support is there. > >> > >> mach2 has already support for specifying the location of the comps.xml > >> file? And the comps.xml files for the buildroots exist already? > > > > yep, in yum. > > > > it just has to use groupinstall :) > > Where can the location of the comps.xml files be specified? E.g. so that > builds for FC3 use comps-A.xml while builds for devel use comps-B.xml? > This works really out-of-the-box without adding code to mach2? > in the repository metadata. you can even have a repository that has ONLY comps information. 
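To make that concrete: in createrepo-style repositories, repodata/repomd.xml indexes every metadata file, and group data appears as a data entry of type "group". A rough sketch of how a client could locate the comps file from that index — the repomd document below is hand-written, and the exact layout/namespace should be treated as illustrative rather than authoritative:

```python
# Sketch: find the group (comps) file location from repomd.xml.
import xml.etree.ElementTree as ET

NS = 'http://linux.duke.edu/metadata/repo'  # assumed repomd namespace

repomd = """<repomd xmlns="http://linux.duke.edu/metadata/repo">
  <data type="primary">
    <location href="repodata/primary.xml.gz"/>
  </data>
  <data type="group">
    <location href="repodata/comps.xml"/>
  </data>
</repomd>"""

def group_file_href(repomd_xml):
    """Return the relative path of the group file, or None if absent."""
    root = ET.fromstring(repomd_xml)
    for data in root.findall('{%s}data' % NS):
        if data.get('type') == 'group':
            return data.find('{%s}location' % NS).get('href')
    return None

print(group_file_href(repomd))  # repodata/comps.xml
```

A repository with no packages at all can still carry a repomd.xml plus a group file, which is what a comps-only "buildgroups" repository amounts to.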
-sv From skvidal at phy.duke.edu Sat Mar 5 14:38:22 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Sat, 05 Mar 2005 09:38:22 -0500 Subject: yum+mach2 for fedora-development tree pseudo-release In-Reply-To: <1109946079.23220.5.camel@otto.amantes> References: <1109631793.21503.62.camel@cutter> <1109641036.2899.43.camel@bree.local.net> <1109663105.23615.36.camel@cutter> <1109946079.23220.5.camel@otto.amantes> Message-ID: <1110033502.23585.196.camel@cutter> > There's a big conceptual problem with that approach that I still don't > have a satisfying answer for. > > Mach is meant to be run as user - I know way too little about security > to be trusted to write perfectly safe python code. That's the biggest > reason why mach-helper exists, and people tell me that this is indeed > the smartest route to take. Of course it'd be easier for me as a > programmer to just do everything in python. But if we did, then we'd > need a good way of gaining and then dropping privileges for these > operations, and I'd still feel very insecure about having written > something potentially very harmful. > > I've looked for other projects that have similar security issues, but > haven't found any of them tackling this particular problem. > Suggestions ? What about the dbus suggestion? Have the client emit dbus events and have a root-running daemon listen for them to do what they wished. That way the suid root binary doesn't need to exist unless you want the daemon to not run as root. thoughts? -sv From wtogami at redhat.com Wed Mar 9 00:38:20 2005 From: wtogami at redhat.com (Warren Togami) Date: Tue, 08 Mar 2005 14:38:20 -1000 Subject: Build system ideas/requirements In-Reply-To: <1109803094.16819.25.camel@cutter> References: <1109803094.16819.25.camel@cutter> Message-ID: <422E457C.5090604@redhat.com> seth vidal wrote: > Items for thoughts: > > 1. 
build system using comps.xml for chroot install definitions (base,
> build, minimal) - it would make sense and we could leverage the
> groupinstall/update/remove mechanism in yum.

I have no objection to yum groupinstall, but in my opinion none of the
current defined groups are suitable for a minimal buildroot. There have
been objections in the past to this with the opinion that the
"Development" tools group and -devel packages should be assumed to be
installed in this minimum buildroot. However this is a bad assumption
because the set of -devel packages has been arbitrary, and dependencies
don't make sure that particular tools exist in the buildroot.

For this reason none of the existing groups are suitable for the most
important goal of the minimum buildroot: reproducible binary payloads.

I believe the following requirements describe a minimal buildroot:
* Absolute minimum needed for rpmbuild to function.
* BuildRequires should describe explicitly what is needed beyond the
  minimum buildroot to build a reproducible binary payload.
* Must NOT include autoconf*, automake*, gettext* or libtool. While
  this seems counter-productive at first, it makes sense because
  sources theoretically shouldn't need it to build. And in cases where
  patches require them during rpmbuild, they often require an explicit
  version of auto*.
* EXCEPTIONS: Stuff like gcc or g++ are included because it would be
  silly to list them explicitly in every package. These exceptions
  should be VERY rare.

bash bzip2 coreutils cpio diffutils fedora-release gcc gcc-c++ gzip
make patch perl python rpm-build redhat-rpm-config sed tar unzip

Something like the list describes a very well tested minimum buildroot.
The dependencies pulled in by these packages form the minimum set
necessary for rpmbuild to function.
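Expressed in the flat python config format mach already uses for its package lists, that proposal might look like the sketch below. The dict key is made up for illustration; only the package names come from the list above:

```python
# Warren's proposed minimum buildroot as a mach-style package string.
# The surrounding dict mirrors the packages[...] config shown earlier
# in the thread; the key name here is hypothetical.
MINIMAL_BUILDROOT = (
    'bash bzip2 coreutils cpio diffutils fedora-release '
    'gcc gcc-c++ gzip make patch perl python rpm-build '
    'redhat-rpm-config sed tar unzip'
)

packages = {}
packages['fedora-development-i386-core'] = {
    'minimal': MINIMAL_BUILDROOT,
}

# Sanity check: the auto* tools the proposal excludes really are absent.
for banned in ('autoconf', 'automake', 'gettext', 'libtool'):
    assert banned not in MINIMAL_BUILDROOT.split()

print(len(MINIMAL_BUILDROOT.split()))  # 18
```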
We may also want to consider providing a "fake-build-provides" package in a buildroot repository that provides something like "kernel = 999", since stuff in the buildroot Requires kernel but it isn't actually needed to build stuff. Warren Togami wtogami at redhat.com From jkeating at j2solutions.net Wed Mar 9 00:55:25 2005 From: jkeating at j2solutions.net (Jesse Keating) Date: Tue, 08 Mar 2005 16:55:25 -0800 Subject: Build system ideas/requirements In-Reply-To: <422E457C.5090604@redhat.com> References: <1109803094.16819.25.camel@cutter> <422E457C.5090604@redhat.com> Message-ID: <1110329725.5489.113.camel@jkeating2.hq.pogolinux.com> On Tue, 2005-03-08 at 14:38 -1000, Warren Togami wrote: > I have no objection to yum groupinstall, but in my opinion none of > the > current defined groups are suitable for a minimal buildroot. There > have > been objections in the past to this with the opinion that the > "Development" tools group and -devel packages should be assumed to be > installed in this minimum buildroot. However this is a bad > assumption > because the set of -devel packages has been arbitrary, and > dependencies > don't make sure that particular tools exist in the buildroot. As seth has pointed out before, we don't have to use the existing comps file, we can create our own groups, but in comps format. That is the real question, adding comps format support to the build system. I agree with the rest about the minimal build environment, buildreqs for the rest. -- Jesse Keating RHCE (geek.j2solutions.net) Fedora Legacy Team (www.fedoralegacy.org) GPG Public Key (geek.j2solutions.net/jkeating.j2solutions.pub) Was I helpful? 
Let others know: http://svcs.affero.net/rm.php?r=jkeating From pmatilai at welho.com Wed Mar 9 08:38:37 2005 From: pmatilai at welho.com (Panu Matilainen) Date: Wed, 9 Mar 2005 10:38:37 +0200 (EET) Subject: Build system ideas/requirements In-Reply-To: <422E457C.5090604@redhat.com> References: <1109803094.16819.25.camel@cutter> <422E457C.5090604@redhat.com> Message-ID: On Tue, 8 Mar 2005, Warren Togami wrote: > seth vidal wrote: >> Items for thoughts: >> >> 1. build system using comps.xml for chroot install definitions (base, >> build, minimal) - it would make sense and we could leverage the >> groupinstall/update/remove mechanism in yum. > > I have no objection to yum groupinstall, but in my opinion none of the > current defined groups are suitable for a minimal buildroot. There have been > objections in the past to this with the opinion that the "Development" tools > group and -devel packages should be assumed to be installed in this minimum > buildroot. However this is a bad assumption because the set of -devel > packages has been arbitrary, and dependencies don't make sure that particular > tools exist in the buildroot. > > For this reason none of the existing groups are suitable for the most > important goal of the minimum buildroot: reproducible binary payloads. Minimal buildroot isn't necessary for reproducible builds, a *consistently* populated buildroot is. You'll get a consistent environment by dropping in Base + Devel groups with yum groupinstall even with the stock comps.xml. Absolute bare minimum buildroot is a nice bonus in a way but by no means a requirement for build system to be useful IMHO. Me thinks concentrating on getting a *working build system, now* is at this point far more important than playing "how minimal can you get"-games. Just my 5cents. 
:) - Panu - From skvidal at phy.duke.edu Fri Mar 11 09:17:12 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Fri, 11 Mar 2005 04:17:12 -0500 Subject: buildroots.xml - comps for buildroots Message-ID: <1110532632.15652.52.camel@cutter> Just some basic comps groups for the levels that mach describes currently. Thoughts? -sv -------------- next part -------------- A non-text attachment was scrubbed... Name: buildroots.xml Type: text/xml Size: 2113 bytes Desc: not available URL: From thias at spam.spam.spam.spam.spam.spam.spam.egg.and.spam.freshrpms.net Fri Mar 11 10:35:22 2005 From: thias at spam.spam.spam.spam.spam.spam.spam.egg.and.spam.freshrpms.net (Matthias Saou) Date: Fri, 11 Mar 2005 11:35:22 +0100 Subject: buildroots.xml - comps for buildroots In-Reply-To: <1110532632.15652.52.camel@cutter> References: <1110532632.15652.52.camel@cutter> Message-ID: <20050311113522.0c910e9f@python2> seth vidal wrote : > Just some basic comps groups for the levels that mach describes > currently. > > Thoughts? I'd say redhat-rpm-config is missing from build, as it won't be pulled in automatically, and without it you'll get unexpected results, like not getting debuginfo packages created and having the main binary packages unstripped... Also, we could decide as to whether we want to include gcc-c++, as it's needed by many packages, even many that have nothing to do with C++ but require it because of libtool/autotools. 
Matthias -- Clean custom Red Hat Linux rpm packages : http://freshrpms.net/ Fedora Core release 3 (Heidelberg) - Linux kernel 2.6.10-1.770_FC3 Load : 2.27 2.33 2.11 From skvidal at phy.duke.edu Fri Mar 11 13:41:58 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Fri, 11 Mar 2005 08:41:58 -0500 Subject: buildroots.xml - comps for buildroots In-Reply-To: <20050311113522.0c910e9f@python2> References: <1110532632.15652.52.camel@cutter> <20050311113522.0c910e9f@python2> Message-ID: <1110548518.15652.55.camel@cutter> > I'd say redhat-rpm-config is missing from build, as it won't be pulled in > automatically, and without it you'll get unexpected results, like not > getting debuginfo packages created and having the main binary packages > unstripped... > > Also, we could decide as to whether we want to include gcc-c++, as it's > needed by many packages, even many that have nothing to do with C++ but > require it because of libtool/autotools. > good point on redhat-rpm-config. -sv From enrico.scholz at informatik.tu-chemnitz.de Fri Mar 11 17:08:42 2005 From: enrico.scholz at informatik.tu-chemnitz.de (Enrico Scholz) Date: Fri, 11 Mar 2005 18:08:42 +0100 Subject: [Announce] Paper "Design und Implementierung eines QA- und Buildsystems" Message-ID: <87u0ni8605.fsf@kosh.ultra.csn.tu-chemnitz.de> Hello, I just want to announce the paper and the programs which I wrote as my diploma thesis. This work can be found at http://www-user.tu-chemnitz.de/~ensc/diplom/ Some cons: 1. the paper is in german; when there is a real interest, I can work on an english translation 2. this URL will be invalidated soon as I lose my webspace at the end of the month. The work is GPLed, so feel free to mirror it 3. important things like the build-agent (the component which does the build) is missing 4. it is complex (15,000 LOC) Enrico -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 480 bytes Desc: not available URL: From skvidal at phy.duke.edu Sun Mar 13 18:14:53 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Sun, 13 Mar 2005 13:14:53 -0500 Subject: minimum build environment Message-ID: <1110737693.30233.21.camel@cutter> Hi everyone, I've been building some of fedora extras for fc4t1 with mach. Something I'm encountering are A LOT of unlisted buildreqs. A lot of items needing things like tclConfig and intltool. I'm curious - what do you want to be the minimum build environment? What things should we assume. -sv From laroche at redhat.com Sun Mar 13 18:35:47 2005 From: laroche at redhat.com (Florian La Roche) Date: Sun, 13 Mar 2005 19:35:47 +0100 Subject: minimum build environment In-Reply-To: <1110737693.30233.21.camel@cutter> References: <1110737693.30233.21.camel@cutter> Message-ID: <20050313183547.GA3933@dudweiler.stuttgart.redhat.com> > I'm curious - what do you want to be the minimum build environment? What > things should we assume. Start with a bigger buildroot so that all packages compile and we have a better start to compile everything automated, then later on start trimming things down. greetings, Florian La Roche From wtogami at redhat.com Sun Mar 13 21:10:17 2005 From: wtogami at redhat.com (Warren Togami) Date: Sun, 13 Mar 2005 11:10:17 -1000 Subject: minimum build environment In-Reply-To: <20050313183547.GA3933@dudweiler.stuttgart.redhat.com> References: <1110737693.30233.21.camel@cutter> <20050313183547.GA3933@dudweiler.stuttgart.redhat.com> Message-ID: <4234AC39.8000304@redhat.com> Florian La Roche wrote: >>I'm curious - what do you want to be the minimum build environment? What >>things should we assume. > > > Start with a bigger buildroot so that all packages compile and we have a > better start to compile everything automated, then later on start trimming > things down. 
>

Agreed, for expediency just add stuff to the buildroot so we have
Extras sooner than later. And later we'll fix all the packages for
"correctness".

Warren Togami
wtogami at redhat.com

From skvidal at phy.duke.edu Mon Mar 14 07:17:27 2005
From: skvidal at phy.duke.edu (seth vidal)
Date: Mon, 14 Mar 2005 02:17:27 -0500
Subject: new mach+yum packages
Message-ID: <1110784647.28874.10.camel@cutter>

Hey folks,
I hacked some more on mach2+yum today.

Changes:
- made mach use yum/comps groups for 'minimal', 'base', 'build'
  groupinstalls. (this killed apt-get compat so I pulled the apt stuff
  out) The groups file used for these installs is here:
  http://linux.duke.edu/~skvidal/mach/i386/buildroots.xml
- made new config files for the groups functionality and for fedora
  development
- misc changes I cannot recall right now :)

Things that I've tested:
- building on fedora core 4 test1 and rawhide on x86_64
- building in a 'setarch i686' shell on x86_64 on fedora core 4 test1
  and rawhide.

You can get things here:
http://linux.duke.edu/~skvidal/mach/pkgs/

I'm also adding a small shell script I use to build packages from
fedora extras cvs.

I think these pkgs should let people build/test pkgs for extras in a
known environment. This will 'mostly' work on fc3 but not quite. I need
to backport a couple of items from yum 2.3.X to yum 2.2.X, then it will
work.

-sv

From dcbw at redhat.com Tue Mar 15 13:13:23 2005
From: dcbw at redhat.com (Dan Williams)
Date: Tue, 15 Mar 2005 08:13:23 -0500 (EST)
Subject: buildroots.xml and packages have to be in same location?
Message-ID:

Seth,

Using your mach SRPM and buildroots file, I had to actually pull down
all the packages locally and place the buildroots.xml file in the same
dir as the packages, then run createrepo on it before it would take
(obviously pointing mach to the local files as the yumsource).

It seems that (unless I'm mistaken) the buildroots.xml file needs to be
in any repo that I'd point mach to? Is that correct?
Is there no way of keeping the buildroots.xml file locally, or at some
other site, separate from the actual packages?

Anyway, once everything was in the same place it seemed to work fine.

Note: This is FC3 (well, Aurora "corona" on Sparc, which is == FC3).
mach: mach-0.4.6.1-0.fdr.0.20050314.000733
yum: yum-2.2.0-0.fc3

Dan

From skvidal at phy.duke.edu Tue Mar 15 14:27:51 2005
From: skvidal at phy.duke.edu (seth vidal)
Date: Tue, 15 Mar 2005 09:27:51 -0500
Subject: buildroots.xml and packages have to be in same location?
In-Reply-To:
References:
Message-ID: <1110896871.617.31.camel@cutter>

On Tue, 2005-03-15 at 08:13 -0500, Dan Williams wrote:
> Seth,
>
> Using your mach SRPM and buildroots file, I had to actually pull down all the
> packages locally and place the buildroots.xml file in the same dir as the
> packages, then run createrepo on it before it would take (obviously pointing
> mach to the local files as the yumsource).
>
> It seems that (unless I'm mistaken) the buildroots.xml file needs to be in any
> repo that I'd point mach to? Is that correct? Is there no way of keeping the
> buildroots.xml file locally, or at some other site, separate from the actual
> packages?

did you look at the 'buildgroups' repo I have in the fedora core
development mach dist.d file?

You can simply list them there, in a repository empty of files, and yum
pulls it in.

-sv

From dcbw at redhat.com Tue Mar 15 15:37:20 2005
From: dcbw at redhat.com (Dan Williams)
Date: Tue, 15 Mar 2005 10:37:20 -0500
Subject: buildroots.xml and packages have to be in same location?
In-Reply-To: <1110896871.617.31.camel@cutter>
References: <1110896871.617.31.camel@cutter>
Message-ID: <1110901040.13199.12.camel@dcbw.boston.redhat.com>

On Tue, 2005-03-15 at 09:27 -0500, seth vidal wrote:
> On Tue, 2005-03-15 at 08:13 -0500, Dan Williams wrote:
> > Seth,
> >
> > Using your mach SRPM and buildroots file, I had to actually pull down all the
> > packages locally and place the buildroots.xml file in the same dir as the
> > packages, then run createrepo on it before it would take (obviously pointing
> > mach to the local files as the yumsource).
> >
> > It seems that (unless I'm mistaken) the buildroots.xml file needs to be in any
> > repo that I'd point mach to? Is that correct? Is there no way of keeping the
> > buildroots.xml file locally, or at some other site, separate from the actual
> > packages?
>
> did you look at the 'buildgroups' repo I have in the fedora core
> development mach dist.d file?
>
> You can simply list them there, in a repository empty of files, and yum
> pulls it in.

I tried doing exactly that:

(from dist.d/aurora-2-sparc)
yumsources['aurora-2-sparc'] = {
    'core': 'rpm ' + aurora + ' / core',
    'buildgroups': 'rpm ' + buildgroups + ' /i386/ groups',
}

(from location)
# Fedora Core; this location should contain versioned directories
aurora = 'http://download.wpi.edu/pub/linux/distributions/aurora/corona/sparc/os/Fedora/RPMS'

# build groups
buildgroups = 'http://linux.duke.edu/~skvidal/mach/'

but this results in the following error:

Preparing root
Installing group 'minimal' ...!
error: /usr/sbin/mach-helper yum --installroot /build/lib/mach/roots/aurora-2-sparc-core -c /build/lib/mach/states/aurora-2-sparc-core/yum.conf groupinstall build-minimal failed.
Setting up Group Process
Setting up Repos
core      100% |=========================|   903 B    00:00
Error: No Groups on which to run command

ERROR: Could not get build-minimal

The yum.repo file that the config file points to looks like this:
[core]
name=core
baseurl=http://download.wpi.edu/pub/linux/distributions/aurora/corona/sparc/os/Fedora/RPMS//
enabled=1
gpgcheck=0

How is yum going to find the group information at Duke if it's not
anywhere in the yum.repo file?

Dan

From skvidal at phy.duke.edu Tue Mar 15 15:44:42 2005
From: skvidal at phy.duke.edu (seth vidal)
Date: Tue, 15 Mar 2005 10:44:42 -0500
Subject: buildroots.xml and packages have to be in same location?
In-Reply-To: <1110901040.13199.12.camel@dcbw.boston.redhat.com>
References: <1110896871.617.31.camel@cutter> <1110901040.13199.12.camel@dcbw.boston.redhat.com>
Message-ID: <1110901482.617.42.camel@cutter>

On Tue, 2005-03-15 at 10:37 -0500, Dan Williams wrote:
> On Tue, 2005-03-15 at 09:27 -0500, seth vidal wrote:
> > On Tue, 2005-03-15 at 08:13 -0500, Dan Williams wrote:
> > > Seth,
> > >
> > > Using your mach SRPM and buildroots file, I had to actually pull down all the
> > > packages locally and place the buildroots.xml file in the same dir as the
> > > packages, then run createrepo on it before it would take (obviously pointing
> > > mach to the local files as the yumsource).
> > >
> > > It seems that (unless I'm mistaken) the buildroots.xml file needs to be in any
> > > repo that I'd point mach to? Is that correct? Is there no way of keeping the
> > > buildroots.xml file locally, or at some other site, separate from the actual
> > > packages?
> >
> > did you look at the 'buildgroups' repo I have in the fedora core
> > development mach dist.d file?
> >
> > You can simply list them there, in a repository empty of files, and yum
> > pulls it in.
> > I tried doing exactly that: > > (from dist.d/aurora-2-sparc) > yumsources['aurora-2-sparc'] = { > 'core': 'rpm ' + aurora + ' / core', > 'buildgroups': 'rpm ' + buildgroups + ' /i386/ groups', > } > > (from location) > # Fedora Core; this location should contain versioned directions > aurora = 'http://download.wpi.edu/pub/linux/distributions/aurora/corona/sparc/os/Fedora/RPMS' > > # build groups > buildgroups = 'http://linux.duke.edu/~skvidal/mach/' > > > but this results in the following error: > > Preparing root > Installing group 'minimal' ...! > error: /usr/sbin/mach-helper yum --installroot /build/lib/mach/roots/aurora-2-sparc-core -c /build/lib/mach/states/aurora-2-sparc-core/yum.conf groupinstall build-minimal failed. > Setting up Group Process > Setting up Repos > core 100% |=========================| 903 B 00:00 > Error: No Groups on which to run command > > ERROR: Could not get build-minimal > > The yum.repo file that the config file points to looks like this: > [core] > name=core > baseurl=http://download.wpi.edu/pub/linux/distributions/aurora/corona/sparc/os/Fedora/RPMS// > enabled=1 > gpgcheck=0 > > How is yum going to find the group information at Duke if its not anywhere in the yum.repo file? let me see the rest of your dist.d file. I think you left out a section. -sv From dcbw at redhat.com Tue Mar 15 15:46:21 2005 From: dcbw at redhat.com (Dan Williams) Date: Tue, 15 Mar 2005 10:46:21 -0500 Subject: buildroots.xml and packages have to be in same location? In-Reply-To: <1110901482.617.42.camel@cutter> References: <1110896871.617.31.camel@cutter> <1110901040.13199.12.camel@dcbw.boston.redhat.com> <1110901482.617.42.camel@cutter> Message-ID: <1110901581.13199.15.camel@dcbw.boston.redhat.com> On Tue, 2005-03-15 at 10:44 -0500, seth vidal wrote: > let me see the rest of your dist.d file. I think you left out a section. 
# mach dist configuration -*- python -*- # aurora-2-sparc: configuration for Aurora 2 # yum sources layout: # 'key': 'rpm ' + locationkey + ' path/to/repo reponame' yumsources['aurora-2-sparc'] = { 'core': 'rpm ' + aurora + ' /sparc/os/Fedora/RPMS/ core', 'buildgroups': 'rpm ' + buildgroups + ' /i386/ groups', } # Aurora Development groups['aurora-2-sparc-core'] = { 'minimal': 'build-minimal', 'base': 'build-base', 'build': 'build', } # Aurora 2 Core packages['aurora-2-sparc-core'] = { 'dir': 'aurora-2-sparc', } sourceslist['aurora-2-sparc-core'] = { 'aurora-2-sparc': ('core', ) } # Aurora2 roots should use runuser instead of su config['aurora-2-sparc-core'] = {'runuser': '/sbin/runuser'} Dan From skvidal at phy.duke.edu Tue Mar 15 15:54:55 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Tue, 15 Mar 2005 10:54:55 -0500 Subject: buildroots.xml and packages have to be in same location? In-Reply-To: <1110901581.13199.15.camel@dcbw.boston.redhat.com> References: <1110896871.617.31.camel@cutter> <1110901040.13199.12.camel@dcbw.boston.redhat.com> <1110901482.617.42.camel@cutter> <1110901581.13199.15.camel@dcbw.boston.redhat.com> Message-ID: <1110902095.617.45.camel@cutter> On Tue, 2005-03-15 at 10:46 -0500, Dan Williams wrote: > On Tue, 2005-03-15 at 10:44 -0500, seth vidal wrote: > > let me see the rest of your dist.d file. I think you left out a section. 
> > # mach dist configuration -*- python -*- > > # aurora-2-sparc: configuration for Aurora 2 > > # yum sources layout: > # 'key': 'rpm ' + locationkey + ' path/to/repo reponame' > > yumsources['aurora-2-sparc'] = { > 'core': 'rpm ' + aurora + ' /sparc/os/Fedora/RPMS/ core', > 'buildgroups': 'rpm ' + buildgroups + ' /i386/ groups', > } > > # Aurora Development > groups['aurora-2-sparc-core'] = { > 'minimal': 'build-minimal', > 'base': 'build-base', > 'build': 'build', > } > > # Aurora 2 Core > packages['aurora-2-sparc-core'] = { > 'dir': 'aurora-2-sparc', > } > > sourceslist['aurora-2-sparc-core'] = { > 'aurora-2-sparc': ('core', ) > } > In your sourceslist for 'aurora-2-sparc-core' you're never specifying 'buildgroups'. so the above needs to read like: sourceslist['aurora-2-sparc-core'] = { 'aurora-2-sparc': ('core', 'buildgroups', ) } -sv From dcbw at redhat.com Tue Mar 15 15:51:04 2005 From: dcbw at redhat.com (Dan Williams) Date: Tue, 15 Mar 2005 10:51:04 -0500 Subject: buildroots.xml and packages have to be in same location? In-Reply-To: <1110901581.13199.15.camel@dcbw.boston.redhat.com> References: <1110896871.617.31.camel@cutter> <1110901040.13199.12.camel@dcbw.boston.redhat.com> <1110901482.617.42.camel@cutter> <1110901581.13199.15.camel@dcbw.boston.redhat.com> Message-ID: <1110901864.13199.17.camel@dcbw.boston.redhat.com> On Tue, 2005-03-15 at 10:46 -0500, Dan Williams wrote: > On Tue, 2005-03-15 at 10:44 -0500, seth vidal wrote: > > let me see the rest of your dist.d file. I think you left out a section. > sourceslist['aurora-2-sparc-core'] = { > 'aurora-2-sparc': ('core', ) ^^^ Adding "buildgroups" here doesn't work either Dan From skvidal at phy.duke.edu Tue Mar 15 15:56:45 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Tue, 15 Mar 2005 10:56:45 -0500 Subject: buildroots.xml and packages have to be in same location? 
In-Reply-To: <1110901864.13199.17.camel@dcbw.boston.redhat.com> References: <1110896871.617.31.camel@cutter> <1110901040.13199.12.camel@dcbw.boston.redhat.com> <1110901482.617.42.camel@cutter> <1110901581.13199.15.camel@dcbw.boston.redhat.com> <1110901864.13199.17.camel@dcbw.boston.redhat.com> Message-ID: <1110902205.617.47.camel@cutter> On Tue, 2005-03-15 at 10:51 -0500, Dan Williams wrote: > On Tue, 2005-03-15 at 10:46 -0500, Dan Williams wrote: > > On Tue, 2005-03-15 at 10:44 -0500, seth vidal wrote: > > > let me see the rest of your dist.d file. I think you left out a section. > > sourceslist['aurora-2-sparc-core'] = { > > 'aurora-2-sparc': ('core', ) > ^^^ Adding "buildgroups" here doesn't work either > > odd, works for me for i386 and x86_64. you did a mach clean after adding that, right? -sv From dcbw at redhat.com Tue Mar 15 16:51:41 2005 From: dcbw at redhat.com (Dan Williams) Date: Tue, 15 Mar 2005 11:51:41 -0500 (EST) Subject: buildroots.xml and packages have to be in same location? In-Reply-To: <1110902205.617.47.camel@cutter> References: <1110896871.617.31.camel@cutter> <1110901040.13199.12.camel@dcbw.boston.redhat.com> <1110901482.617.42.camel@cutter> <1110901581.13199.15.camel@dcbw.boston.redhat.com> <1110901864.13199.17.camel@dcbw.boston.redhat.com> <1110902205.617.47.camel@cutter> Message-ID: On Tue, 15 Mar 2005, seth vidal wrote: > On Tue, 2005-03-15 at 10:51 -0500, Dan Williams wrote: > > On Tue, 2005-03-15 at 10:46 -0500, Dan Williams wrote: > > > On Tue, 2005-03-15 at 10:44 -0500, seth vidal wrote: > > > > let me see the rest of your dist.d file. I think you left out a section. > > > sourceslist['aurora-2-sparc-core'] = { > > > 'aurora-2-sparc': ('core', ) > > ^^^ Adding "buildgroups" here doesn't work either > > > > > > odd, works for me for i386 and x86_64. > > you did a mach clean after adding that, right? Yeah... It appears that mach is not adding the [buildgroups] repo info to yum.repo. Once I add a section for that, it appears to work. 
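For illustration, the kind of stanza Dan describes adding by hand might look like this — a sketch only; the exact section name and whether mach should generate it automatically are assumptions, and the baseurl is the Duke buildgroups location from the dist.d file:

```ini
# Hypothetical [buildgroups] section for the generated yum.repo,
# pointing yum at the repository that carries only the group
# metadata (buildroots.xml), not any packages.
[buildgroups]
name=buildgroups
baseurl=http://linux.duke.edu/~skvidal/mach/i386/
enabled=1
gpgcheck=0
```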
I'll have to debug and find out what's going on. Dan From skvidal at phy.duke.edu Tue Mar 15 16:58:16 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Tue, 15 Mar 2005 11:58:16 -0500 Subject: buildroots.xml and packages have to be in same location? In-Reply-To: References: <1110896871.617.31.camel@cutter> <1110901040.13199.12.camel@dcbw.boston.redhat.com> <1110901482.617.42.camel@cutter> <1110901581.13199.15.camel@dcbw.boston.redhat.com> <1110901864.13199.17.camel@dcbw.boston.redhat.com> <1110902205.617.47.camel@cutter> Message-ID: <1110905896.617.57.camel@cutter> > > odd, works for me for i386 and x86_64. > > > > you did a mach clean after adding that, right? > > Yeah... It appears that mach is not adding the [buildgroups] repo info to > yum.repo. Once I add a section for that, it appears to work. I'll have to > debug and find out what's going on. > Can you post your dist.d config file or a link to it - i'd like to see if there's anything missing. -sv From dcbw at redhat.com Tue Mar 15 17:24:24 2005 From: dcbw at redhat.com (Dan Williams) Date: Tue, 15 Mar 2005 12:24:24 -0500 (EST) Subject: buildroots.xml and packages have to be in same location? In-Reply-To: <1110905896.617.57.camel@cutter> References: <1110896871.617.31.camel@cutter> <1110901040.13199.12.camel@dcbw.boston.redhat.com> <1110901482.617.42.camel@cutter> <1110901581.13199.15.camel@dcbw.boston.redhat.com> <1110901864.13199.17.camel@dcbw.boston.redhat.com> <1110902205.617.47.camel@cutter> <1110905896.617.57.camel@cutter> Message-ID: On Tue, 15 Mar 2005, seth vidal wrote: > > > odd, works for me for i386 and x86_64. > > > > > > you did a mach clean after adding that, right? > > > > Yeah... It appears that mach is not adding the [buildgroups] repo info to > > yum.repo. Once I add a section for that, it appears to work. I'll have to > > debug and find out what's going on. > > > > Can you post your dist.d config file or a link to it - i'd like to see > if there's anything missing. 
The last message I sent with the dist.d file in it should be exactly what you're asking for, right? It hasn't really changed... Dan From skvidal at phy.duke.edu Tue Mar 15 17:38:39 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Tue, 15 Mar 2005 12:38:39 -0500 Subject: buildroots.xml and packages have to be in same location? In-Reply-To: References: <1110896871.617.31.camel@cutter> <1110901040.13199.12.camel@dcbw.boston.redhat.com> <1110901482.617.42.camel@cutter> <1110901581.13199.15.camel@dcbw.boston.redhat.com> <1110901864.13199.17.camel@dcbw.boston.redhat.com> <1110902205.617.47.camel@cutter> <1110905896.617.57.camel@cutter> Message-ID: <1110908319.617.71.camel@cutter> > The last message I sent with the dist.d file in it should be exactly what you're > asking for, right? It hasn't really changed... > except for the addition of 'buildgroups' to the sourceslist for your root definition, right? -sv From dcbw at redhat.com Tue Mar 15 17:49:58 2005 From: dcbw at redhat.com (Dan Williams) Date: Tue, 15 Mar 2005 12:49:58 -0500 Subject: Fixed: Re: buildroots.xml and packages have to be in same location? In-Reply-To: <1110908319.617.71.camel@cutter> References: <1110896871.617.31.camel@cutter> <1110901040.13199.12.camel@dcbw.boston.redhat.com> <1110901482.617.42.camel@cutter> <1110901581.13199.15.camel@dcbw.boston.redhat.com> <1110901864.13199.17.camel@dcbw.boston.redhat.com> <1110902205.617.47.camel@cutter> <1110905896.617.57.camel@cutter> <1110908319.617.71.camel@cutter> Message-ID: <1110908998.13199.24.camel@dcbw.boston.redhat.com> On Tue, 2005-03-15 at 12:38 -0500, seth vidal wrote: > > The last message I sent with the dist.d file in it should be exactly what you're > > asking for, right? It hasn't really changed... > > > > except for the addition of 'buildgroups' to the sourceslist for your > root definition, right? 
Moral of the story: Don't have both dist.d/aurora-2-sparc and dist.d/aurora-2-sparc.good. When you make changes to the first, mach doesn't pick them up, because it seems to be reading from the second, or at least mashing the two together. Dan From sopwith at redhat.com Tue Mar 15 22:24:26 2005 From: sopwith at redhat.com (Elliot Lee) Date: Tue, 15 Mar 2005 17:24:26 -0500 (EST) Subject: Build system ideas/requirements In-Reply-To: References: <1109803094.16819.25.camel@cutter> <422E457C.5090604@redhat.com> Message-ID: On Wed, 9 Mar 2005, Panu Matilainen wrote: > Minimal buildroot isn't necessary for reproducible builds, a > *consistently* populated buildroot is. You'll get a consistent environment > by dropping in Base + Devel groups with yum groupinstall even with the > stock comps.xml. Providing consistent buildroots actually works against reproducible builds in the long term, because of the effect those buildroots have on the way people choose to package things. For maximum quality control, packages should not be affected by having an unrelated (non-BuildRequires and non-base) package installed in the buildroot. If package X is unrelated to the ongoing build of package Y, then package Y's build should not be affected by the absence OR presence of package X in the buildroot. The root cause of the problem here is not really having consistent buildroots, but having improper packaging that doesn't account for all possible variables. One thing we have internally at Red Hat is a mass rebuild system that creates a buildroot with all packages installed, attempts rebuilds of all packages, and for the builds that succeed, it compares the resulting binary packages against the original ones to see if things like filelist or dependencies have changed. It'd be nice to get the equivalent of that for Fedora. 
Best, -- Elliot From skvidal at phy.duke.edu Tue Mar 15 22:31:13 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Tue, 15 Mar 2005 17:31:13 -0500 Subject: Build system ideas/requirements In-Reply-To: References: <1109803094.16819.25.camel@cutter> <422E457C.5090604@redhat.com> Message-ID: <1110925873.617.96.camel@cutter> > The root cause of the problem here is not really having consistent > buildroots, but having improper packaging that doesn't account for all > possible variables. One thing we have internally at Red Hat is a mass > rebuild system that creates a buildroot with all packages installed, > attempts rebuilds of all packages, and for the builds that succeed, it > compares the resulting binary packages against the original ones to see if > things like filelist or dependencies have changed. It'd be nice to get > the equivalent of that for Fedora. ALL packages installed becomes a bit more complex when you think about fedora extras. ALL could become several thousand packages. rpm comparison scripts for file list and dependencies abound from the rhel rebuild projects. We can probably just snag one of those. -sv From sopwith at redhat.com Wed Mar 16 00:25:57 2005 From: sopwith at redhat.com (Elliot Lee) Date: Tue, 15 Mar 2005 19:25:57 -0500 (EST) Subject: Build system ideas/requirements In-Reply-To: <1110925873.617.96.camel@cutter> References: <1109803094.16819.25.camel@cutter> <422E457C.5090604@redhat.com> <1110925873.617.96.camel@cutter> Message-ID: On Tue, 15 Mar 2005, seth vidal wrote: > ALL packages installed becomes a bit more complex when you think about > fedora extras. ALL could become several thousand packages. A random sampling should be sufficient. 
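The filelist/dependency comparison being discussed can be sketched roughly like this — a hypothetical helper, not one of the existing rhel-rebuild scripts. In practice the input lists would come from `rpm -qlp` (file list) and `rpm -qRp` (requires) run against the old and new binary packages:

```python
# Sketch of an rpmdiff-style check: compare the file lists and
# dependencies of an old and a new build of the same package.

def compare_builds(old, new):
    """Return what was added/removed between two builds.

    old/new are dicts with 'files' and 'requires' lists, as captured
    from rpm query output.
    """
    report = {}
    for key in ("files", "requires"):
        before, after = set(old[key]), set(new[key])
        report[key] = {
            "added": sorted(after - before),
            "removed": sorted(before - after),
        }
    return report

# Hypothetical example data: the rebuilt package picked up an extra
# library dependency because something unrelated was in the buildroot.
old_build = {
    "files": ["/usr/bin/foo", "/usr/share/man/man1/foo.1.gz"],
    "requires": ["libc.so.6"],
}
new_build = {
    "files": ["/usr/bin/foo", "/usr/share/man/man1/foo.1.gz"],
    "requires": ["libc.so.6", "libselinux.so.1"],
}

diff = compare_builds(old_build, new_build)
```

A build farm would flag any package whose report contains a non-empty "added" or "removed" list for review.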
Cheers, -- Elliot From laroche at redhat.com Wed Mar 16 07:45:28 2005 From: laroche at redhat.com (Florian La Roche) Date: Wed, 16 Mar 2005 08:45:28 +0100 Subject: Build system ideas/requirements In-Reply-To: References: <1109803094.16819.25.camel@cutter> <422E457C.5090604@redhat.com> <1110925873.617.96.camel@cutter> Message-ID: <20050316074528.GC5033@dudweiler.stuttgart.redhat.com> On Tue, Mar 15, 2005 at 07:25:57PM -0500, Elliot Lee wrote: > On Tue, 15 Mar 2005, seth vidal wrote: > > > ALL packages installed becomes a bit more complex when you think about > > fedora extras. ALL could become several thousand packages. > > A random sampling should be sufficient. And it is enough to make this test only once per quarter as a special check, not needed that often anyway... greetings, Florian La Roche From skvidal at phy.duke.edu Wed Mar 16 08:04:55 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Wed, 16 Mar 2005 03:04:55 -0500 Subject: mach+yum update and yum 2.2.1 Message-ID: <1110960295.617.127.camel@cutter> Hi Everyone, I've got a couple of items for you here: new mach packages here: http://linux.duke.edu/~skvidal/mach/pkgs/ - just a tar ball and src.rpm - you'll need to build it for whatever platform you're on. Changes: - use yum to clean out buildroots - minor tidying efforts - output some more data about what happens in the 'removing packages' section. I've also uploaded a yum 2.2.1 package to: http://linux.duke.edu/yum/download/2.2/ I've not announced that to the yum mailing list yet but I will do so tomorrow when I'm not so tired. This is useful for you folks running fc3, rhel3 or rhel4 and wanting to use these packages. Changes useful for mach: - lock for yum run only in installroot, not in host root - install-by-dependency I know these work on x86 and x86_64. They _should_ work on ppc and sparc, but I can't test those platforms. I know these work on fc3, rawhide and rhel4, but I've done no testing on rhel3. 
I can also build rawhide/fc4t1 chroots and packages on fc3 w/o any rpmdb problems, provided that I do not run rpm or yum from INSIDE the chroot, only from the outside. Let me know if these work for you or, more importantly, if they don't. Thanks, -sv From dcbw at redhat.com Wed Mar 16 14:42:04 2005 From: dcbw at redhat.com (Dan Williams) Date: Wed, 16 Mar 2005 09:42:04 -0500 Subject: mach+yum update and yum 2.2.1 In-Reply-To: <1110960295.617.127.camel@cutter> References: <1110960295.617.127.camel@cutter> Message-ID: <1110984124.26477.6.camel@dcbw.boston.redhat.com> On Wed, 2005-03-16 at 03:04 -0500, seth vidal wrote: > I know these work on x86 and x86_64. They _should_ work on ppc and > sparc, but I can't test those platforms. I'll run a quick test on Sparc. Dan From dcbw at redhat.com Wed Mar 16 15:04:21 2005 From: dcbw at redhat.com (Dan Williams) Date: Wed, 16 Mar 2005 10:04:21 -0500 Subject: mach+yum update and yum 2.2.1 In-Reply-To: <1110960295.617.127.camel@cutter> References: <1110960295.617.127.camel@cutter> Message-ID: <1110985461.26477.17.camel@dcbw.boston.redhat.com> On Wed, 2005-03-16 at 03:04 -0500, seth vidal wrote: > Hi Everyone, > I've got a couple of items for you here: > > new mach packages here: > http://linux.duke.edu/~skvidal/mach/pkgs/ > - just a tar ball and src.rpm - you'll need to build it for whatever > platform you're on. Still doesn't BuildRequires: libselinux-devel. 
Dan From skvidal at phy.duke.edu Wed Mar 16 15:13:45 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Wed, 16 Mar 2005 10:13:45 -0500 Subject: mach+yum update and yum 2.2.1 In-Reply-To: <1110985461.26477.17.camel@dcbw.boston.redhat.com> References: <1110960295.617.127.camel@cutter> <1110985461.26477.17.camel@dcbw.boston.redhat.com> Message-ID: <1110986025.617.152.camel@cutter> On Wed, 2005-03-16 at 10:04 -0500, Dan Williams wrote: > On Wed, 2005-03-16 at 03:04 -0500, seth vidal wrote: > > Hi Everyone, > > I've got a couple of items for you here: > > > > new mach packages here: > > http://linux.duke.edu/~skvidal/mach/pkgs/ > > - just a tar ball and src.rpm - you'll need to build it for whatever > > platform you're on. > > Still doesn't BuildRequires: libselinux-devel. > Thorsten told me that, too. I fixed it last night before I went to bed but after I released the pkgs. thanks, -sv From mharris at redhat.com Wed Mar 16 16:30:14 2005 From: mharris at redhat.com (Mike A. Harris) Date: Wed, 16 Mar 2005 11:30:14 -0500 (EST) Subject: Build system ideas/requirements In-Reply-To: References: <1109803094.16819.25.camel@cutter> <422E457C.5090604@redhat.com> Message-ID: On Tue, 15 Mar 2005, Elliot Lee wrote: >> Minimal buildroot isn't necessary for reproducible builds, a >> *consistently* populated buildroot is. You'll get a consistent environment >> by dropping in Base + Devel groups with yum groupinstall even with the >> stock comps.xml. > >Providing consistent buildroots actually works against reproducible builds >in the long term, because of the effect those buildroots have on the way >people choose to package things. I agree... in theory... >For maximum quality control, packages should not be affected by having an >unrelated (non-BuildRequires and non-base) package installed in the >buildroot. If package X is is unrelated to the ongoing build of package Y, >then package Y's build should not be affected by the absence OR presence >of package X in the buildroot. 
In the ideal world, yes. In the practical world, a very large amount of software does ./configure-time autodetection of various libraries and other software which may or may not be present in a buildroot, much of it being conditionally enabled/disabled based on the presence or absence of the available libs/etc. This autodetection is good for Joe Blow downloading something and compiling/installing by hand into /usr/local, but it kind of works against rpm-based builds in the way of reproducibility. This puts a large part of the reproducibility factor squarely in the hands of the package maintainer. In order to get a reasonably good chance of having every rpm rebuild exactly the same regardless of what deps are present or absent in the buildroot, all package maintainers need to become much more intimately involved with the rpms they maintain. This would require deeply inspecting all ./configure options with each release of the software, being more involved with the underlying projects in question, and very closely analyzing the output of ./configure to determine whether there are any changes from upstream version to version and build to build. While it could be argued "this is already the packager's responsibility", in reality it does not work well, and it isn't likely to ever work well as long as it is not automated in some fashion. Relying on humans to do all of this: 1) Puts a lot of extra burden on humans, who are already greatly overburdened. 2) Makes the single point of failure be the human. Very bad idea. Not scalable. Humans make mistakes. Computers do not. The most scalable systems are those which are as completely automated as possible, requiring as little human intervention as possible. So my suggestion to those seeking a solution to this problem is to look at how it can be eliminated or reduced through software automation. rpmdiff is an example of creative use of automation. 
Perhaps someone can brainstorm an automation tool that could be plugged into rpm or beehive or mach, etc. >The root cause of the problem here is not really having consistent >buildroots, but having improper packaging that doesn't account for all >possible variables. Yep. >One thing we have internally at Red Hat is a mass >rebuild system that creates a buildroot with all packages installed, >attempts rebuilds of all packages, and for the builds that succeed, it >compares the resulting binary packages against the original ones to see if >things like filelist or dependencies have changed. It'd be nice to get >the equivalent of that for Fedora. If someone were to develop a tool that compared consecutive ./configure runs and reported major differences, that'd be cool. I don't know how difficult that'd be though. I suspect if it were easy someone might have done it by now, but who knows. ;o) HTH -- Mike A. Harris, Systems Engineer - X11 Development team, Red Hat Canada, Ltd. IT executives rate Red Hat #1 for value: http://www.redhat.com/promo/vendor From laroche at redhat.com Wed Mar 16 16:46:28 2005 From: laroche at redhat.com (Florian La Roche) Date: Wed, 16 Mar 2005 17:46:28 +0100 Subject: Build system ideas/requirements In-Reply-To: References: <1109803094.16819.25.camel@cutter> <422E457C.5090604@redhat.com> Message-ID: <20050316164628.GC9768@dudweiler.stuttgart.redhat.com> > If someone were to develop a tool that compared consecutive > ./configure runs and reported major differences, that'd be cool. > I don't know how difficult that'd be though. I suspect if it > were easy someone might have done it by now, but who knows. ;o) The log files might help with some of this. E.g. looking at changes between different archs or if updating glibc could sometimes help. 
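As a rough illustration of the configure-comparison idea floated above (the variable syntax here is deliberately simplified; real config.status formats vary between autoconf versions, so treat this as a sketch, not a working tool):

```python
# Sketch: compare two consecutive ./configure runs by diffing the
# NAME='value' assignments recorded in their config.status files,
# and report any variables that appeared, vanished, or changed.

import re

_VAR_RE = re.compile(r"^([A-Za-z_][A-Za-z0-9_]*)='(.*)'$")

def parse_vars(text):
    """Extract NAME='value' assignments from config.status-like text."""
    found = {}
    for line in text.splitlines():
        m = _VAR_RE.match(line.strip())
        if m:
            found[m.group(1)] = m.group(2)
    return found

def changed_vars(old_text, new_text):
    """Map each changed variable name to its (old, new) values."""
    old, new = parse_vars(old_text), parse_vars(new_text)
    return {
        name: (old.get(name), new.get(name))
        for name in sorted(set(old) | set(new))
        if old.get(name) != new.get(name)
    }

# Hypothetical example: a rebuild silently picked up selinux support
# because a devel package happened to be in the buildroot.
old_run = "HAVE_SELINUX='no'\nCFLAGS='-O2'\n"
new_run = "HAVE_SELINUX='yes'\nCFLAGS='-O2'\n"
delta = changed_vars(old_run, new_run)
```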
greetings, Florian La Roche From enrico.scholz at informatik.tu-chemnitz.de Wed Mar 16 19:16:46 2005 From: enrico.scholz at informatik.tu-chemnitz.de (Enrico Scholz) Date: Wed, 16 Mar 2005 20:16:46 +0100 Subject: Build system ideas/requirements In-Reply-To: (Mike A. Harris's message of "Wed, 16 Mar 2005 11:30:14 -0500 (EST)") References: <1109803094.16819.25.camel@cutter> <422E457C.5090604@redhat.com> Message-ID: <87oedj4d0h.fsf@kosh.ultra.csn.tu-chemnitz.de> mharris at redhat.com ("Mike A. Harris") writes: >>One thing we have internally at Red Hat is a mass >>rebuild system that creates a buildroot with all packages installed, >>attempts rebuilds of all packages, and for the builds that succeed, it >>compares the resulting binary packages against the original ones to see if >>things like filelist or dependencies have changed. It'd be nice to get >>the equivalent of that for Fedora. > > If someone were to develop a tool that compared consecutive > ./configure runs and reported major differences, That's not a problem... just run a diff across the 'config.status' files. But what about packages which do not use ./configure but have other kinds of build-time feature checks? Comparing the resulting packages seems to be a more universal way of detecting missing BuildRequires:. Enrico From skvidal at phy.duke.edu Thu Mar 17 08:09:45 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Thu, 17 Mar 2005 03:09:45 -0500 Subject: more mach+yum stuff Message-ID: <1111046985.5267.54.camel@cutter> Hi folks, I added the libselinux-devel buildreq into the spec file and I fixed a couple of problems in the default config and dist.d files. Right now I'm using mach to build the packages coming out for fedora extras 3 and fedora extras development. 
I'm building them all on an fc3 x86_64 machine using 'setarch i686' for the i386 builds. Things appear pretty normal, so far :) pkgs are here: http://linux.duke.edu/~skvidal/mach/pkgs/ I've noticed one bug that I'm going to see about fixing - if the .spec file is set with mode 600 or 400, then mach will traceback b/c it won't be able to read the extracted spec from /tmp of the chroot. The easy fix is to just chmod the spec file so it's readable by everyone; as soon as I figure out where this is happening I'll fix it and put a new pkg up. Also at the above URL, I've included the incredibly simple shell scripts I use for starting builds and maintaining the extras trees. They're really not complex but maybe worth looking at, I think. Modifying those to add an arch like ppc/ppc64 should be extremely trivial :) If things work like we hope then I'm probably going to recommend that packagers for fedora extras start running their packages through mach before they request a build, just to verify that the package will build at all. Thanks! -sv From skvidal at phy.duke.edu Fri Mar 18 06:52:32 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Fri, 18 Mar 2005 01:52:32 -0500 Subject: build system glue scripts and requirements Message-ID: <1111128752.30200.39.camel@cutter> hi folks, So right now I think things with mach and yum are working for building fedora extras. The packages _seem_ like they're coming out right and things seem functional enough. The second part that I need help and input on is the glue scripts and requirements for having automatic triggering of builds for packagers. The questions I have: 1. if this is meant to run on the red hat boxes in the PHX coloc, what does that infrastructure look like? What features does it have? Can we assume all the build boxes have access to the cvs tree? Do we need to worry about pushing srpms around? 2. How do folks want packagers to send notices about builds? Just a cvs tag? A webpage? 
A gpg-signed email with specific content? A custom xmpp/jabber-client to send a custom message to a listening build client across an xmpp infrastructure? :) 3. What things am I missing or not understanding about what is needed from the build system? The requirements I've been working under are/were: - self hosting on Fedora Core - not crazy What else do I need to think about? 4. Who else is interested in working on this and getting things progressing more? The yum changes to mach are just a hackjob to get a problem solved for the short term. However, I'd like to continue down this general line of development. so Where do we go from here? Thanks, -sv From thias at spam.spam.spam.spam.spam.spam.spam.egg.and.spam.freshrpms.net Fri Mar 18 08:50:02 2005 From: thias at spam.spam.spam.spam.spam.spam.spam.egg.and.spam.freshrpms.net (Matthias Saou) Date: Fri, 18 Mar 2005 09:50:02 +0100 Subject: build system glue scripts and requirements In-Reply-To: <1111128752.30200.39.camel@cutter> References: <1111128752.30200.39.camel@cutter> Message-ID: <20050318095002.40c3ebde@python2> seth vidal wrote : > 1. if this is meant to run on the red hat boxes in the PHX coloc, what > does that infrastructure look like? What features does it have? Can we > assume all the build boxes have access to the cvs tree? Do we need to > worry about pushing srpms around? I'd guess that the lookaside cache is there exactly to avoid any srpm upload, and use it + the CVS instead. > 2. How do folks want packagers to send notices about builds? Just a > cvs tag? A webpage? A gpg-signed email with specific content? A custom > xmpp/jabber-client to send a custom message to a listening build client > across an xmpp infrastructure? :) Yeah, a "jabber bot"! :-D For me, the easiest would be to trigger a build directly from a form on a web page. Ideally, be able to track the build from the same page too : Check status, view full build log once finished etc. 
Sounds easy, but most certainly isn't, especially if we plan on supporting other archs than i386/x86_64 since it'll mean de-centralized builds. > 3. What things am I missing or not understanding about what is needed > from the build system? The requirements I've been working under > are/were: > - self hosting on Fedora Core > - not crazy > What else do I need to think about? Depending on the answers to the questions above, all the flow between how/when a packager requests a build, and how/when the packages appear in the ftp tree will need to be thought out. Hmm, "not crazy", you say? :-) > 4. Who else is interested in working on this and getting things > progressing more? The yum changes to mach are just a hackjob to get a > problem solved for the short term. However, I'd like to continue down > this general line of development. so Where do we go from > here? I still haven't had time to try out your modified mach2, but definitely want to, and want to adopt it since it'll solve all the small annoyances related to using apt (requiring the metadata, having all cached file names mangled). Keep up the good work Seth, it's much appreciated! As for the exact direction to take, it'll depend on RH official answers, what can and cannot be done with the build server(s). Matthias -- Clean custom Red Hat Linux rpm packages : http://freshrpms.net/ Fedora Core release 3 (Heidelberg) - Linux kernel 2.6.10-1.770_FC3 Load : 0.01 0.18 0.61 From ville.skytta at iki.fi Sun Mar 20 10:49:00 2005 From: ville.skytta at iki.fi (Ville =?ISO-8859-1?Q?Skytt=E4?=) Date: Sun, 20 Mar 2005 12:49:00 +0200 Subject: build system glue scripts and requirements In-Reply-To: <20050318095002.40c3ebde@python2> References: <1111128752.30200.39.camel@cutter> <20050318095002.40c3ebde@python2> Message-ID: <1111315740.8997.259.camel@bobcat.mine.nu> On Fri, 2005-03-18 at 09:50 +0100, Matthias Saou wrote: > seth vidal wrote : > > 2. How do folks want packagers to send notices about builds? Just a > > cvs tag? 
A webpage? A gpg-signed email with specific content? A custom > > xmpp/jabber-client to send a custom message to a listening build client > > across an xmpp infrastructure? :) > > Yeah, a "jabber bot"! :-D > For me, the easiest would be to trigger a build directly from a form on a > web page. Ideally, be able to track the build from the same page too : > Check status, view full build log once finished etc. Sounds easy, but most > certainly isn't, especially if we plan on supporting other archs than i386/ > x86_64 since it'll mean de-centralized builds. I think requiring a tag of some kind in CVS to trigger a build would be cool: it ensures that builds are actually tagged. Maybe something like this: "make tag" in a CVS checkout could do the tagging with the appropriate tag name derived from the package (N?)EVR(+branch?), as well as force-tag a special transient and moving BUILDME_DAMMIT'ish tag which the buildsys could look for. However, the buildsys needs to be able to figure out when to build based on new tags appearing in CVS or existing tags being moved to new revisions. That might be a bit tricky, but I think it's doable. Or there could be a special BUILDIT'ish file (possibly GPG signed) committed to CVS containing checksums of some kind and the tag to build. Regarding the tracking, I agree that a web page would work pretty well. Some examples from Debian: http://packages.qa.debian.org/ http://buildd.debian.org/ A mail interface should also be there, I think a notification message after a build containing links to the hypothetical web page for the real contents (build logs, possibly some other reports like rpmlint, etc) as above would be ok. Triggering a build by sending mail sounds somewhat cumbersome, ditto having to surf to a specific web page and clicking around. Being able to do it with CVS would be ideal IMO. 
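The buildsys side of the moving-tag scheme could be sketched like this — the tag name and data shapes are invented for illustration, and a real implementation would have to query the CVS server for its tag-to-revision mappings rather than receive them as dicts:

```python
# Sketch: decide which packages need a build by checking whether a
# moving "BUILDME"-style tag points at a revision that differs from
# the one recorded after the previous build pass.

def builds_needed(tagged, last_built, build_tag="BUILDME"):
    """Return {package: revision} for packages whose build tag moved.

    tagged:     {package: {tag_name: revision}} as currently in CVS.
    last_built: {package: revision} recorded after the previous pass.
    """
    pending = {}
    for pkg, tags in tagged.items():
        rev = tags.get(build_tag)
        if rev is not None and last_built.get(pkg) != rev:
            pending[pkg] = rev
    return pending

# Hypothetical state: 'foo' was re-tagged since its last build at 1.5,
# 'bar' has not moved, so only 'foo' should be queued.
cvs_state = {
    "foo": {"foo-1_2-3": "1.7", "BUILDME": "1.7"},
    "bar": {"bar-0_9-1": "1.4", "BUILDME": "1.4"},
}
already_built = {"foo": "1.5", "bar": "1.4"}

queue = builds_needed(cvs_state, already_built)
```

This only covers the "did the tag move" half; the single-arch and test-build questions raised in the thread would need extra state (e.g. the special-file approach) on top of it.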
From skvidal at phy.duke.edu Sun Mar 20 15:06:31 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Sun, 20 Mar 2005 10:06:31 -0500 Subject: build system glue scripts and requirements In-Reply-To: <1111315740.8997.259.camel@bobcat.mine.nu> References: <1111128752.30200.39.camel@cutter> <20050318095002.40c3ebde@python2> <1111315740.8997.259.camel@bobcat.mine.nu> Message-ID: <1111331192.15119.2.camel@cutter> > I think requiring a tag of some kind in CVS to trigger a build would be > cool: it ensures that builds are actually tagged. > > Maybe something like this: > "make tag" in a CVS checkout could do the tagging with the appropriate > tag name derived from the package (N?)EVR(+branch?), as well as force- > tag a special transient and moving BUILDME_DAMMIT'ish tag which the > buildsys could look for. However, the buildsys needs to be able to > figure out when to build based on new tags appearing in CVS or existing > tags being moved to new revisions. That might be a bit tricky, but I > think it's doable. Or there could be a special BUILDIT'ish file > (possibly GPG signed) committed to CVS containing checksums of some kind > and the tag to build. problem: what if you only want a single arch built, how do you specify a single arch be built via the tag? How do you deal with test non-release builds? That's my concern about tag-based building, it seems kinda limited. I'd think having tagged releases + some way of instructing the buildsystem that we want it built 1. for release 2. for a set of archs, would probably give us more flexibility. Maybe some of beehive's caretakers can tell us how beehive gets build requests signaled? 
-sv From ville.skytta at iki.fi Sun Mar 20 19:35:09 2005 From: ville.skytta at iki.fi (Ville =?ISO-8859-1?Q?Skytt=E4?=) Date: Sun, 20 Mar 2005 21:35:09 +0200 Subject: build system glue scripts and requirements In-Reply-To: <1111331192.15119.2.camel@cutter> References: <1111128752.30200.39.camel@cutter> <20050318095002.40c3ebde@python2> <1111315740.8997.259.camel@bobcat.mine.nu> <1111331192.15119.2.camel@cutter> Message-ID: <1111347310.8997.392.camel@bobcat.mine.nu> On Sun, 2005-03-20 at 10:06 -0500, seth vidal wrote: > > I think requiring a tag of some kind in CVS to trigger a build would be > > cool: it ensures that builds are actually tagged. > > > > Maybe something like this: > > "make tag" in a CVS checkout could do the tagging with the appropriate > > tag name derived from the package (N?)EVR(+branch?), as well as force- > > tag a special transient and moving BUILDME_DAMMIT'ish tag which the > > buildsys could look for. However, the buildsys needs to be able to > > figure out when to build based on new tags appearing in CVS or existing > > tags being moved to new revisions. That might be a bit tricky, but I > > think it's doable. Or there could be a special BUILDIT'ish file > > (possibly GPG signed) committed to CVS containing checksums of some kind > > and the tag to build. > > problem: what if you only want a single arch built, how do you specify a > single arch be built via the tag? How do you deal with test non-release > builds? That's my concern about tag-based building, it seems kinda > limited. > > I'd think having tagged releases + some way of instructing the > buildsystem that we want it built 1. for release 2. for a set of archs, > would probably give us more flexibility. The second "special file" approach could be used for this, although there might be better and more user friendly approaches. Anyway, eg. 
something like this (disclaimer: unfiltered raw braindump):

echo 'tag archs buildtype' \
  | gpg --clearsign > BUILD && cvs ci -m 'Build request' BUILD

(Yes, I'm aware that this doesn't contain enough uniquely identifying data so that it would make sense to sign it, think replayability elsewhere in the CVS repo. But don't get stuck with that now :)

Synopsis: tag [archs [buildtype]]
tag: The tag to build.
archs (optional): The archs to build for, ALL (or not present) means all supported.
buildtype (optional): Type of build, enumeration of keywords. "release" or not present means a release build, add others as needed.

The buildsys could either "cvs up" and/or "cvs stat" all BUILD files, or be configured to receive notifications from the CVS server some way when they are changed, eg. mail. Semi-offtopic: it would be good to prepare the infrastructure so that rpmbuild args could be passed to the build system some way. From skvidal at phy.duke.edu Mon Mar 21 01:19:46 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Sun, 20 Mar 2005 20:19:46 -0500 Subject: build system glue scripts and requirements In-Reply-To: <1111347310.8997.392.camel@bobcat.mine.nu> References: <1111128752.30200.39.camel@cutter> <20050318095002.40c3ebde@python2> <1111315740.8997.259.camel@bobcat.mine.nu> <1111331192.15119.2.camel@cutter> <1111347310.8997.392.camel@bobcat.mine.nu> Message-ID: <1111367986.16997.11.camel@cutter> > Semi-offtopic: it would be good to prepare the infrastructure so that > rpmbuild args could be passed to the build system some way.
One example > of potential packages that could benefit from this would be kernel > module packages where the module specfiles wouldn't need changes between > target kernel revisions, instead the target kernel(s) to build for would > be identified by eg. a --define to rpmbuild. > mach lets you pass extras flags to rpmbuild internally, but I agree with you. I've been butting my head up against a few things in mach that I think, for the way the buildsystem is going to have to layout might need to be moved around a bit. I'm hoping Thomas comes back from vacation or work or wherever he's been b/c I really would like to know what he thinks about some of this stuff. I had to hack up some of mach to get it to work the way it is now but I'd like to get things sorted out so we can either merge upstream or plan to make things match up at some later date. -sv From katzj at redhat.com Mon Mar 21 03:41:18 2005 From: katzj at redhat.com (Jeremy Katz) Date: Sun, 20 Mar 2005 22:41:18 -0500 Subject: build system glue scripts and requirements In-Reply-To: <1111128752.30200.39.camel@cutter> References: <1111128752.30200.39.camel@cutter> Message-ID: <1111376479.5620.34.camel@bree.local.net> On Fri, 2005-03-18 at 01:52 -0500, seth vidal wrote: > So right now I think things with mach and yum are working for building > fedora extras. The packages _seem_ like they're coming out right and > things seem functional enough. Yep, has seemed pretty reasonable to me in the poking at it I've done so far. mschwendt's idea of running a lot of already built packages through to see how they fare isn't a bad one. I'll try to set up one of my test boxes to do this over the course of this week (hopefully tomorrow). You don't have to worry about doing that right now :-) > The second part that I need help and > input on is the glue scripts and requirements for having automatic > triggering of builds for packagers. > > The questions I have: > 1. 
if this is meant to run on the red hat boxes in the PHX coloc, what > does that infrastructure look like? What features does it have? Can we > assume all the build boxes have access to the cvs tree? Do we need to > worry about pushing srpms around? I can't answer authoritatively on what the set up is, but I have a pretty good idea. As it stands right now, the build boxes are all x86_64 boxes with reasonable amounts of RAM and disk space. They should be able to access CVS (if not, that can be fixed) with the theory that you want the buildsystem to be given an explicit tag to build and then you check out off the tag, make the src.rpm and go. We _do_ need to worry about how the binary packages end up. I'm not entirely sure what's best here. Internally, we use writing to directory trees that look like release/package-N/V-R/A (N is name, V is version, R is release, A is arch) Then, the trees which end up on the FTP site are composed out of these directory trees. This has the nice feature of making the inheritance of builds from older releases a bit easier. What is slightly more complicated with a scheme like this is how do you update the repodata after a build completes quickly with the new package info. Some of the "createrepo should be able to be done incrementally" people will probably come back out of the woodwork. > 2. How do folks want packagers to send notices about builds? Just a > cvs tag? A webpage? A gpg-signed email with specific content? A custom > xmpp/jabber-client to send a custom message to a listening build client > across an xmpp infrastructure? :) Magic occurring at cvs tag time is less than ideal. People like it from the "oh, it looks simple" perspective, but there's a lot of other metadata that you sometimes need. Having it be a web form that you give all the appropriate info to seems reasonable to me. Or XML-RPC. Or something like that. 
Either gives you the ability to have a relatively simple makefile target for a 'make build' with the appropriate info you need to set[1]. This still all works off of a tag being made [2] Then, after the build, we probably want to kick off a "build complete" mail at least to the originator of the build. Failed builds should similarly get a mail and the build logs need to be able to be easily accessible. One other thing that springs to mind is the question of what arches to build packages for and whether a specific arch failing should block the build. My opinion, which matches how Core gets built, is that * Packages get built for all Extras arches. ExcludeArch/ExclusiveArch can be used for the (rare) things which need otherwise * Build failures on one arch block all arches. Otherwise, some arches will fall far behind and things like prereqs get really painful > 3. What things am I missing or not understanding about what is needed > from the build system? The requirements I've been working under > are/were: > - self hosting on Fedora Core > - not crazy > What else do I need to think about? Security is probably one. Although "not crazy" probably covers that. :) Otherwise, I can't think of big things which need thinking about. Perhaps easy setup such that it can be included in the kickstart configs for the build machines without any real difficulty. But how things look now doesn't seem bad for that. And the bit that we talked about during LinuxWorld on how we want to make it possible and easy for developers to download the buildsystem and get it going on their own workstation for test builds. That then also enables other third party repositories (there won't be only one :-) to use it and get some consistency. > 4. Who else is interested in working on this and getting things > progressing more? The yum changes to mach are just a hackjob to get a > problem solved for the short term. However, I'd like to continue down > this general line of development. 
so Where do we go from > here? I'm interested... let me see how much time I can commit. I think where we go from here is mostly trying to get some of the glue pieces working. Jeremy [1] Even if this requires writing a (simple-ish) python app to run for kicking it off [2] Yes, I know that the tags aren't being done right now. I'll get that added to Makefile.common tonight or in the morning. From notting at redhat.com Mon Mar 21 03:46:28 2005 From: notting at redhat.com (Bill Nottingham) Date: Sun, 20 Mar 2005 22:46:28 -0500 Subject: build system glue scripts and requirements In-Reply-To: <1111331192.15119.2.camel@cutter> References: <1111128752.30200.39.camel@cutter> <20050318095002.40c3ebde@python2> <1111315740.8997.259.camel@bobcat.mine.nu> <1111331192.15119.2.camel@cutter> Message-ID: <20050321034628.GA10610@nostromo.devel.redhat.com> seth vidal (skvidal at phy.duke.edu) said: > Maybe some of beehive's caretakers can tell us how beehive gets build > requests signaled? Explicit signals via a 'make build'. Users can pass a variable to build in a scratch destination. 'make build' signals the build system to attempt to check out the current n-v-r from CVS (in general, you need to do 'make tag' before 'make build'), and build in the default location for that branch of CVS. Bill From wtogami at redhat.com Mon Mar 21 04:01:33 2005 From: wtogami at redhat.com (Warren Togami) Date: Sun, 20 Mar 2005 18:01:33 -1000 Subject: build system glue scripts and requirements In-Reply-To: <1111331192.15119.2.camel@cutter> References: <1111128752.30200.39.camel@cutter> <20050318095002.40c3ebde@python2> <1111315740.8997.259.camel@bobcat.mine.nu> <1111331192.15119.2.camel@cutter> Message-ID: <423E471D.4070201@redhat.com> seth vidal wrote: > > problem: what if you only want a single arch built, how do you specify a > single arch be built via the tag? How do you deal with test non-release > builds? That's my concern about tag-based building, it seems kinda > limited. 
> We shouldn't allow only a single arch to be built into the repository. But for flexibility, if a package has a difficult time being fixed for an arch (after some effort is put into it), the packager should temporarily Exclude, build, and file a bug assigned to an arch-group of maintainers. For example Thorsten seems interested in making packages work on x86-64, while dwmw2 on ppc. To answer your other question, we need both a tag and target repository. Warren Togami wtogami at redhat.com From skvidal at phy.duke.edu Mon Mar 21 08:48:05 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Mon, 21 Mar 2005 03:48:05 -0500 Subject: new mach+yum pkgs Message-ID: <1111394886.16997.70.camel@cutter> Hi Folks, some new mach+yum pkgs are up: http://linux.duke.edu/~skvidal/mach/pkgs/ Changes:
- fix for traceback when rebuilding srpm when spec file in srpm is mode 600
- add in:
lrwxrwxrwx 1 root root 15 Apr 17 2003 fd -> ../proc/self/fd
crw-rw-rw- 1 root root 1, 7 Dec 11 15:49 full
crw-rw-rw- 1 root root 1, 3 Dec 11 15:49 null
crw-rw-rw- 1 root root 5, 2 Dec 11 15:49 ptmx
crw-r--r-- 1 root root 1, 8 Dec 11 15:49 random
crw-rw-rw- 1 root root 5, 0 Dec 11 15:49 tty
crw-r--r-- 1 root root 1, 9 Dec 11 15:49 urandom
crw-rw-rw- 1 root root 1, 5 Dec 11 15:49 zero
to /dev in chroots to make sure builds work :)
- make the command 'mach yum' actually work
At this point I'll probably check the code in as 'extras-buildsys-temp' or something equally as obviously named. Can anyone here tell me if I
Can anyone here tell me if I > can even make new modules in /cvs/fedora? We should include this into Fedora Extras as rpm as well to get more people using it. greetings, Florian La Roche From skvidal at phy.duke.edu Mon Mar 21 08:57:14 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Mon, 21 Mar 2005 03:57:14 -0500 Subject: new mach+yum pkgs In-Reply-To: <20050321085236.GA7164@dudweiler.stuttgart.redhat.com> References: <1111394886.16997.70.camel@cutter> <20050321085236.GA7164@dudweiler.stuttgart.redhat.com> Message-ID: <1111395434.16997.74.camel@cutter> On Mon, 2005-03-21 at 09:52 +0100, Florian La Roche wrote: > > At this point I'll probably check the code in as 'extras-buildsys-temp' > > or something equally as obviously named. Can anyone here tell me if I > > can even make new modules in /cvs/fedora? > > We should include this into Fedora Extras as rpm as well to get more > people using it. we need to make a decision about separation or integration with upstream mach cvs, esp wrt mach3. if we're not going to merge back to mach2 head (which notably could take a lot of work b/c I've not focused on making that easy with these patches) I'd rather rename this something innocuous so we don't annoy thomas with bugs from this version. -sv From wtogami at redhat.com Mon Mar 21 10:42:27 2005 From: wtogami at redhat.com (Warren Togami) Date: Mon, 21 Mar 2005 00:42:27 -1000 Subject: new mach+yum pkgs In-Reply-To: <1111395434.16997.74.camel@cutter> References: <1111394886.16997.70.camel@cutter> <20050321085236.GA7164@dudweiler.stuttgart.redhat.com> <1111395434.16997.74.camel@cutter> Message-ID: <423EA513.8050102@redhat.com> seth vidal wrote: > we need to make a decision about separation or integration with upstream > mach cvs, esp wrt mach3. 
> > if we're not going to merge back to mach2 head (which notably could take > a lot of work b/c I've not focused on making that easy with these > patches) I'd rather rename this something innocuous so we don't annoy > thomas with bugs from this version. yach? Warren From skvidal at phy.duke.edu Mon Mar 21 20:53:49 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Mon, 21 Mar 2005 15:53:49 -0500 Subject: code checked in Message-ID: <1111438429.6458.48.camel@cutter> I checked the mach+yum stuff into: extras-buildsys-temp in /cvs/fedora -sv From skvidal at phy.duke.edu Fri Mar 25 10:09:36 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Fri, 25 Mar 2005 05:09:36 -0500 Subject: build system glue scripts and requirements In-Reply-To: <1111376479.5620.34.camel@bree.local.net> References: <1111128752.30200.39.camel@cutter> <1111376479.5620.34.camel@bree.local.net> Message-ID: <1111745377.20715.44.camel@cutter> > Yep, has seemed pretty reasonable to me in the poking at it I've done so > far. mschwendt's idea of running a lot of already built packages > through to see how they fare isn't a bad one. I'll try to set up one of > my test boxes to do this over the course of this week (hopefully > tomorrow). You don't have to worry about doing that right now :-) You did this and it did reasonably okay, right? > We _do_ need to worry about how the binary packages end up. I'm not > entirely sure what's best here. Internally, we use writing to directory > trees that look like > release/package-N/V-R/A > (N is name, V is version, R is release, A is arch) > > Then, the trees which end up on the FTP site are composed out of these > directory trees. This has the nice feature of making the inheritance of > builds from older releases a bit easier. > > What is slightly more complicated with a scheme like this is how do you > update the repodata after a build completes quickly with the new package > info. 
Some of the "createrepo should be able to be done incrementally" people will probably come back out of the woodwork. Those people have too much RAM. If push comes to shove I can think of a few ways of doing this. createrepo will read symlinks as well, so we could do something grotty like a symlink-farm for packages per repo. > One other thing that springs to mind is the question of what arches to > build packages for and whether a specific arch failing should block the > build. My opinion, which matches how Core gets built, is that > * Packages get built for all Extras arches. ExcludeArch/ExclusiveArch > can be used for the (rare) things which need otherwise > * Build failures on one arch block all arches. Otherwise, some arches > will fall far behind and things like prereqs get really painful This might be tricky. I'm worried about notifications b/t the buildsystems. Or are you thinking about having levels like:
buildsystem -> pops out rpm in some known location
releasesystem -> determines if all arches have built and therefore if the pkg ever gets moved/copied/linked to the release tree it's targeted for.
> And the bit that we talked about during LinuxWorld on how we want to > make it possible and easy for developers to download the buildsystem and > get it going on their own workstation for test builds. That then also > enables other third party repositories (there won't be only one :-) to > use it and get some consistency. Here's what I'm thinking right now: if we can get a good working interface for bugzilla for package tracking, and we can define some new fields/tags for items in this interface, then we should be able to let it work for us.
1. packager tags a release in cvs using make tag or whatever.
2. packager updates the package status in the bugzilla package tracker
   a. they mark it as build
   b. they mark it for what release (fc3, testing, rawhide, etc)
   c. they input the cvs tag to build from
3. build system, at regular intervals, queries this information via xml-rpc to bugzilla. It builds the packages and attaches the log reports (or links to the log reports) to the comments in the package tracking interface.
4. build system puts the finalized packages + other stuff in some path that's web accessible as you described above. (CAVEAT: some special casing for embargo'd builds will need to be put in place)
Advantages:
1. users cannot directly request builds so the system isn't overwhelmed
2. regular intervals means the user doesn't have to wait for some person to kick off a build.
3. Having one build master system scan and kick off builds on other machines/arches is not crazy OR having multiple build systems scan, mark the package as 'being built for arch foo on system bar' is also not outside the realm of possibility (though there might be race conditions there)
4. Reasonably scalable as more build systems/packagers are added
Disadvantages:
1. might be overstating the functionality available in the xml-rpc interface to bugzilla
2. Users cannot directly kick off builds, they have to wait (waaaaaaah)
3. Dealing with Embargo'd builds - gonna be a pain no matter what
4. Ordering of builds based on dependency
5. package tracking system does not yet exist.
thoughts? -sv
From gdk at redhat.com Fri Mar 25 14:56:57 2005 From: gdk at redhat.com (Greg DeKoenigsberg) Date: Fri, 25 Mar 2005 09:56:57 -0500 (EST) Subject: build system glue scripts and requirements In-Reply-To: <1111745377.20715.44.camel@cutter> References: <1111128752.30200.39.camel@cutter> <1111376479.5620.34.camel@bree.local.net> <1111745377.20715.44.camel@cutter> Message-ID: On Fri, 25 Mar 2005, seth vidal wrote: > Disadvantages: > 1. might be overstating the functionality available in the xml-rpc > interface to bugzilla The xml-rpc functionality doesn't really need to be too complicated, I don't think. It's basically just:
* An interface to create a bug according to some "build request form".
Simple "new bug" function, got it.
* An interface to walk through a set of bugs and pull interesting fields. Got it.
* An interface to add comments to existing bugs. Got it.
And what we don't got, dkl can create, if it's sensible. > 2. Users cannot directly kick off builds, they have to wait (waaaaaaah) > 3. Dealing with Embargo'd builds - gonna be a pain no matter what > 4. Ordering of builds based on dependency How do you do this now? Is it trial and error, or do you have a heuristic that works? > 5. package tracking system does not yet exist.
--g
Greg DeKoenigsberg, Community Relations, Red Hat
From katzj at redhat.com Fri Mar 25 15:04:09 2005 From: katzj at redhat.com (Jeremy Katz) Date: Fri, 25 Mar 2005 10:04:09 -0500 Subject: build system glue scripts and requirements In-Reply-To: <1111745377.20715.44.camel@cutter> References: <1111128752.30200.39.camel@cutter> <1111376479.5620.34.camel@bree.local.net> <1111745377.20715.44.camel@cutter> Message-ID: <1111763049.25887.47.camel@bree.local.net> On Fri, 2005-03-25 at 05:09 -0500, seth vidal wrote: > > Yep, has seemed pretty reasonable to me in the poking at it I've done so > > far. mschwendt's idea of running a lot of already built packages > > through to see how they fare isn't a bad one. I'll try to set up one of > > my test boxes to do this over the course of this week (hopefully > > tomorrow). You don't have to worry about doing that right now :-) > > You did this and it did reasonably okay, right? Yeah, almost everything succeeded.
Just quick checking some of the failure logs showed missing BuildRequires being the reason, which seems sane. Thanks for poking to remind me. > > One other thing that springs to mind is the question of what arches to > > build packages for and whether a specific arch failing should block the > > build. My opinion, which matches how Core gets built, is that > > * Packages get built for all Extras arches. ExcludeArch/ExclusiveArch > > can be used for the (rare) things which need otherwise > > * Build failures on one arch block all arches. Otherwise, some arches > > will fall far behind and things like prereqs get really painful > > This might be tricky. I'm worried about notifications b/t the > buildsystems. Or are you thinking about having levels like: > > buildsystem -> pops out rpm in some known location > releasesystem -> determines if all arches have built and therefore if > the pkg ever gets moved/copied/linked to the release tree it's targeted > for. That's the easiest way to do it. A file gets dropped in specific location (BUILD-SUCCESS, BUILD-FAIL) that can just be watched for. We don't have to have instantaneous knowledge of build success. > > And the bit that we talked about during LinuxWorld on how we want to > > make it possible and easy for developers to download the buildsystem and > > get it going on their own workstation for test builds. That then also > > enables other third party repositories (there won't be only one :-) to > > use it and get some consistency. > > Here's what I'm thinking right now: > If we can get a good working interface for bugzilla for package > tracking. And we can define some new fields/tags for items in this > interface then we should be able to let it work for us. This could work. Although more "fields" in bugzilla always makes me a little bit wary. Even though I know dkl would end up just overloading existing fields in some cases and making them look different with template magic. > 4. 
build system puts the finalized packages + other stuff in some path > that's web accessible as you described above.(CAVEAT: some special > casing for embargo'd builds will need to be put in place) Yeah, embargo'd stuff probably requires thought. There's not really a way to do it right now with the CVS repo either, though, so it's further out as a question > 3. Having one build master system scan and kick off builds on other > machines/arches is not crazy OR having multiple build systems scan, > mark the package as 'being built for arch foo on system bar' is > also not outside the realm of possibility (though there might be > race conditions there) I think there is provision for having a build master machine? Cristian? > Disadvantages: > 1. might be overstating the functionality available in the xml-rpc > interface to bugzilla It just sounds like query is mostly needed. And for the nice, easy to use build target from the makefile, open bug/add comments. Those should both be present. > 2. Users cannot directly kick off builds, they have to wait (waaaaaaah) So long as the polling is done frequently enough, this isn't a huge deal. > 3. Dealing with Embargo'd builds - gonna be a pain no matter what See above. > 4. Ordering of builds based on dependency FIFO. You need to request your builds in dep order. Sucks a little, but is easy to implement :) Also, another thing I've thought about (and it's run through my head a few times, this is just the first time I've remembered to type it out). One thing that would be nice would be able to have multiple roots for the same release in mach easily (ie, without having to make copies of the config files with tweaked paths :-). At the same time, that's probably not something that matters in the short term as it could easily be added as an optimization later. 
Jeremy From skvidal at phy.duke.edu Fri Mar 25 16:28:26 2005 From: skvidal at phy.duke.edu (seth vidal) Date: Fri, 25 Mar 2005 11:28:26 -0500 Subject: build system glue scripts and requirements In-Reply-To: References: <1111128752.30200.39.camel@cutter> <1111376479.5620.34.camel@bree.local.net> <1111745377.20715.44.camel@cutter> Message-ID: <1111768106.23491.11.camel@cutter> > * An interface to create a bug according to some "build request form". > Simple "new bug" function, got it. > > * An interface to walk through a set of bugs and pull interesting fields. > Got it. > > * An interface to add comments to existing bugs. Got it. Maybe an interface to lock the bug for a few seconds while you do this update so two systems don't end up answering at the same time, but I'm guessing that's implicit. > And what we don't got, dkl can create, if it's sensible. sensible is important. > > 2. Users cannot directly kick off builds, they have to wait (waaaaaaah) > > 3. Dealing with Embargo'd builds - gonna be a pain no matter what > > 4. Ordering of builds based on dependency > > How do you do this now? Is it trial and error, or do you have a heuristic > that works? pass mach a list of related pkgs and it builds them in the right order, installing the buildreqs it needs for the others it is building in that set. So it sounds like the order of operations is:
- package tracking system
- xml-rpc interface
- cvs tagging for extras
- buildsystem scripts to pull/update this stuff.
sound right? -sv
From wtogami at redhat.com Sat Mar 26 09:08:44 2005 From: wtogami at redhat.com (Warren Togami) Date: Fri, 25 Mar 2005 23:08:44 -1000 Subject: RPATH and build root traces Message-ID: <4245269C.6000101@redhat.com>
/usr/lib/rpm/check-buildroot
/usr/lib/rpm/check-rpaths

%__arch_install_post /usr/lib/rpm/check-rpaths /usr/lib/rpm/check-buildroot

fedora-rpmdevtools contains these two scripts that can be run at the end of rpmbuild automatically with this above rpmmacro.
Can we consider adding this as a standard to mach buildroots? Enrico's scripts above have worked very well for us in detecting and forcing us to correct RPATH problems for a long time now. I am not aware of any false positives discovered during all this time. Warren Togami wtogami at redhat.com From fedora at leemhuis.info Sat Mar 26 18:40:47 2005 From: fedora at leemhuis.info (Thorsten Leemhuis) Date: Sat, 26 Mar 2005 19:40:47 +0100 Subject: RPATH and build root traces In-Reply-To: <4245269C.6000101@redhat.com> References: <4245269C.6000101@redhat.com> Message-ID: <1111862447.6225.3.camel@notebook.thl.home> On Friday, 25.03.2005 at 23:08 -1000, Warren Togami wrote: > /usr/lib/rpm/check-buildroot > /usr/lib/rpm/check-rpaths > > %__arch_install_post /usr/lib/rpm/check-rpaths /usr/lib/rpm/check-buildroot > > fedora-rpmdevtools contains these two scripts that can be run at the end > of rpmbuild automatically with this above rpmmacro. Can we consider > adding this as a standard to mach buildroots? > > Enrico's scripts above have worked very well for us in detecting and > forcing us to correct RPATH problems for a long time now. I am not > aware of any false positives discovered during all this time. I'm all for it, but a small warning here: I think that a lot of x86_64 packages will fail due to hardcoded RPATH -- I saw it in a lot of different packages in the past. Some were fixed, a lot not, because I considered fixing x86_64 packages for extras more important at this point than fixing all appearances of hardcoded RPATH. Maybe we should wait with this until after FC4 and FC4 extras are out? Otherwise a lot of x86_64 packages that were in FC3 extras might be missing in FC4 extras. Or we could modify the script so it warns only for the moment. After FC4, we could modify it again so the build fails again if it finds a hardcoded RPATH.
-- Thorsten Leemhuis From wtogami at redhat.com Sun Mar 27 21:19:56 2005 From: wtogami at redhat.com (Warren Togami) Date: Sun, 27 Mar 2005 11:19:56 -1000 Subject: RPATH and build root traces In-Reply-To: <1111862447.6225.3.camel@notebook.thl.home> References: <4245269C.6000101@redhat.com> <1111862447.6225.3.camel@notebook.thl.home> Message-ID: <4247237C.6040307@redhat.com> Thorsten Leemhuis wrote: > > > I'm all for it but a small warning here: I think that a lot of x86_64 > packages will fail due to hardcoded RPATH -- I saw it in a lot of > different packages in the past. Some were fixed, a lot not, because I > considered fixing x86_64 packages for extras was more important at this > point then to fix all appearances of hardcoded RPATH. > > Maybe we should wait with this after FC4 and FC4 extras are out? > Otherwise a lot of x86_64 packages that were in FC3 extras might be > missing in FC4 extras. Or we could modify the script so it warns only > for the moment. After FC4, we could modify it again so the build fails > again if it finds a hardcoded RPATH. > Alternatively: RPATH check only i386 and ppc. *OR* RPATH errors on only the most dangerous RPATHs like "" or "."
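[Editor's note: Warren's "most dangerous RPATHs" idea could be prototyped with something like the following. This is a rough sketch, not Enrico's actual check-rpaths script; the function name and the exact danger criterion (any non-absolute entry, which covers both "" and ".") are assumptions for illustration.]

```shell
# Hypothetical sketch, not the real check-rpaths: print only the most
# dangerous entries of a colon-separated RPATH -- anything that is not
# an absolute path, which covers both "" and ".".
check_rpath() {
    echo "$1" | tr ':' '\n' | grep -v '^/' || true
}
check_rpath "/usr/lib:."       # prints "."
check_rpath "/usr/lib64"       # prints nothing
```

A build wrapper could then fail the package only when this output is non-empty, which would let the many harmless-but-hardcoded absolute RPATHs through until after FC4 as Thorsten suggests.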