[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: What Fedora makes sucking for me - or why I am NOT Fedora

James Antill wrote:
> On Tue, 2008-12-09 at 09:43 -0900, Jeff Spaleta wrote:
>> On Tue, Dec 9, 2008 at 9:33 AM, Les Mikesell <lesmikesell gmail com> wrote:
>>> But, as I've mentioned before, I think you'd get much better public
>>> participation in testing if yum could do repeatable updates.  That is, I'm
>>> only interested in testing exactly the update that I will later do on my own
>>> more critical machine(s), and I'm not interested enough to maintain my own
>>> mirrored repository, which is currently the only way to get exactly the same
>>> set of programs installed on 2 different machines at different times.
>> Do you mean.... we require all the mirrors to hold all versions of all
>> updates for a release cycle?
>
>  Yes, that's the only sane way to do it. yum-debug-dump /
> yum-debug-restore will somewhat do the above, if the repo. has the old
> versions available.

I don't think that's the only sane way to do it, but it is the most obvious. A simpler addition would be a mechanism in yum to report the latest update timestamp, or some repo transaction id(s), that could be fed to another instance to make it ignore subsequent changes to the repo(s) and update to exactly the same package set. That would be useful in its own right, and appreciated when inherited by the enterprise versions.
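To make the idea concrete, here is a minimal sketch of what such a "record the exact set, replay it later" mechanism could look like. All the function names and the JSON format are hypothetical assumptions for illustration; yum-debug-dump / yum-debug-restore already do something similar when the repo still carries the old versions.

```python
# Sketch: machine A records the exact package set after a tested update;
# machine B later installs only those exact versions, and reports anything
# the mirror has since dropped instead of silently taking something newer.
import json

def dump_package_set(installed):
    """Serialize a name -> exact-version map (a tiny yum-debug-dump)."""
    return json.dumps(installed, sort_keys=True)

def plan_update(dump, available):
    """For each recorded package, pick exactly the recorded version.
    available: name -> list of versions the mirror still carries."""
    wanted = json.loads(dump)
    plan, missing = {}, []
    for name, version in wanted.items():
        if version in available.get(name, []):
            plan[name] = version
        else:
            missing.append(name)
    return plan, missing

# Machine A, just after a tested update:
dump = dump_package_set({"bash": "3.2-33.fc10", "yum": "3.2.20-3.fc10"})

# Machine B, later, against a mirror that has moved on:
plan, missing = plan_update(dump, {
    "bash": ["3.2-33.fc10", "3.2-39.fc10"],
    "yum": ["3.2.21-1.fc10"],   # the tested version was already dropped
})
```

The failure mode this exposes is exactly the one James describes: with only-the-latest mirrors, the "missing" list grows over time and the replay becomes impossible without a local copy.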

> However Fedora only has the latest, so the only real
> alternative is creating a new repo. of some kind ... at which point you
> might as well just do a local mirror.

Fedora has this split personality about wanting to be both production-usable and the leading edge where new code first meets a lot of new situations. You can't quite be both at once. However, it could actually pull it off if there were a way designed in to avoid some of the bugs pushed out in updates reaching critical machines. Asking every user to maintain a full repo mirror just doesn't sound like a reasonable approach to this, though, especially if you think the mirrors themselves would have a problem storing all the updates.

It could be as simple as batching updates: suppose everything except critical security fixes and corrections for known-bad updates were pushed only every few weeks, and the user could choose (with a permanent option) whether any particular machine should update on the leading or trailing edge of that window.
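The batching policy above is easy to sketch. The window length, option names, and the decision function are all assumptions made up for illustration, not anything yum actually implements:

```python
# Sketch of the batching idea: critical fixes go out immediately; everything
# else waits out the batch window on "trailing" machines and not at all on
# "leading" ones.  Window length is an illustrative assumption.
from datetime import date, timedelta

BATCH_WINDOW = timedelta(weeks=3)

def should_apply(update_pushed, today, edge="trailing", critical=False):
    """Decide whether this machine should take the update yet."""
    if critical or edge == "leading":
        return True
    return today - update_pushed >= BATCH_WINDOW

# A leading-edge test box takes an update the day after it is pushed,
# while a trailing-edge server waits until the window has elapsed.
```

The point of the permanent per-machine option is that the trailing-edge machines only ever see updates that the leading-edge machines have already been running for the whole window.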

Or, pick a time frame reasonable both for mirrors to hold updates and for users to complete testing (2 months?) and only remove packages after their replacements have reached that age.
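That retention rule can also be stated precisely: a version is only safe to prune once its replacement has been out for the full grace period. The function below is a hypothetical sketch of that policy, not a real mirror tool; the 60-day figure is just the "2 months?" from the text:

```python
# Sketch of age-based retention: keep every old package until its successor
# has been available for the grace period, so a trailing tester can still
# fetch exactly what they tested.
from datetime import date, timedelta

GRACE = timedelta(days=60)   # "2 months?" from the text, as an assumption

def prunable(versions, today):
    """versions: list of (version, push_date), oldest first.
    Return the versions that are safe to drop: those whose immediate
    successor has been out for at least GRACE.  The newest is never dropped."""
    drop = []
    for i, (ver, _pushed) in enumerate(versions[:-1]):
        successor_pushed = versions[i + 1][1]
        if today - successor_pushed >= GRACE:
            drop.append(ver)
    return drop
```

Under this rule the mirror load is bounded: at worst it holds every version pushed in the last grace period, plus one older version per package.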

Or, what if one machine's yum automatically acted as a proxy for another's update? An error would be generated if the package hadn't already been downloaded there, and, if you want to be even more helpful, a warning if none of the code from the package had been run on the intermediate machine. That way you'd get local mirroring of just the desired packages without extra work anywhere; in fact, you'd get both repeatable updates and a load reduction on the mirrors out of it. It's probably possible to do this now with a lot of extra steps, but nobody is going to do it unless it is one step and looks like a cleverly planned design.
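The key behavior in the proxy idea is the refusal: if the peer never downloaded a package, the request fails loudly instead of falling through to the mirror and pulling in something untested. A minimal sketch, with a made-up cache layout and error type (this is not yum's actual plugin API):

```python
# Sketch of the peer-proxy idea: serve packages out of the cache of a
# machine that already took the update; refuse anything it never fetched.
import os

class PeerCacheError(Exception):
    """Raised when the peer never downloaded the requested package."""

def fetch_from_peer(cache_dir, rpm_name):
    """Return the bytes of an rpm from the peer's cache, or refuse."""
    path = os.path.join(cache_dir, rpm_name)
    if not os.path.exists(path):
        raise PeerCacheError(
            "%s was never updated on the peer - refusing to go newer"
            % rpm_name)
    with open(path, "rb") as f:
        return f.read()
```

The "one step" version would simply be the second machine pointing its repo at the first, with this refusal built in; today you can approximate it by hand with keepcache and a local web server, which is exactly the many-extra-steps situation the text complains about.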

Or, design a working solution to back out broken changes. The only one I'd trust would be to install with a spare system partition that is synchronized with the active one just before an update and kept as an alternate boot, as a fairly drastic fail-back mechanism. And even that won't work where an update modifies the file formats of things on other partitions.

   Les Mikesell
   lesmikesell gmail com
