[libvirt] [RFC PATCH 2/2] qemu: Force capabilities cache read if libvirtd date is different
Daniel P. Berrange
berrange at redhat.com
Fri May 22 10:02:57 UTC 2015
On Fri, May 22, 2015 at 11:56:24AM +0200, Michal Privoznik wrote:
> On 20.05.2015 19:35, John Ferlan wrote:
> > On 05/20/2015 10:22 AM, Ján Tomko wrote:
> >> s/read/refresh/ in the commit message?
> >> On Wed, May 20, 2015 at 08:52:57AM -0400, John Ferlan wrote:
> >>> https://bugzilla.redhat.com/show_bug.cgi?id=1195882
> >>> Original commit id 'cbde3589' indicates that the cache file would be
> >>> discarded if either the QEMU binary or libvirtd 'ctime' changes; however,
> >>> the code only discarded if the QEMU binary time didn't match or if the
> >>> new libvirtd ctime was later than what created the cache file.
> >>> This could cause problems on maintenance or patch branches, where the
> >>> order in which libvirtd binaries are built matters: if a libvirtd built
> >>> from (a) backported patch(es) on 'date x' is followed by an install of
> >>> the primary release built on 'date y', and 'date x' is later than
> >>> 'date y', then features of the primary release branch may not be
> >>> available.
> >> I can see how there can be two daemons with different ctimes on the same
> >> system during development, so ACK to the change.
> > But they'd use different cache paths, right?
> >> However, when you install the daemon (even from an older package), ctime
> >> should only move forward, so I'm sceptical about its relevance to the
> >> referenced bug.
> > Perhaps that's my misinterpretation or misunderstanding of ctime -
> > certainly in a different OS I used to work on many years ago, ctime was
> > when the image was created by a build, not the inode or file change
> > time as I (now) read it to be...
> > So hmmm... that blows my theory to shreds - unless you account for the
> > window of time during an install where files are being replaced while
> > libvirtd is still running an older release. As Dan notes on my 1/2,
> > removing the cache in between would be racy. So would it also be racy
> > if something caused a cache read: the code finds an updated libvirtd,
> > asks qemu for its current capabilities, gets an answer, and creates the
> > file based on the current (or old) understanding... Then when libvirtd
> > restarts it finds a ctime matching its own, doesn't update the cache,
> > and of course doesn't have the latest bits.
> How about computing a hash of the qemu binary, and if the hashes do not
> match, rebuilding the cache?
That'd be significantly slower than today's code, and I don't think it
is needed. We just need to change the check from '< date' to '!= date'.
The current '< date' check was written on the (mistaken) assumption
that the ctime would be set to the time at which the RPM package
was installed, i.e. even if you downgraded libvirt, I thought you'd
get a newer ctime due to the install time. Since RPM preserves timestamps
from the build environment, this assumption was invalid. Using '!= date'
should deal with it adequately, as the chance of two separate RPM
builds having happened at precisely the same time is minimal.
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|