[libvirt] [PATCH v2 7/7] qemu_security: Lock metadata while relabelling

Daniel P. Berrangé berrange at redhat.com
Fri Aug 24 12:53:07 UTC 2018


On Fri, Aug 24, 2018 at 02:01:52PM +0200, Michal Privoznik wrote:
> On 08/23/2018 06:14 PM, Michal Privoznik wrote:
> > On 08/23/2018 05:51 PM, Daniel P. Berrangé wrote:
> >> On Thu, Aug 23, 2018 at 05:45:42PM +0200, Michal Privoznik wrote:
> >>> On 08/23/2018 05:01 PM, Daniel P. Berrangé wrote:
> >>>> On Thu, Aug 23, 2018 at 04:54:01PM +0200, Michal Privoznik wrote:
> >>>>> On 08/20/2018 07:17 PM, Michal Prívozník wrote:
> >>>>>> On 08/20/2018 05:16 PM, Daniel P. Berrangé wrote:
> >>>>>>> On Tue, Aug 14, 2018 at 01:19:43PM +0200, Michal Privoznik wrote:
> >>>>>>>> Fortunately, we have qemu wrappers so it's sufficient to put
> >>>>>>>> the lock/unlock calls only there.
> >>>>>>>>
> >>>>>>>> Signed-off-by: Michal Privoznik <mprivozn at redhat.com>
> >>>>>>>> ---
> >>>>>>>>  src/qemu/qemu_security.c | 107 +++++++++++++++++++++++++++++++++++++++++++++++
> >>>>>>>>  1 file changed, 107 insertions(+)
> >>>>>>>>
> >>
> >> Oh right, yes, that kills the idea of using a WAIT flag for lockspaces,
> >> unless we want to introduce threads into virtlockd, but we can't do
> >> that because Linux has totally fubar'd locking across execve() :-(
> 
> Actually, it's a problem with open() + close(), not execve(). And no dummy
> implementation (as I'm suggesting below) will help us with that.

No, I really do mean execve() here.  The open() + close() issue is the
awful standards-compliant behaviour. The execve() issue with threads
is a trainwreck of Linux's own making:

  https://bugzilla.redhat.com/show_bug.cgi?id=1552621
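
For context, the standards-compliant behaviour in question is the POSIX
fcntl() record lock semantics: closing *any* descriptor a process holds
for a file drops every lock that process has on it. A minimal
illustration (hypothetical path, error checking omitted):

  #include <fcntl.h>
  #include <unistd.h>

  int main(void)
  {
      int fd1 = open("/tmp/meta.lck", O_RDWR | O_CREAT, 0600);
      struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };

      fcntl(fd1, F_SETLK, &fl);       /* whole-file write lock via fd1 */

      int fd2 = open("/tmp/meta.lck", O_RDONLY);
      close(fd2);                     /* releases the fd1 lock as well */

      return 0;
  }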

> > But we need WAIT. I guess we do not want to do:
> > 
> > lockForMetadata(const char *path) {
> >   int retries;
> > 
> >   while (retries) {
> >     rc = try_lock(path, wait=false);
> > 
> >     if (!rc) break;
> >     if (errno == EAGAIN && retries--) { usleep(100000); continue; }
> > 
> >     errorOut();
> >   }
> > }
> > 
> > However, if we make sure that virtlockd never fork()s nor exec()s, we
> > have nothing to fear. Or am I missing something? And to be 100% sure,
> > we can always provide a dummy impl for fork() + execve() in
> > src/locking/lock_daemon.c which fails every time.
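
A dummy implementation along those lines could be as simple as the
sketch below (hypothetical code, not the actual lock_daemon.c):
definitions in the daemon binary shadow the libc versions, so any
accidental fork()/execve() inside virtlockd fails loudly instead of
silently dropping the locks.

  #include <errno.h>
  #include <sys/types.h>
  #include <unistd.h>

  /* Hypothetical overrides -- illustrative only. */
  pid_t fork(void)
  {
      errno = ENOSYS;
      return -1;
  }

  int execve(const char *path, char *const argv[], char *const envp[])
  {
      errno = ENOSYS;
      return -1;
  }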
> 
> I work around this by putting a mutex into the secdriver so that only one
> thread can do lock(). The others have to wait until that thread unlock()s
> the path. This way we leave virtlockd with just one thread. In my
> testing everything works like a charm.

That sounds reasonable, so we don't need the _WAIT behaviour in
virtlockd itself, as everything will wait in the secdriver instead.
At least for now, until we modularize the startup process with the
shim. I guess that's just one more todo item to solve for the shim,
so it's not the end of the world.
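
The serialization Michal describes could look roughly like the sketch
below -- illustrative names only, not the actual secdriver code: a
single mutex wrapped around the whole lock/relabel/unlock cycle, so the
single-threaded virtlockd only ever sees one outstanding request at a
time.

  #include <pthread.h>

  /* Hypothetical helpers standing in for the virtlockd RPC calls and
   * the actual relabel code. */
  int lockMetadata(const char *path);
  int unlockMetadata(const char *path);
  int relabelPath(const char *path);

  /* Illustrative sketch only -- not libvirt's real qemu_security.c. */
  static pthread_mutex_t transactionLock = PTHREAD_MUTEX_INITIALIZER;

  int
  relabelWithMetadataLock(const char *path)
  {
      int ret = -1;

      pthread_mutex_lock(&transactionLock);

      if (lockMetadata(path) < 0)
          goto cleanup;

      ret = relabelPath(path);
      unlockMetadata(path);

   cleanup:
      pthread_mutex_unlock(&transactionLock);
      return ret;
  }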

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
