[Libguestfs] Proposed changes for OpenStack

Matthew Booth mbooth at redhat.com
Wed Dec 14 17:07:04 UTC 2011


On 12/14/2011 04:40 PM, Richard W.M. Jones wrote:
> On Wed, Dec 14, 2011 at 04:03:16PM +0000, Matthew Booth wrote:
>> On 12/14/2011 03:13 PM, Richard W.M. Jones wrote:
>>> On Wed, Dec 14, 2011 at 02:17:05PM +0000, Matthew Booth wrote:
>>>> I still don't see why we need the guestfish event callbacks.
>>>
>>> because ...
>>>
>>>>> The guestfish change is only for us to do testing.
>>
>> Can't the test be written in another language?
>
> Yes we could.
>
> On the other hand, mount-local (in guestfish) wouldn't
> be very useful without some way to access the event API (whether
> that method is using shell scripts or something else).  eg:
>
>    ><fs>  mount-local /mnt options:allow_root
>    [ guestfish doesn't return to the prompt ]
>
> /mnt isn't usable until some short but undefined time later on.
>
> With the event API bindings it'd look like:
>
>    ><fs>  event foo mounted "echo READY"
>    ><fs>  mount-local /mnt options:allow_root
>    [short pause]
>    READY
>
> When 'READY' is printed, the mount point is ready to use.

This is really a symptom of mount-local being a poor fit for the guestfs 
API. Note that no other call in the API requires a callback before it is 
usable.

> [...]
>
>> I just read your original python a little more closely:
>>
>>   g.set_event_callback (callback, guestfs.EVENT_MOUNTED)
>>   g.mount_local ("/tmp/dir")
>>   # In the event callback, start a thread which:
>>   # - injects files
>>   # - modifies network config
>>   # - finally calls g.umount_local ()
>>
>> The problem with this is that if the event callback does anything to
>> the newly mounted filesystem without a new thread, it'll deadlock.
>
> Right; this is a general and well-known limitation of the libguestfs
> API -- you mustn't, except in limited situations, call the same handle
> from multiple threads.
> http://libguestfs.org/guestfs.3.html#multiple_handles_and_multiple_threads
>
>> And even if you require this, you're opening a can of worms because
>> you'll then have to make the API threadsafe so you can call
>> g.umount_local() from the thread.
>
> No, because umount-local will be a special case.  Essentially it'll
> just call 'fusermount -u' so there are no thread-safety issues AFAIK.
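
To make the deadlock point concrete, here's roughly what a caller would
have to write with the proposed API. EVENT_MOUNTED, mount_local and
umount_local are the names from your snippet above and don't exist in the
current bindings, so treat this as a sketch of the proposal rather than
working code:

import threading
import guestfs

def manipulate(g, mnt):
    # Must run in its own thread: touching the mounted filesystem from
    # the event callback itself would deadlock the handle.
    with open(mnt + '/etc/motd', 'w') as f:
        f.write('injected\n')
    # ... modify network config under mnt ...
    g.umount_local()          # assumed to be safe from another thread

def mounted_cb(event, event_handle, buf, array):
    threading.Thread(target=manipulate, args=(g, '/tmp/dir')).start()

g = guestfs.GuestFS()
g.add_drive_opts('disk.img', format='raw')
g.launch()
g.mount('/dev/sda1', '/')
g.set_event_callback(mounted_cb, guestfs.EVENT_MOUNTED)  # proposed event
g.mount_local('/tmp/dir')  # proposed call; blocks serving FUSE requests

That's a lot of machinery just to copy some files into the guest.
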
>
>> And please don't forget that I'm also opposed to this in principle,
>> as mounting filesystems on the host is not the business of the
>> guestfs API.
>>
>> How about this instead:
>>
>> g.launch()
>> g.mount(...)
>> g.foo()
>> g.bar()
>>
>> shell("
>>    guestmount -c /path/to/existing/unix/sock /tmp/dir
>>    ... manipulate /tmp/dir ...
>>    umount /tmp/dir
>> ")
>>
>> g.close()
>>
>> If you wanted to remove the requirement to exec guestmount, you
>> could solve that in *guestmount* by exposing it as a library.
>
> I thought about this too, but it's pretty ugly as well, isn't it?
> There are two problems (at least) ...
>
> (1) Is the protocol stable enough that we should do this?  I've been
> very cautious about libguestfs live so far, for exactly this reason.

The protocol is supposed to be stable, and I don't think it's been 
modified in quite a while.

> (2) How do we interleave protocol requests that could be coming from
> both guestmount and the library?  For example, where should asynch
> events get routed?  The protocol isn't really designed for this unless
> there's a "g.give_up_connection()" call.

You wouldn't interleave commands at all. This would require only a minimal 
change in the daemon: select() on the sockets at the top level of the main 
loop, then handle each received request to completion before reading the 
next one.
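
Something like this, in outline (Python just to show the shape; the daemon
is C, the socket names and helper functions here are made up, and the real
framing is the existing RPC protocol):

import select

def main_loop(library_conn, guestmount_conn):
    # select() over both connections at the top of the loop, then run
    # one request all the way to completion before looking at the
    # sockets again, so requests from the two clients never interleave.
    conns = [library_conn, guestmount_conn]
    while True:
        readable, _, _ = select.select(conns, [], [])
        conn = readable[0]
        request = read_one_request(conn)   # hypothetical helper
        reply = dispatch(request)          # hypothetical helper
        send_reply(conn, reply)            # hypothetical helper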

> Actually I think this is pretty ugly compared to adding a mount-local
> API.

Let me rewrite it with a guestmount API:

g.launch()
g.mount(...)
g.foo()
g.bar()

guestmount_mount(g.sock_path(), '/tmp/dir')
...filesystem stuff...
guestmount_umount('/tmp/dir')

g.close()

Note that there are no callbacks, no threads and no races. The code can 
be written sequentially. Also, we're not bringing FUSE into the main API.
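
Obviously guestmount_mount() and guestmount_umount() don't exist yet.
Until guestmount is exposed as a library, a first cut could be a thin
wrapper around the existing binaries, along these lines (the '-c <socket>'
option is the one from my example above, not an existing guestmount flag):

import subprocess

def guestmount_mount(sock_path, mountpoint):
    # Hypothetical: assumes guestmount grows a '-c <socket>' option to
    # attach to an already-running appliance.
    subprocess.check_call(['guestmount', '-c', sock_path, mountpoint])

def guestmount_umount(mountpoint):
    # FUSE filesystems are unmounted with 'fusermount -u'.
    subprocess.check_call(['fusermount', '-u', mountpoint])

Either way the caller's code stays sequential.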

Matt
-- 
Matthew Booth, RHCA, RHCSS
Red Hat Engineering, Virtualisation Team

GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490



