[Libguestfs] More parallelism in VDDK driver (was: Re: CFME-5.11.7.3 Perf. Tests)

Nir Soffer nsoffer at redhat.com
Wed Aug 5 13:49:04 UTC 2020


On Wed, Aug 5, 2020 at 4:28 PM Richard W.M. Jones <rjones at redhat.com> wrote:
>
> On Wed, Aug 05, 2020 at 03:40:43PM +0300, Nir Soffer wrote:
> > On Wed, Aug 5, 2020 at 2:58 PM Richard W.M. Jones <rjones at redhat.com> wrote:
> > >
> > > On Wed, Aug 05, 2020 at 02:39:44PM +0300, Nir Soffer wrote:
> > > > Can we use something like the file plugin? thread pool of workers,
> > > > each keeping open vddk handle, and serving requests in parallel from
> > > > the same nbd socket?
> > >
> > > Yes, but this isn't implemented in the plugins, it's implemented in
> > > the server.  The server always uses a thread pool, but plugins can opt
> > > for more or less concurrency by adjusting the thread model:
> > >
> > >   http://libguestfs.org/nbdkit-plugin.3.html#Threads
> > >
> > > The file plugin uses PARALLEL:
> > >
> > >   $ nbdkit file --dump-plugin | grep thread
> > >   max_thread_model=parallel
> > >   thread_model=parallel
> > >
> > > The VDDK plugin currently uses SERIALIZE_ALL_REQUESTS:
> > >
> > >   $ nbdkit vddk --dump-plugin | grep thread
> > >   max_thread_model=serialize_all_requests
> > >   thread_model=serialize_all_requests
> > >
> > > The proposal is to use SERIALIZE_REQUESTS, with an extra mutex added
> > > by the plugin around VixDiskLib_Open and _Close calls.
> >
> > I'm not sure what the difference is between SERIALIZE_REQUESTS and
> > SERIALIZE_ALL_REQUESTS,
>
> SERIALIZE_ALL_REQUESTS serializes requests across all handles.
> SERIALIZE_REQUESTS serializes requests within each handle,
> but multiple parallel requests can happen to different handles.

I see. Can we change the python plugin to support multiple connections to
imageio using SERIALIZE_REQUESTS?
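
As an aside, on the C side the change Richard describes above would
presumably look something like this sketch. It is based only on the
nbdkit-plugin page quoted above, not on the actual vddk plugin code, and
do_vddk_open is a hypothetical helper wrapping VixDiskLib_Open:

  #include <pthread.h>
  #include <nbdkit-plugin.h>

  /* Requests on different handles may run in parallel; requests within
   * one handle stay serialized.
   */
  #define THREAD_MODEL NBDKIT_THREAD_MODEL_SERIALIZE_REQUESTS

  /* Serialize VixDiskLib_Open/_Close with a plugin-level mutex, since
   * they are assumed not to be safe to call concurrently.
   */
  static pthread_mutex_t open_close_lock = PTHREAD_MUTEX_INITIALIZER;

  static void *
  vddk_open (int readonly)
  {
    void *h;

    pthread_mutex_lock (&open_close_lock);
    h = do_vddk_open (readonly);      /* hypothetical helper */
    pthread_mutex_unlock (&open_close_lock);
    return h;
  }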

The GIL should not limit us, since the GIL is released when writing to the
imageio socket, and that is likely where the plugin spends most of its time.

I'm not sure what triggers using multiple connections in qemu-img, or how
it decides how many connections to use, but we report the maximum number
of writers in the OPTIONS response:
http://ovirt.github.io/ovirt-imageio/images.html#max_writers

There is a hard limit in vdsm, because it runs qemu-nbd with --shared=8,
so you should not use more than 8 connections; additional connections will
just block on qemu-nbd forever.

We use 4 connections by default, giving about a 100% speedup compared with
one connection; 2 connections give about an 80% speedup.

If the number of connections is related to the number of coroutines, you can use
-m 4 to use 4 coroutines.

Using -W will improve performance. In this mode every coroutine does its
I/O as soon as it is ready, instead of waiting for the other coroutines so
that the I/O is submitted in the right order.
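
For example, something like this (illustrative command line; the exact
destination URL depends on how the NBD server is run):

  $ qemu-img convert -p -n -O raw -m 4 -W disk.qcow2 nbd:unix:/tmp/nbd.sock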

> > but it sounds to me like we need PARALLEL.
> >
> > With parallel we will have multiple threads using the same
> > vddk_handle, which can be thread safe since we control this struct.
> >
> > The struct can be:
> >
> > struct vddk_item {
> >   VixDiskLibConnectParams *params; /* connection parameters */
> >   VixDiskLibConnection connection; /* connection */
> >   VixDiskLibHandle handle;         /* disk handle */
> >   struct vddk_item *next;          /* next item in the list */
> > };
> >
> > struct vddk_handle {
> >   struct vddk_item *pool;
> >   pthread_mutex_t *mutex;
> > };
> >
> > open() will initialize the pool of vddk_item.
> >
> > pread() will:
> > - lock the mutex
> > - take an item from the pool
> > - unlock the mutex
> > - perform a single request
> > - lock the mutex
> > - return the item to the pool
> > - unlock the mutex
> >
> > Since we don't need a lot of connections, and most of the time is spent
> > waiting on I/O, the time to lock/unlock the pool should not be
> > significant.
> >
> > The server thread pool should probably be the same size as the item
> > pool, so there is always a free item in the pool for every thread.
> >
> > Or maybe something simpler: every thread creates a vddk_item and keeps
> > it in thread-local storage, so no locking is required (except maybe for
> > open and close).
> >
> > Do you see any reason why this will not work?
>
> It sounds plausible, but since VDDK itself seems to be serialized
> internally (or more probably in the server) I doubt you'll see much
> performance improvement.

If we get better performance with multiple connections, we are likely to get
the same performance using multiple threads, but it seems that using multiple
connections is much simpler.
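
To make the pool idea above concrete, here is a rough, untested sketch of
the pread() path (the signature is simplified, and read_from_vddk is a
hypothetical helper wrapping the VixDiskLib read call):

  static int
  vddk_pread (void *handle, void *buf, uint32_t count, uint64_t offset)
  {
    struct vddk_handle *h = handle;
    struct vddk_item *item;
    int r;

    /* Take a free item (an open VDDK connection) from the pool.  This
     * assumes the pool has at least as many items as server threads,
     * as suggested above, so it is never empty here.
     */
    pthread_mutex_lock (h->mutex);
    item = h->pool;
    h->pool = item->next;
    pthread_mutex_unlock (h->mutex);

    /* Do the read outside the lock, so other threads can use other
     * items from the pool in parallel.
     */
    r = read_from_vddk (item->handle, buf, count, offset);  /* hypothetical helper */

    /* Return the item to the pool. */
    pthread_mutex_lock (h->mutex);
    item->next = h->pool;
    h->pool = item;
    pthread_mutex_unlock (h->mutex);

    return r;
  }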

>
> Rich.
>
> --
> Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> libguestfs lets you edit virtual machines.  Supports shell scripting,
> bindings from many languages.  http://libguestfs.org
>