[Libguestfs] More parallelism in VDDK driver (was: Re: CFME-5.11.7.3 Perf. Tests)

Nir Soffer nsoffer at redhat.com
Wed Aug 5 12:40:43 UTC 2020


On Wed, Aug 5, 2020 at 2:58 PM Richard W.M. Jones <rjones at redhat.com> wrote:
>
> On Wed, Aug 05, 2020 at 02:39:44PM +0300, Nir Soffer wrote:
> > Can we use something like the file plugin? thread pool of workers,
> > each keeping open vddk handle, and serving requests in parallel from
> > the same nbd socket?
>
> Yes, but this isn't implemented in the plugins, it's implemented in
> the server.  The server always uses a thread pool, but plugins can opt
> for more or less concurrency by adjusting the thread model:
>
>   http://libguestfs.org/nbdkit-plugin.3.html#Threads
>
> The file plugin uses PARALLEL:
>
>   $ nbdkit file --dump-plugin | grep thread
>   max_thread_model=parallel
>   thread_model=parallel
>
> The VDDK plugin currently uses SERIALIZE_ALL_REQUESTS:
>
>   $ nbdkit vddk --dump-plugin | grep thread
>   max_thread_model=serialize_all_requests
>   thread_model=serialize_all_requests
>
> The proposal is to use SERIALIZE_REQUESTS, with an extra mutex added
> by the plugin around VixDiskLib_Open and _Close calls.

I'm not sure what the difference is between SERIALIZE_REQUESTS and
SERIALIZE_ALL_REQUESTS, but it sounds to me like we need PARALLEL.
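
For reference, opting in to that model is the standard one-line
declaration in an nbdkit C plugin (the constant comes from
nbdkit-plugin.h):

/* Declare the thread model in the plugin source; the server will then
 * call into the plugin from multiple threads concurrently. */
#define THREAD_MODEL NBDKIT_THREAD_MODEL_PARALLEL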

With PARALLEL we will have multiple threads using the same vddk_handle,
which can be made thread safe since we control this struct.

The structs can be:

struct vddk_item {
  VixDiskLibConnectParams *params; /* connection parameters */
  VixDiskLibConnection connection; /* VDDK connection */
  VixDiskLibHandle handle;         /* open disk handle */
  struct vddk_item *next;          /* next item in the pool */
};

struct vddk_handle {
  struct vddk_item *pool;          /* free list of vddk_item */
  pthread_mutex_t *mutex;          /* protects the pool */
};

open() will initialize the pool of vddk_item structs.
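
A minimal sketch of such an open(), assuming a hypothetical POOL_SIZE
and a hypothetical helper vddk_connect_and_open() that wraps the
plugin's existing VixDiskLib_ConnectEx/VixDiskLib_Open code and fills
in one vddk_item:

#include <stdlib.h>
#include <pthread.h>

/* Hypothetical helper: builds the connect params from the plugin's
 * configuration, connects, opens the disk and returns a new vddk_item,
 * or NULL on error. */
extern struct vddk_item *vddk_connect_and_open (void);

#define POOL_SIZE 8   /* assumed; could become a plugin parameter */

static void *
vddk_open (int readonly)
{
  struct vddk_handle *h;
  size_t i;

  /* readonly would be passed down to VixDiskLib_Open in real code. */
  h = calloc (1, sizeof *h);
  if (h == NULL)
    return NULL;

  h->mutex = malloc (sizeof *h->mutex);
  if (h->mutex == NULL) {
    free (h);
    return NULL;
  }
  pthread_mutex_init (h->mutex, NULL);

  for (i = 0; i < POOL_SIZE; i++) {
    struct vddk_item *item = vddk_connect_and_open ();
    if (item == NULL) {
      /* Real code would also close and free the items created so far. */
      free (h->mutex);
      free (h);
      return NULL;
    }
    item->next = h->pool;
    h->pool = item;
  }

  return h;
}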

pread() will:
- lock the mutex
- take an item from the pool
- unlock the mutex
- perform a single request
- lock the mutex
- return the item to the pool
- unlock the mutex
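
Putting those steps together, a rough sketch of pread() (assuming
sector-aligned requests, and that the pool can never be empty because
the server thread pool is not larger than it):

static int
vddk_pread (void *handle, void *buf, uint32_t count, uint64_t offset)
{
  struct vddk_handle *h = handle;
  struct vddk_item *item;
  VixError err;

  /* Take an item from the pool. */
  pthread_mutex_lock (h->mutex);
  item = h->pool;
  h->pool = item->next;
  pthread_mutex_unlock (h->mutex);

  /* Perform a single request on this item's open disk handle. */
  err = VixDiskLib_Read (item->handle,
                         offset / VIXDISKLIB_SECTOR_SIZE,
                         count / VIXDISKLIB_SECTOR_SIZE,
                         buf);

  /* Return the item to the pool. */
  pthread_mutex_lock (h->mutex);
  item->next = h->pool;
  h->pool = item;
  pthread_mutex_unlock (h->mutex);

  return err == VIX_OK ? 0 : -1;
}

pwrite() and the other data callbacks would follow the same pattern,
and the real error path would also call nbdkit_error().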

Since we don't need a lot of connections, and most of the time is spent
waiting on I/O, the time spent locking and unlocking the pool should
not be significant.

The server thread pool should probably be the same size as the item
pool, so there is always a free item in the pool for every thread.

Or maybe something simpler: every thread creates its own vddk_item and
keeps it in thread-local storage, so no locking is required (except
maybe around open and close).
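
For example, only as a sketch, using GCC-style __thread and the same
hypothetical helper as above:

/* One item per server thread, created lazily on first use.  Only the
 * VixDiskLib_Open/_Close inside vddk_connect_and_open() would still
 * need a global mutex. */
static __thread struct vddk_item *my_item;

static struct vddk_item *
get_thread_item (void)
{
  if (my_item == NULL)
    my_item = vddk_connect_and_open ();
  return my_item;
}

pread() and friends would then use get_thread_item ()->handle directly,
with no per-request locking.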

Do you see any reason why this will not work?

>  PARALLEL is not possible.
>
> > This is kind of ugly but simple, and it works great for the file
> > plugin - we get better
> > performance than qemu-nbd.
> >
> > But since we get low throughput even when we have 10 concurrent
> > handles for 10 different disks, I'm sure this will help, and the
> > issue may be deeper in vmware. Maybe they intentionally throttle the
> > clients?
>
> The whole server side seems very heavyweight, judging by how long it
> takes to answer single requests.  It might just be poor implementation
> rather than throttling though.
>
> Rich.
>
> --
> Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> Fedora Windows cross-compiler. Compile Windows programs, test, and
> build Windows installers. Over 100 libraries supported.
> http://fedoraproject.org/wiki/MinGW
>
