[Libguestfs] More parallelism in VDDK driver

Eric Blake eblake at redhat.com
Wed Aug 5 14:38:34 UTC 2020


On 8/5/20 9:10 AM, Richard W.M. Jones wrote:
> On Wed, Aug 05, 2020 at 04:49:04PM +0300, Nir Soffer wrote:
>> I see, can change the python plugin to support multiple connections to imageio
>> using SERIALIZE_REQUESTS?
>>
>> The GIL should not limit us since the GIL is released when you write to
>> the imageio socket, and this is likely where the plugin spends most of the time.
> 
> It's an interesting question and one I'd not really considered at all.
> Does the Python GIL actively mutex different threads if they call into
> Python code at the same time?  If it's truly a lock, then it should,
> in which case it should be safe to change the Python plugin to
> PARALLEL ...
> 
> I'll try it out and get back to you.

Yeah, I would not be surprised if we could make the Python plugin more 
performant, but matching our glue code to the Python documentation on 
embedding Python in C is not trivial, so I haven't spent the time trying.
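To illustrate Nir's point: CPython releases the GIL around blocking I/O calls (including socket sends), so threads that spend most of their time in such calls can genuinely overlap. A minimal sketch, using time.sleep as a stand-in for a blocking write to the imageio socket:

```python
import threading
import time

def blocking_io():
    # Stand-in for a blocking socket write; CPython releases the
    # GIL around time.sleep, just as it does around socket sends,
    # so the other threads keep running meanwhile.
    time.sleep(0.1)

start = time.monotonic()
threads = [threading.Thread(target=blocking_io) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start
# elapsed is close to 0.1 s rather than 0.4 s: the four "writes" overlapped
```

If the plugin callbacks instead spent their time in pure-Python computation, the GIL would serialize them and PARALLEL would buy little.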


> Also NBD lets you multiplex commands on a single connection (which
> does not require multi-conn or --shared).
> 
> BTW I found that multi-conn is a big win with the Linux kernel NBD
> client.
> 
>> We use 4 connections by default, giving about 100% speed up compared
>> with one connection. 2 connections give about 80% speed up.  If the
>> number of connections is related to the number of coroutines, you
>> can use -m 4 to use 4 coroutines.
>>
>> Using -W will improve performance. In this mode every coroutine will
>> do the I/O when it is ready, instead of waiting for other coroutines
>> and submit the I/O in the right order.
> 
> I think Eric might have a better idea about what -m and -W really do
> for qemu NBD client.  Maybe improve multiplexing?  They don't enable
> multi-conn :-(

Correct.  Using -W doesn't make sense without -m (if you only have one 
worker, you might as well proceed linearly rather than trying to 
randomize access), but even when you have multiple threads, there are 
cases where linear operations are still useful, such as 'nbdkit 
streaming'.  But -m is definitely the knob that controls how many 
outstanding I/O requests qemu-img is willing to use; and once you are 
using -m, using -W makes life easier for those coroutines to stay 
active.  The default -m1 says that at most one request is outstanding, 
so parallelism in the server is not utilized.  With higher -m, qemu-img 
issues up to that many requests without waiting for server answers, but 
all on the same NBD connection.  Ideally, you'll get maximum performance 
when 'qemu-img -m' and 'nbdkit --threads' use the same value; if either 
side permits fewer in-flight operations than the other, that side has 
the potential to become a bottleneck.  Right now, nbdkit defaults to 16 
threads (that is, up to 16 in-flight operations) for any PARALLEL plugin.
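The effect of the in-flight cap can be sketched with a toy model: a client that allows at most max_in_flight outstanding requests against a server with a fixed per-request service time. The semaphore plays the role of qemu-img's -m; everything else here is illustrative, not a real qemu or nbdkit API.

```python
import threading
import time

def transfer(total_requests, max_in_flight, service_time=0.02):
    """Simulate issuing total_requests requests while never keeping
    more than max_in_flight of them outstanding at once."""
    slots = threading.Semaphore(max_in_flight)  # the "-m" knob
    threads = []
    start = time.monotonic()

    def one_request():
        time.sleep(service_time)  # server handling the request
        slots.release()           # frees a slot for the next request

    for _ in range(total_requests):
        slots.acquire()           # block until a slot is free
        t = threading.Thread(target=one_request)
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    return time.monotonic() - start

serial = transfer(16, 1)    # like the default -m1
parallel = transfer(16, 4)  # like -m 4
```

With one slot the transfer takes roughly 16 service times; with four slots, roughly a quarter of that, which matches the ~100% speedup Nir reports going from one to multiple connections.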

And someday, I'd love to improve nbdkit's PARALLEL mode to make its 
thread-pool more of an on-demand setup (right now, we pre-create all 16 
threads up front, even if the client never reaches 16 in-flight 
operations at once, which is a bit wasteful), but other than potential 
performance improvements, it should be a transparent change to both 
plugins and clients.
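As a rough sketch of what such an on-demand pool might look like (a purely hypothetical illustration, not nbdkit's actual code): spawn a worker thread only when a task arrives and no idle worker is available, up to the cap.

```python
import queue
import threading

class LazyPool:
    """Thread pool that creates workers on demand, up to max_workers,
    instead of pre-creating them all up front."""

    def __init__(self, max_workers=16):
        self.max_workers = max_workers
        self.tasks = queue.Queue()
        self.lock = threading.Lock()
        self.workers = 0   # threads created so far
        self.idle = 0      # threads currently waiting for a task

    def submit(self, fn, *args):
        self.tasks.put((fn, args))
        with self.lock:
            # Only spawn a new worker if nobody idle can take the
            # task and we are still under the cap.
            if self.idle == 0 and self.workers < self.max_workers:
                self.workers += 1
                threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            with self.lock:
                self.idle += 1
            fn, args = self.tasks.get()   # blocks while idle
            with self.lock:
                self.idle -= 1
            fn(*args)
```

A client that never has many requests in flight would then only ever cause one or two threads to be created, while a fully parallel client still ramps up to the cap; either way, plugins just see their callbacks invoked from pool threads as before.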

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org
