[Libguestfs] [PATCH virt-v2v] v2v: -o rhv-upload: Enable multi-conn

Richard W.M. Jones rjones at redhat.com
Mon Aug 2 12:50:05 UTC 2021

On Mon, Aug 02, 2021 at 03:35:36PM +0300, Nir Soffer wrote:
> I'm not sure how multi_conn works in the python plugin - do we get
> one open() call or 4 open() calls?

The NBD client is supposed to:

 - make a single NBD connection

 - test the multi-conn flag on that connection

 - if it is true and the client wants to proceed with multi-conn,
   make N-1 further NBD connections

 - each NBD connection operates separately, except for the multi-conn
   guarantees related to accurate flushing and write tearing:

   * if you flush or FUA on one connection, before you return on that
     connection, any caches in the other connections must also be
     flushed


We assume that nbdcopy:

 - correctly implements the NBD protocol and multi-conn (or if it
   doesn't, it's a bug!)

 - makes large enough requests that write tearing is unlikely to
   be an issue

 - doesn't use FUA

 - issues separate flush calls on every connection at the end

So there will be multiple open() calls in the Python plugin.
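The client-side logic above can be sketched in plain Python.  This is
only an illustration, not the real libnbd or nbdcopy code: connect()
and the handle's can_multi_conn()/flush() methods are hypothetical
stand-ins.

```python
# Sketch of the multi-conn negotiation an NBD client is expected to do.
# connect() is a hypothetical factory returning an NBD handle; the
# handle's can_multi_conn() and flush() methods are stand-ins too.

def open_connections(connect, n):
    """Open up to n NBD connections, honouring the multi-conn flag."""
    first = connect()                    # always start with one connection
    if n > 1 and first.can_multi_conn():
        # The server promises cross-connection flush consistency, so
        # it is safe to open n-1 further connections.
        return [first] + [connect() for _ in range(n - 1)]
    return [first]                       # multi-conn not offered: stay at one

def finish(conns):
    """At the end of the copy, flush every connection separately (no FUA)."""
    for c in conns:
        c.flush()
```

Each open_connections() call maps to one or more open() calls in the
plugin, which is why the plugin sees multiple open()s.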

> If we get 4 open() calls, this cannot work since we mix control flow
> and data flow in the rhv plugin. Each open call will try to create a
> new disk and every connection will upload to a different disk.


I did try it now.  It's actually slower :-(  But I didn't change how
HTTP pools worked.

However it didn't appear to create multiple disks, unless it only
attached one and the others are hidden somehow.  I'll PM you the admin
details for my RHV cluster so you can take a look.

It has to be said that this assumption that open() creates the disk is
not a good one.  It's certainly not guaranteed that even with "qemu-img
convert" only a single NBD connection will ever be opened.  It's quite
valid for an NBD client to open a connection, query properties about
it, and then perhaps close it and open another one.  So I guess we
have to fix this anyway.
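One possible shape of that fix, as a minimal self-contained sketch
(create_disk() and HttpConnection are invented stand-ins, not the real
rhv-upload or RHV SDK code): do the disk creation once, before any
client connects, so open() only sets up per-connection data flow.

```python
# Sketch: move disk creation (control flow) out of open() so that
# multiple open() calls all upload to one shared disk.

created_disks = []

def create_disk():
    created_disks.append("disk")         # pretend RHV "create disk" API call
    return len(created_disks) - 1        # disk id

class HttpConnection:
    def __init__(self, disk_id):
        self.disk_id = disk_id           # all writes target this disk

disk_id = None

def config_complete():
    # nbdkit calls this exactly once, before any client connects,
    # so the control-flow step happens here.
    global disk_id
    disk_id = create_disk()

def open(readonly):
    # nbdkit calls this once per NBD connection; it only sets up
    # data-flow state and never creates a new disk.
    return HttpConnection(disk_id)
```

With this split, four multi-conn open() calls produce four HTTP
connections but still only one disk.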

> multi_conn can work if we have:
> - single open() call
> - many write/zero/flush calls
> - single close() call
> If we have:
> - one open() per nbd connection
> - many write/zero calls per connection
> - one close() per nbd connection
> then we need to separate the control flow stuff (e.g. create disk)
> from the plugin, and do it in another step of the process.
> > Naturally I've not actually tested any of this.  There are some
> > advantages from the nbdcopy side of things, especially because it is
> > able to query extents on the input side in parallel which seems to be
> > advantageous for inputs with slow extent querying (like VDDK).  There
> > may be further advantages on the output side because it would allow us
> > to write data to the RHV upload plugin over four sockets, allowing
> > better use of multiple cores.
> It may work better when using HTTPS, the TLS part is implemented in C
> and can run in parallel.

Yes, or even with kTLS it may run in multiple kernel threads.


Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
