[Libguestfs] [PATCH 4/6] v2v: rhv-upload-plugin: Support multiple connections

Nir Soffer nsoffer at redhat.com
Sat Jan 23 15:18:30 UTC 2021


On Sat, Jan 23, 2021 at 2:48 PM Richard W.M. Jones <rjones at redhat.com> wrote:
>
> On Sat, Jan 23, 2021 at 06:38:10AM +0000, Richard W.M. Jones wrote:
> > On Sat, Jan 23, 2021 at 12:45:22AM +0200, Nir Soffer wrote:
> > > Use multiple connections to the imageio server to speed up the transfer.
> > >
> > > Connections are managed via a thread-safe queue. Threads remove a
> > > connection from the queue for every request and put it back at the
> > > end of the request, so only one thread can access a connection at a
> > > time.
> > >
> > > Threads access existing values in the handle dict, like h["path"].
> > > They may also modify h["failed"] on errors. These operations are
> > > thread safe and do not require additional locking.
> > >
> > > Sending a flush request is trickier; on the imageio side we have
> > > one qemu-nbd server with multiple connections. I'm not sure whether
> > > sending one flush command on one of the connections is enough to
> > > flush all commands, so we send a flush command on every connection
> > > in the flush callback.
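
(As a rough sketch, the queue-based pooling and the flush-on-all-connections
logic described above could look like this; create_connection(),
send_write_request() and send_flush_request() are hypothetical stand-ins
for the plugin's real HTTP code:)

  import queue

  POOL_SIZE = 4  # number of imageio connections

  def create_pool(url):
      # Fill a thread safe queue with open connections; threads take a
      # connection per request and return it when done.
      pool = queue.Queue()
      for _ in range(POOL_SIZE):
          pool.put(create_connection(url))  # hypothetical helper
      return pool

  def pwrite(h, buf, offset):
      conn = h["pool"].get()  # no other thread can use conn now
      try:
          send_write_request(conn, buf, offset)  # hypothetical helper
      finally:
          h["pool"].put(conn)

  def flush(h):
      # We don't know if one flush covers all connections, so drain the
      # queue and flush every connection.
      conns = [h["pool"].get() for _ in range(POOL_SIZE)]
      try:
          for conn in conns:
              send_flush_request(conn)  # hypothetical helper
      finally:
          for conn in conns:
              h["pool"].put(conn)
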
> >
> > I know the answer to this!  It depends on whether the NBD server
> > advertises multi-conn.  With libnbd you can find out by querying
> > nbd_can_multi_conn on any of the connections (if the server is
> > behaving, the answer should be identical for every connection with
> > the same exportname).  See:
> >
> >   https://github.com/NetworkBlockDevice/nbd/blob/master/doc/proto.md
> >
> >   "bit 8, NBD_FLAG_CAN_MULTI_CONN: Indicates that the server operates
> >   entirely without cache, or that the cache it uses is shared among
> >   all connections to the given device. In particular, if this flag is
> >   present, then the effects of NBD_CMD_FLUSH and NBD_CMD_FLAG_FUA MUST
> >   be visible across all connections when the server sends its reply to
> >   that command to the client. In the absence of this flag, clients
> >   SHOULD NOT multiplex their commands over more than one connection to
> >   the export."
> >
> > For unclear reasons qemu-nbd only advertises multi-conn for r/o
> > connections, assuming my reading of the code is correct.  For nbdkit
> > we went through the plugins a long time ago and made them advertise
> > (or not) multi-conn as appropriate.
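
(For reference, querying this from Python with the libnbd binding is
short; the socket path here matches the qemu-nbd example below:)

  import nbd

  h = nbd.NBD()
  h.connect_uri("nbd+unix://?socket=/tmp/sock")
  # True if NBD_FLAG_CAN_MULTI_CONN was advertised for this export.
  print(h.can_multi_conn())
  h.shutdown()
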
>
>   $ truncate -s 10M /tmp/disk.img
>   $ rm -f /tmp/sock
>   $ qemu-nbd -k /tmp/sock --format=raw /tmp/disk.img & pid=$!
>   $ nbdinfo --json 'nbd+unix://?socket=/tmp/sock' | jq '.exports[0].can_multi_conn'
>   false
>   $ kill $pid
>
> Adding the --read-only option doesn't change it:
>
>   false
>
> Adding --shared=2 --read-only:
>
>   true
>
> For comparison:
>
>   $ nbdkit file /tmp/disk.img --run 'nbdinfo --json "$uri"' | jq '.exports[0].can_multi_conn'
>   true
>
>   $ nbdkit memory 1G --run 'nbdinfo --json "$uri"' | jq '.exports[0].can_multi_conn'
>   true

I discussed this with Eric in the past, and the conclusion was that it
is OK to use multiple connections as long as clients write to distinct
areas:
https://lists.nongnu.org/archive/html/qemu-block/2019-08/msg00917.html
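
(To illustrate what "distinct areas" means, here is a minimal sketch in
which each worker thread owns a non-overlapping range of the image and
its own connection; src.read() and send_write_request() are hypothetical:)

  import threading

  REQUEST_SIZE = 8 * 1024 * 1024  # example request size

  def copy_range(conn, src, start, end):
      # This worker writes only inside [start, end), so no two
      # connections ever touch the same area of the image.
      for offset in range(start, end, REQUEST_SIZE):
          length = min(REQUEST_SIZE, end - offset)
          send_write_request(conn, src.read(offset, length), offset)

  def copy_image(conns, src, size):
      step = size // len(conns)
      workers = []
      for i, conn in enumerate(conns):
          start = i * step
          end = size if i == len(conns) - 1 else start + step
          t = threading.Thread(target=copy_range,
                               args=(conn, src, start, end))
          t.start()
          workers.append(t)
      for t in workers:
          t.join()
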

We always use --cache=none and --aio=native in qemu-nbd, so practically
there is no cache involved on the host running qemu-nbd, but this is
probably not enough for qemu-nbd to report multi-conn.



