[Libguestfs] Two multi-conn questions

Richard W.M. Jones rjones at redhat.com
Thu Feb 2 21:00:23 UTC 2023


On Thu, Feb 02, 2023 at 12:54:16PM -0600, Eric Blake wrote:
> On Thu, Feb 02, 2023 at 04:26:04PM +0000, Richard W.M. Jones wrote:
> ...
> > > > $ time nbdkit -r curl https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img --filter=multi-conn multi-conn-mode=unsafe timeout=2000 --run ' nbdcopy --no-extents -p $uri jammy-server-cloudimg-amd64.img '
> > > 
> > > Yay - I'm glad my multi-conn filter makes it easier to test things
> > > like this!
> > > 
> > > Should we tweak the docs in nbdkit-multi-conn-filter(1) to mention
> > > that, although multi-conn-mode=unsafe is unsafe for a plugin that
> > > lacks consistency, it is useful on a plugin where we suspect
> > > consistency is available, as a way to run timing tests and see
> > > whether multi-conn actually makes a difference?
> > 
> > Yes, sounds good.
> 
> How about:
> 
> diff --git i/filters/multi-conn/nbdkit-multi-conn-filter.pod w/filters/multi-conn/nbdkit-multi-conn-filter.pod
> index 87b31692..7f70ade9 100644
> --- i/filters/multi-conn/nbdkit-multi-conn-filter.pod
> +++ w/filters/multi-conn/nbdkit-multi-conn-filter.pod
> @@ -118,7 +118,11 @@ passed on to the plugin.
>  When B<unsafe> mode is chosen, this filter blindly advertises
>  multi-conn to the client even if the plugin lacks support.  This is
>  dangerous, and risks data corruption if the client makes assumptions
> -about flush consistency that were not actually met.
> +about flush consistency that were not actually met.  However, for a
> +plugin that does not yet advertise multi-conn, but where it is
> +suspected that the plugin behaves consistently, this is a great way to
> +run timing and accuracy tests to see enabling multi-conn in the plugin

                                     ^^^ "if" (or "whether")
> +will make a difference.

ACK

>  =item B<multi-conn-track-dirty=fast>
> 
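
In case it helps anyone reading this later, a timing comparison along
those lines might look something like the following (same curl plugin
and Ubuntu image as above; the second command simply drops the filter
to get the single-connection baseline):

  $ time nbdkit -r curl https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img \
         --filter=multi-conn multi-conn-mode=unsafe timeout=2000 \
         --run 'nbdcopy --no-extents -p "$uri" jammy-server-cloudimg-amd64.img'

  $ time nbdkit -r curl https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img \
         timeout=2000 \
         --run 'nbdcopy --no-extents -p "$uri" jammy-server-cloudimg-amd64.img'
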
> > > > Is there any work on adding multi-conn support to qemu's NBD client?
> > > 
> > > Not that I'm aware of at the moment, but we now have evidence that
> > > it may prove fruitful for someone to spend time on it.
> > 
> > The Kubernetes team are complaining that this particular case (stream
> > qcow2 file from a website and convert it to raw without a temporary
> > file) is slow.  This is not a case that I've encountered before
> > because it's not relevant to virt-v2v, but it does appear to be
> > important.
> > 
> > 	- * - * -
> > 
> > I didn't want to complicate the original message with irrelevant
> > stuff, but there's something else I want to mention now.  If we just
> > use qemu's curl driver (thus eliminating NBD & nbdkit from the mix),
> > it gets even slower:
> > 
> >   $ time ~/d/qemu/build/qemu-img convert -p -W -f qcow2 'json:{ "file.readahead": 67108864, "file.driver": "http", "file.url": "http://oirase.annexia.org/tmp/jammy-server-cloudimg-amd64.qcow2", "file.timeout":2000 }' -O raw jammy-server-cloudimg-amd64.img.raw 
> >       (100.00/100%)
> > 
> >   real	3m53.923s
> >   user	0m13.751s
> >   sys	0m15.346s
> > 
> > Now this is not something I'm personally concerned about (since I've
> > long been arguing we should deprecate the qemu curl driver and use
> > nbdkit), but it's also very surprising.
> 
> I'm also not surprised that qemu's curl driver is inefficient, because
> it does not seem to be a frequently used code path.  Offloading it to
> other paths, like nbdkit, is indeed a good goal.

Yes, it may just be that the curl driver is slow.
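
For comparison, the nbdkit-based pipeline for that same case (stream
the qcow2 straight from the website and convert it to raw, with no
temporary file) would be roughly:

  $ time nbdkit -r curl http://oirase.annexia.org/tmp/jammy-server-cloudimg-amd64.qcow2 timeout=2000 \
         --run 'qemu-img convert -p -W -f qcow2 "$uri" -O raw jammy-server-cloudimg-amd64.img.raw'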

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-p2v converts physical machines to virtual machines.  Boot with a
live CD or over the network (PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v

