[Libguestfs] [PATCH 1/1] nbd/server: push pending frames after sending reply
Eric Blake
eblake at redhat.com
Fri Mar 24 20:03:23 UTC 2023
On Fri, Mar 24, 2023 at 02:41:20PM -0500, Eric Blake wrote:
> On Fri, Mar 24, 2023 at 11:47:20AM +0100, Florian Westphal wrote:
> > qemu-nbd doesn't set TCP_NODELAY on the tcp socket.
Replying to myself: WHY aren't we setting TCP_NODELAY on the socket?
>
> And surprisingly, qemu IS using corking on the client side:
> https://gitlab.com/qemu-project/qemu/-/blob/master/block/nbd.c#L525
> just not on the server side, before your patch.
Corking matters more once TCP_NODELAY is enabled. The entire reason
Nagle's algorithm exists (and is on by default unless you set
TCP_NODELAY) is that merging small piecemeal writes into larger
segments is easiest to do in one common place, for code that isn't
super-sensitive to latency or message boundaries. But once you are
at the point where corking or MSG_MORE makes a difference, you
clearly do know your message boundaries, and you will benefit from
TCP_NODELAY, at the expense of potentially more per-packet overhead
on the wire. One more code search, and I find that we already use
TCP_NODELAY in all of:
qemu client: https://gitlab.com/qemu-project/qemu/-/blob/master/nbd/client-connection.c#L143
nbdkit: https://gitlab.com/nbdkit/nbdkit/-/blob/master/server/sockets.c#L430
libnbd: https://gitlab.com/nbdkit/libnbd/-/blob/master/generator/states-connect.c#L41
so I think we _should_ be calling qio_channel_set_delay(false) for
qemu-nbd as well. That doesn't negate your patch; rather, it argues
that we can get even better performance with TCP_NODELAY also turned
on. A rough sketch of how the two combine follows.
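
For illustration, here is a minimal POSIX-level sketch of the
combination (not the actual qemu-nbd code, which goes through the
QIOChannel layer; the helper names and per-reply structure are just
assumptions for the example, TCP_CORK is Linux-only, and error
handling is omitted):

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Once per connection, e.g. right after accept(): disable Nagle so
 * a completed reply is not held back waiting for an ACK. */
static void enable_nodelay(int fd)
{
    int on = 1;
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
}

/* Once per reply: cork while queueing the pieces, then uncork to
 * push the pending frames, ideally as a single segment. */
static void send_reply(int fd, const void *hdr, size_t hdrlen,
                       const void *payload, size_t paylen)
{
    int on = 1, off = 0;

    setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
    send(fd, hdr, hdrlen, 0);
    send(fd, payload, paylen, 0);
    setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));
}

In qemu proper, the corresponding knobs are qio_channel_set_delay()
and qio_channel_set_cork() on the QIOChannel, rather than raw
setsockopt() calls.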
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org