[Virtio-fs] [PATCH v2 0/3] virtio: increase VIRTQUEUE_MAX_SIZE to 32k

Greg Kurz groug at kaod.org
Fri Oct 8 08:27:01 UTC 2021


On Fri, 8 Oct 2021 09:25:33 +0200
Greg Kurz <groug at kaod.org> wrote:

> On Thu, 7 Oct 2021 16:42:49 +0100
> Stefan Hajnoczi <stefanha at redhat.com> wrote:
> 
> > On Thu, Oct 07, 2021 at 02:51:55PM +0200, Christian Schoenebeck wrote:
> > > On Donnerstag, 7. Oktober 2021 07:23:59 CEST Stefan Hajnoczi wrote:
> > > > On Mon, Oct 04, 2021 at 09:38:00PM +0200, Christian Schoenebeck wrote:
> > > > > At the moment the maximum transfer size with virtio is limited to 4M
> > > > > (1024 * PAGE_SIZE). This series raises this limit to its maximum
> > > > > theoretical possible transfer size of 128M (32k pages) according to the
> > > > > virtio specs:
> > > > > 
> > > > > https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#
> > > > > x1-240006
> > > > Hi Christian,
> > > > I took a quick look at the code:
> > > > 
> 
> 
> Hi,
> 
> Thanks Stefan for sharing virtio expertise and helping Christian !
> 
> > > > - The Linux 9p driver restricts descriptor chains to 128 elements
> > > >   (net/9p/trans_virtio.c:VIRTQUEUE_NUM)
> > > 
> > > Yes, that's the limitation that I am about to remove (WIP); current kernel 
> > > patches:
> > > https://lore.kernel.org/netdev/cover.1632327421.git.linux_oss@crudebyte.com/
> > 
> > I haven't read the patches yet but I'm concerned that today the driver
> > is pretty well-behaved and this new patch series introduces a spec
> > violation. Not fixing existing spec violations is okay, but adding new
> > ones is a red flag. I think we need to figure out a clean solution.
> > 
> > > > - The QEMU 9pfs code passes iovecs directly to preadv(2) and will fail
> > > >   with EINVAL when called with more than IOV_MAX iovecs
> > > >   (hw/9pfs/9p.c:v9fs_read())
> > > 
> > > Hmm, which makes me wonder why I never encountered this error during testing.
> > > 
> > > Most people will use the 9p qemu 'local' fs driver backend in practice, so 
> > > that v9fs_read() call would translate for most people to this implementation 
> > > on QEMU side (hw/9p/9p-local.c):
> > > 
> > > static ssize_t local_preadv(FsContext *ctx, V9fsFidOpenState *fs,
> > >                             const struct iovec *iov,
> > >                             int iovcnt, off_t offset)
> > > {
> > > #ifdef CONFIG_PREADV
> > >     return preadv(fs->fd, iov, iovcnt, offset);
> > > #else
> > >     int err = lseek(fs->fd, offset, SEEK_SET);
> > >     if (err == -1) {
> > >         return err;
> > >     } else {
> > >         return readv(fs->fd, iov, iovcnt);
> > >     }
> > > #endif
> > > }
> > > 
> > > > Unless I misunderstood the code, neither side can take advantage of the
> > > > new 32k descriptor chain limit?
> > > > 
> > > > Thanks,
> > > > Stefan
> > > 
> > > I need to check that when I have some more time. One possible explanation 
> > > might be that preadv() already has this wrapped into a loop in its 
> > > implementation to circumvent a limit like IOV_MAX. It might be another "it 
> > > works, but not portable" issue, but not sure.
> > >
> > > There are still a bunch of other issues I have to resolve. If you look at
> > > net/9p/client.c on kernel side, you'll notice that it basically does this ATM
> > > 
> > >     kmalloc(msize);
> > > 
> 
> Note that this is done twice : once for the T message (client request) and once
> for the R message (server answer). The 9p driver could adjust the size of the T
> message to what's really needed instead of allocating the full msize. R message
> size is not known though.
> 
> > > for every 9p request. So not only does it allocate much more memory for every 
> > > request than actually required (i.e. say 9pfs was mounted with msize=8M, then 
> > > a 9p request that actually would just need 1k would nevertheless allocate 8M), 
> > > but also it allocates > PAGE_SIZE, which obviously may fail at any time.
> > 
> > The PAGE_SIZE limitation sounds like a kmalloc() vs vmalloc() situation.
> > 
> > I saw zerocopy code in the 9p guest driver but didn't investigate when
> > it's used. Maybe that should be used for large requests (file
> > reads/writes)?
> 
> This is the case already : zero-copy is only used for reads/writes/readdir
> if the requested size is 1k or more.
> 
> Also you'll note that in this case, the 9p driver doesn't allocate msize
> for the T/R messages but only 4k, which is largely enough to hold the
> header.
> 
> 	/*
> 	 * We allocate a inline protocol data of only 4k bytes.
> 	 * The actual content is passed in zero-copy fashion.
> 	 */
> 	req = p9_client_prepare_req(c, type, P9_ZC_HDR_SZ, fmt, ap);
> 
> and
> 
> /* size of header for zero copy read/write */
> #define P9_ZC_HDR_SZ 4096
> 
> A huge msize only makes sense for Twrite, Rread and Rreaddir because
> of the amount of data they convey. All other messages certainly fit
> in a couple of kilobytes only (sorry, don't remember the numbers).
> 
> A first change should be to allocate MIN(XXX, msize) for the
> regular non-zc case, where XXX could be a reasonable fixed
> value (8k?). 


Note that this would violate the 9p spec since the server
can legitimately use the negotiated msize for all R messages
even if all of them only need a couple of bytes in practice,
at worse a couple of kilobytes if a path is involved.

In a ideal world, this would call for a spec refinement to
special case Rread and Rreaddir, which are the only ones
where a high msize is useful AFAICT.

> In the case of T messages, it is even possible
> to adjust the size to what's exactly needed, ala snprintf(NULL).
> 
> > virtio-blk/scsi don't memcpy data into a new buffer, they
> > directly access page cache or O_DIRECT pinned pages.
> > 
> > Stefan
> 
> Cheers,
> 
> --
> Greg

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: <http://listman.redhat.com/archives/virtio-fs/attachments/20211008/d547f171/attachment.sig>


More information about the Virtio-fs mailing list