[Virtio-fs] [virtio-dev] [PATCH v2 0/2] virtio-fs: add notification queue

Stefan Hajnoczi stefanha at redhat.com
Mon May 17 10:06:37 UTC 2021


On Fri, May 14, 2021 at 01:50:06AM +0800, Liu Bo wrote:
> On Thu, May 13, 2021 at 02:44:42PM +0100, Stefan Hajnoczi wrote:
> > On Thu, May 13, 2021 at 06:36:37AM +0800, Liu Bo wrote:
> > > Hi Stefan,
> > > 
> > > On Tue, May 11, 2021 at 09:22:24AM +0100, Stefan Hajnoczi wrote:
> > > > On Mon, Feb 15, 2021 at 09:54:08AM +0000, Stefan Hajnoczi wrote:
> > > > > v2:
> > > > >  * Document empty virtqueue behavior for FUSE_NOTIFY_LOCK messages
> > > > > 
> > > > > This patch series adds the notification queue to the VIRTIO specification.
> > > > > This new virtqueue carries device->driver FUSE notify messages.  They are
> > > > > currently unused but will be necessary for file locking, which can block for an
> > > > > unbounded amount of time and therefore needs an asynchronous completion event
> > > > > instead of a request/response buffer that consumes space in the request
> > > > > virtqueue until the operation completes.
> > > > > 
> > > > > Patch 1 corrects an oversight I noticed: the file system device was not added
> > > > > to the Conformance chapter.
> > > > > 
> > > > > Stefan Hajnoczi (2):
> > > > >   virtio-fs: add file system device to Conformance chapter
> > > > >   virtio-fs: add notification queue
> > > > > 
> > > > >  conformance.tex | 23 ++++++++++++++++
> > > > >  virtio-fs.tex   | 71 ++++++++++++++++++++++++++++++++++++++++++-------
> > > > >  2 files changed, 84 insertions(+), 10 deletions(-)
> > > > 
> > > > Reminder to anyone who needs the virtio-fs notification queue: please
> > > > review this series.
> > > >
> > > 
> > > Besides using the notification queue to provide POSIX lock support,
> > > I've also managed to invalidate the dentry/inode cache with the
> > > notification queue, and it worked well.
> > 
> > Thank you!
> > 
> > Are you using dentry/inode cache invalidation to reduce the number of
> > file descriptors that virtiofsd needs to hold open, or are you using it
> > because the file system is shared by multiple systems and you want
> > stronger cache coherency?
> >
> 
> The former is one of the problems I've come across, but I worked
> around it by raising the fd limit with setrlimit() to a large enough
> number.
> 
> My scenario is that
> 
> a) a bind mount point was shared as a sub-directory of virtiofs's shared directory,
> 
> b) and I needed to umount the bind mount point on the host side, but I
> could not because virtiofsd runs with cache=always,
> 
> So basically it was used as a more precise "drop_cache".
> 
> Besides that, I'm going to run some experiments with the notification
> queue to warm up FUSE's metadata cache in order to reduce the number of
> metadata requests.  Although this might be done with eBPF as well, the
> notification queue approach seems more straightforward.

Thanks for sharing. I was curious because I shared the NFSv4
FH4_NOEXPIRE_WITH_OPEN approach in a previous virtio-fs bi-weekly call:
https://www.rfc-editor.org/rfc/rfc7530.html#section-4.2.3

This approach does not involve the notification queue. It's lazy in the
sense that the client is unaware which inodes have expired until it
accesses them. An advantage is that no host<->guest communication is
required (fewer CPU cycles), but the disadvantage is that the client may
keep expired inode/dentry cache structures allocated: it eventually
throws them away, but could not use that memory for anything else in the
meantime.

Stefan