[Virtio-fs] [PATCH 0/4] virtiofsd: multithreading preparation part 3

Vivek Goyal vgoyal at redhat.com
Thu Aug 8 12:53:20 UTC 2019


On Thu, Aug 08, 2019 at 10:53:16AM +0100, Dr. David Alan Gilbert wrote:
> * Stefan Hajnoczi (stefanha at redhat.com) wrote:
> > On Wed, Aug 07, 2019 at 04:57:15PM -0400, Vivek Goyal wrote:
> > > The kernel also serializes MAP/UNMAP on one inode. So you will need
> > > to run multiple jobs operating on different inodes to see parallel
> > > MAP/UNMAP (at least from the kernel's point of view).
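
(For illustration, a minimal fio job file along these lines would hit two
different inodes in parallel; the mount point and file names here are made
up:)

  [global]
  ioengine=psync
  rw=randread
  bs=4k
  size=1g
  runtime=30
  time_based

  [inode1]
  filename=/mnt/virtiofs/file1

  [inode2]
  filename=/mnt/virtiofs/file2
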
> > 
> > Okay, there is still room to experiment with how MAP and UNMAP are
> > handled by virtiofsd and QEMU even if the host kernel ultimately becomes
> > the bottleneck.
> > 
> > One possible optimization is to eliminate REMOVEMAPPING requests when
> > the guest driver knows a SETUPMAPPING will follow immediately.  I see
> > the following request pattern in a fio randread iodepth=64 job:
> > 
> >   unique: 995348, opcode: SETUPMAPPING (48), nodeid: 135, insize: 80, pid: 1351
> >   lo_setupmapping(ino=135, fi=0x(nil), foffset=3860856832, len=2097152, moffset=859832320, flags=0)
> >      unique: 995348, success, outsize: 16
> >   unique: 995350, opcode: REMOVEMAPPING (49), nodeid: 135, insize: 60, pid: 12
> >      unique: 995350, success, outsize: 16
> >   unique: 995352, opcode: SETUPMAPPING (48), nodeid: 135, insize: 80, pid: 1351
> >   lo_setupmapping(ino=135, fi=0x(nil), foffset=16777216, len=2097152, moffset=861929472, flags=0)
> >      unique: 995352, success, outsize: 16
> >   unique: 995354, opcode: REMOVEMAPPING (49), nodeid: 135, insize: 60, pid: 12
> >      unique: 995354, success, outsize: 16
> >   virtio_send_msg: elem 9: with 1 in desc of length 16
> >   unique: 995356, opcode: SETUPMAPPING (48), nodeid: 135, insize: 80, pid: 1351
> >   lo_setupmapping(ino=135, fi=0x(nil), foffset=383778816, len=2097152, moffset=864026624, flags=0)
> >      unique: 995356, success, outsize: 16
> >   unique: 995358, opcode: REMOVEMAPPING (49), nodeid: 135, insize: 60, pid: 12
> > 
> > The REMOVEMAPPING requests are unnecessary since we can map over the top
> > of the old mapping instead of taking the extra step of removing it
> > first.
> 
> Yep, those should go - I think Vivek likes to keep them for testing
> since they make things fail more completely if there's a screwup.

I like to keep them because otherwise they keep resources busy on the
host. If the DAX range is going to be reused immediately, then this
optimization makes more sense. I will keep this in mind.
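
The "map over the top" idea relies on mmap() with MAP_FIXED atomically
replacing whatever mapping previously occupied the target range, so no
separate munmap() (and therefore no REMOVEMAPPING round trip) is needed.
A minimal host-side sketch of those semantics, assuming a reserved DAX
window; this is not virtiofsd's actual code path, just the mmap behaviour
it would lean on:

  #include <sys/types.h>
  #include <sys/mman.h>

  /*
   * Map 'len' bytes of 'fd' at file offset 'foffset' into the DAX
   * window at 'window + moffset'.  MAP_FIXED atomically replaces any
   * mapping already occupying that range, so the old mapping does not
   * have to be removed first.
   */
  static int setup_mapping(void *window, int fd, off_t foffset,
                           size_t len, off_t moffset)
  {
      void *addr = mmap((char *)window + moffset, len,
                        PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_FIXED, fd, foffset);
      return addr == MAP_FAILED ? -1 : 0;
  }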

> 
> > Some more questions to consider for DAX performance optimization:
> > 
> > 1. Is FUSE_READ/FUSE_WRITE more efficient than DAX for some I/O patterns?
> 
> Probably for cases where the data is only accessed once and you can't
> preemptively map.
> Another variant on (1) is whether we could service reads/writes while the
> mmap is happening, to absorb the latency.

For small random I/O, DAX might not be very effective. The overhead of
setting up a mapping and tearing it down again is significant.
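
A quick host-side micro-benchmark gives a feel for that overhead: compare
an mmap()+copy+munmap() cycle per 4k read against a plain pread(). This
only measures the host mmap cost, not the guest/host round trips a real
SETUPMAPPING/REMOVEMAPPING pair adds, so it understates the true cost;
the file name and sizes are arbitrary:

  #include <stdio.h>
  #include <string.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/mman.h>
  #include <time.h>

  #define ITERS 100000
  #define BS    4096

  static double now(void)
  {
      struct timespec ts;
      clock_gettime(CLOCK_MONOTONIC, &ts);
      return ts.tv_sec + ts.tv_nsec / 1e9;
  }

  int main(void)
  {
      char buf[BS];
      int fd = open("testfile", O_RDONLY);  /* any file >= 4k */
      if (fd < 0) { perror("open"); return 1; }

      /* Model a DAX miss: set up a mapping, use it once, tear it down. */
      double t0 = now();
      for (int i = 0; i < ITERS; i++) {
          void *p = mmap(NULL, BS, PROT_READ, MAP_SHARED, fd, 0);
          if (p == MAP_FAILED) { perror("mmap"); return 1; }
          memcpy(buf, p, BS);
          munmap(p, BS);
      }
      double t1 = now();

      /* Model servicing the same I/O with an ordinary read instead. */
      for (int i = 0; i < ITERS; i++)
          pread(fd, buf, BS, 0);
      double t2 = now();

      printf("mmap cycle: %.0f ns/op, pread: %.0f ns/op\n",
             (t1 - t0) / ITERS * 1e9, (t2 - t1) / ITERS * 1e9);
      close(fd);
      return 0;
  }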

Vivek