[Virtio-fs] [PATCH 2/2] virtiofs: add dmap flags to differentiate write mapping from read mapping

Tao Peng bergwolf at hyper.sh
Thu May 9 03:44:56 UTC 2019


On Thu, May 9, 2019 at 10:32 AM Liu Bo <bo.liu at linux.alibaba.com> wrote:
>
> On Thu, May 09, 2019 at 10:01:03AM +0800, Tao Peng wrote:
> > On Thu, May 9, 2019 at 2:48 AM Liu Bo <bo.liu at linux.alibaba.com> wrote:
> > >
> > > On Wed, May 08, 2019 at 11:27:04AM -0400, Vivek Goyal wrote:
> > > > On Tue, May 07, 2019 at 01:58:11PM +0800, Liu Bo wrote:
> > > > > There are 2 problems in the dax rw path, which were found by [1][2]:
> > > > >
> > > > > a) setupmapping always sends a RW mapping message to the virtiofs daemon
> > > > > side no matter whether it's doing reads or writes; the end result is that
> > > > > guest reads on a mapping will cause a write page fault on the host, which
> > > > > is unnecessary.
> > > > >
> > > > > b) A successful setupmapping doesn't guarantee that the following writes
> > > > > can land on the host, e.g. a page fault on the host may fail because of
> > > > > ENOSPC.
> > > > >
> > > > > This is trying to solve the problems by
> > > > > a) marking a dmap as RO or RW to indicate it's being used for reads or
> > > > >    writes,
> > > > > b) setting up a RO dmap for reads
> > > > > c) setting up a RW dmap for writes
> > > > > d) using an existing dmap for reads if it's available in the interval tree
> > > > > e) converting an existing dmap from RO to RW for writes if it's available
> > > > >    in the interval tree
> > > > >
> > > > > The downside of the above approach is write amplification.
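
For illustration, the RO/RW dmap idea described above might be sketched
roughly as below in C (all names here are hypothetical and not taken from
the actual patch; the real code lives in the guest's fuse dax path):

#include <stdbool.h>
#include <errno.h>

enum dmap_mode { DMAP_RO, DMAP_RW };

struct dmap {
	unsigned long start;	/* offset into the DAX window */
	unsigned long len;
	enum dmap_mode mode;	/* how this range was set up on the host */
};

/* placeholder for sending the setupmapping request to the daemon */
static int send_setupmapping(unsigned long start, unsigned long len,
			     bool writable)
{
	(void)start; (void)len; (void)writable;
	return 0;
}

/* reads can reuse any existing mapping; writes need a RW one, so a RO
 * mapping found in the interval tree is upgraded by redoing
 * setupmapping with the write flag set */
static int dmap_get_for_io(struct dmap *d, bool write)
{
	if (!write || d->mode == DMAP_RW)
		return 0;
	if (send_setupmapping(d->start, d->len, true))
		return -EIO;
	d->mode = DMAP_RW;
	return 0;
}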
> > > >
> > > > Another downside of using fallocate() is performance overhead. I wrote
> > > > a program to do small 16K file extending writes and measure the total
> > > > time with and without fallocate. The fallocate version seems to be much
> > > > slower.
> > > >
> > > > With fallocate()
> > > > ===============
> > > > # ./file-extending-writes /mnt/virtio-fs/foo.txt
> > > > 16384 extending writes took 14 secs and 647210 us
> > > > 16384 extending writes took 11 secs and 571941 us
> > > > 16384 extending writes took 11 secs and 981329 us
> > > >
> > > > With dax + direct write (FUSE_WRITE)
> > > > ==============================
> > > > # ./file-extending-writes /mnt/virtio-fs/foo.txt
> > > > 16384 extending writes took 2 secs and 477642 us
> > > > 16384 extending writes took 2 secs and 352413 us
> > > > 16384 extending writes took 2 secs and 347223 us
> > > >
> > > > I ran your test script as well and it does not show much difference.
> > > > I think part of the problem is that every write launches a new xfs_io
> > > > process, so the overhead of launching the process and opening the fd
> > > > again hides the fallocate overhead.
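
For reference, the workload being described might look roughly like the
sketch below. This is a guess at an equivalent, not Vivek's actual
file-extending-writes program: 16384 appending 16K writes to a fresh file,
timed as a whole, with the with/without-fallocate numbers above presumably
coming from running the same workload against the two dax write paths.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define NR_WRITES	16384
#define WRITE_SIZE	(16 * 1024)

int main(int argc, char **argv)
{
	static char buf[WRITE_SIZE];
	struct timeval start, end;
	long us;
	int fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	memset(buf, 'a', sizeof(buf));

	/* extend the file one 16K chunk at a time and time the whole run */
	gettimeofday(&start, NULL);
	for (int i = 0; i < NR_WRITES; i++) {
		if (pwrite(fd, buf, WRITE_SIZE,
			   (off_t)i * WRITE_SIZE) != WRITE_SIZE) {
			perror("pwrite");
			return 1;
		}
	}
	gettimeofday(&end, NULL);

	us = (end.tv_sec - start.tv_sec) * 1000000L +
	     (end.tv_usec - start.tv_usec);
	printf("%d extending writes took %ld secs and %ld us\n",
	       NR_WRITES, us / 1000000, us % 1000000);
	close(fd);
	return 0;
}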
> > >
> > > It's so weird; with the below C test, I couldn't reproduce the huge
> > > difference between w/ and w/o fallocate.
> > >
> > ext4 vs. xfs difference?
>
> Indeed there is a gap, but on my box it's not as large as Vivek's.
>
fallocate requires zeroing the newly allocated blocks, so the underlying
block device speed can also make a difference.
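
A rough way to check that, assuming the zeroing cost is paid at fallocate
time on the host filesystem (this is just an illustrative sketch, not part
of any of the test programs in this thread), is to time a single large
fallocate() of the same total size on a fresh file on each backing device:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	struct timeval s, e;
	long us;
	int fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* 16384 * 16K = 256MB, the same total size as the write test */
	gettimeofday(&s, NULL);
	if (fallocate(fd, 0, 0, 16384L * 16 * 1024)) {
		perror("fallocate");
		return 1;
	}
	gettimeofday(&e, NULL);

	us = (e.tv_sec - s.tv_sec) * 1000000L + (e.tv_usec - s.tv_usec);
	printf("fallocate of 256MB took %ld us\n", us);
	close(fd);
	return 0;
}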

-- 
bergwolf at hyper.sh



