[Virtio-fs] [PATCH v4 0/2] virtiofsd: Improve io bandwidth by replacing pwrite with pwritev

Eric Ren renzhen at linux.alibaba.com
Sun Aug 11 02:06:59 UTC 2019


Hi Jun,

On Fri, Aug 09, 2019 at 08:50:12AM +0800, piaojun wrote:
> From my test, write bandwidth is improved greatly by replacing
> pwrite with pwritev; the test results are as below:

Could you share more information about this testing?

- args for qemu: cache size?
- args for virtiofsd: which cache mode?
- DAX is used, right?

- which kind of disk are you using, and what are its IOPS/BW limits?

I tried this patch with an HDD (IOPS: 5000, BW: 140 MB/s).

- VM: 4 vcpus, 8G mem
- cache=always, cache-size=8G, DAX
- fio job

```
[global]
fsync=0
name=virtiofs-test
filename=fio-test
directory=$mntdir   # shared dir for the test, backed by the disk above
rw=randwrite
bs=4K
direct=1
numjobs=1
time_based=0

[file1]
size=2G
io_size=40M
ioengine=libaio
iodepth=128
```
- without this patch

```
file1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.1
Starting 1 process

file1: (groupid=0, jobs=1): err= 0: pid=11: Sat Aug 10 11:33:29 2019
  write: IOPS=985, BW=3942KiB/s (4037kB/s)(40.0MiB/10390msec)
```

- with this patch applied

```
file1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.1
Starting 1 process

file1: (groupid=0, jobs=1): err= 0: pid=10: Sat Aug 10 15:31:57 2019
  write: IOPS=1056, BW=4224KiB/s (4326kB/s)(40.0MiB/9696msec)

```

That is only a ~7% improvement (985 -> 1056 IOPS), and the absolute numbers are even worse than 9pfs.

Thanks,
Eric

> 
> ---
> pwrite:
> # fio -direct=1 -time_based -iodepth=64 -rw=randwrite -ioengine=libaio -bs=1M -size=1G -numjob=16 -runtime=30 -group_reporting -name=file -filename=/mnt/virtiofs/file
> file: (g=0): rw=randwrite, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=64
> ...
> fio-2.13
> Starting 16 processes
> Jobs: 16 (f=16): [w(16)] [100.0% done] [0KB/886.0MB/0KB /s] [0/886/0 iops] [eta 00m:00s]
> file: (groupid=0, jobs=16): err= 0: pid=5799: Tue Aug 6 18:48:26 2019
> write: io=26881MB, bw=916988KB/s, iops=895, runt= 30018msec
> 
> pwritev:
> # fio -direct=1 -time_based -iodepth=64 -rw=randwrite -ioengine=libaio -bs=1M -size=1G -numjob=16 -runtime=30 -group_reporting -name=file -filename=/mnt/virtiofs/file
> file: (g=0): rw=randwrite, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=64
> ...
> fio-2.13
> Starting 16 processes
> Jobs: 16 (f=16): [w(16)] [100.0% done] [0KB/1793MB/0KB /s] [0/1793/0 iops] [eta 00m:00s]
> file: (groupid=0, jobs=16): err= 0: pid=6328: Tue Aug 6 18:22:17 2019
> write: io=52775MB, bw=1758.7MB/s, iops=1758, runt= 30009msec
> ---
> 
> This patch set introduces writev and pwritev for lo_write_buf(). I tried my
> best not to disturb the original code structure, so there will be
> some *useless* branches left in fuse_buf_copy_one() which are hard to judge
> whether they will be useful in the future, and I just leave them alone
> for safety. If the cleanup work is necessary, please let me know.
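
For readers following along, here is a minimal sketch of the idea (illustrative only, not the patch itself: struct plain_buf and write_bufs() are made-up stand-ins for the fuse_buf plumbing). Instead of issuing one pwrite() per source buffer, the already-mapped buffers are gathered into an iovec array and flushed with a single pwritev():

```
#include <errno.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/uio.h>   /* writev(2)/pwritev(2) */

/* Made-up stand-in for the already-mapped fuse_buf fields used here. */
struct plain_buf {
    void  *mem;
    size_t size;
};

/* Flush 'count' buffers to 'fd' at 'offset' with a single pwritev()
 * instead of one pwrite() per buffer, saving one syscall per buffer. */
static ssize_t write_bufs(int fd, off_t offset,
                          const struct plain_buf *bufs, size_t count)
{
    struct iovec *iov = calloc(count, sizeof(struct iovec));
    ssize_t res;
    size_t i;

    if (!iov)
        return -ENOMEM;

    for (i = 0; i < count; i++) {
        iov[i].iov_base = bufs[i].mem;
        iov[i].iov_len  = bufs[i].size;
    }

    res = pwritev(fd, iov, (int)count, offset);
    res = (res == -1) ? -errno : res;   /* capture errno before free() */
    free(iov);
    return res;
}
```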
> 
> v2
>   - Split into two patches
>   - Add the missing flags support (e.g. FUSE_BUF_PHYS_ADDR)
> 
> v3
>   - use git send-email to make the patch set in one thread
>   - move fuse_buf_writev() into fuse_buf_copy()
>   - use writev for the src buffers when they're already mapped by the daemon process
>   - use calloc to replace malloc
>   - set res to 0 if writev() returns 0 (see the sketch after this changelog)
> 
> v4
>   - iterate from in_buf->buf[0] rather than buf[1]
>   - optimize the code to make it more elegant
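
The v3 note about writev() returning 0 refers to the return-value convention such a copy helper would follow; roughly this shape (hypothetical names again, not the actual patch):

```
#include <errno.h>
#include <sys/uio.h>

/* Hypothetical helper showing only the convention: >0 means bytes
 * written, 0 is passed through as 0, failure becomes -errno. */
static ssize_t do_writev(int fd, const struct iovec *iov, int iovcnt)
{
    ssize_t res = writev(fd, iov, iovcnt);
    return (res == -1) ? -errno : res;
}
```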
> 
> Jun Piao (2):
>   add definition of fuse_buf_writev().
>   use fuse_buf_writev to replace fuse_buf_write for better performance
> 
> Signed-off-by: Jun Piao <piaojun at huawei.com>
> Suggested-by: Dr. David Alan Gilbert <dgilbert at redhat.com>
> Suggested-by: Stefan Hajnoczi <stefanha at redhat.com>
> ---
>  buffer.c      |   48 ++++++++++++++++++++++++++++++++++++++++++++++++
>  fuse_common.h |   14 ++++++++++++++
>  seccomp.c     |    2 ++
>  3 files changed, 64 insertions(+)
> --
> 
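
The two-line seccomp.c change in the diffstat presumably whitelists the new syscalls: virtiofsd runs under a seccomp sandbox, so any syscall the daemon starts issuing must be added to its allowlist or the process gets killed. A standalone libseccomp sketch of that kind of rule, not the virtiofsd code itself:

```
/* build: gcc demo.c -lseccomp */
#include <seccomp.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
    char msg[] = "vectored write allowed\n";
    struct iovec iov = { .iov_base = msg, .iov_len = sizeof(msg) - 1 };

    /* Default action: kill the thread on any syscall not allowed below. */
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);
    if (!ctx)
        return EXIT_FAILURE;

    /* Allow the vectored writes this demo issues, plus exit_group()
     * so it can terminate cleanly. */
    if (seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(writev), 0) < 0 ||
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(pwritev), 0) < 0 ||
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0) < 0 ||
        seccomp_load(ctx) < 0)
        return EXIT_FAILURE;

    /* writev() now passes the filter; an unlisted syscall would kill us. */
    writev(STDOUT_FILENO, &iov, 1);
    _exit(EXIT_SUCCESS);
}
```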