[Libguestfs] [PATCH nbdkit 2/3] filters: stats: Measure time per operation

Nir Soffer nsoffer at redhat.com
Sat Nov 30 14:26:53 UTC 2019


On Sat, Nov 30, 2019 at 9:13 AM Richard W.M. Jones <rjones at redhat.com> wrote:
>
> On Sat, Nov 30, 2019 at 02:17:06AM +0200, Nir Soffer wrote:
> > Previously we measured the total time and used it to calculate the rate
> > of different operations. This is incorrect and hides the real
> > throughput.  A more useful way is to measure the time we spent in each
> > operation.
> >
> > Here is an example run with this change:
> >
> > $ ./nbdkit --foreground \
> >     --unix /tmp/nbd.sock \
> >     --exportname '' \
> >     --filter stats \
> >     file file=/var/tmp/dst.img \
> >     statsfile=/dev/stderr \
> >     --run 'qemu-img convert -p -n -f raw -O raw -T none /var/tmp/fedora-30.img nbd:unix:/tmp/nbd.sock'
> >     (100.00/100%)
> > elapsed time: 2.150 s
> > write: 1271 ops, 1.14 GiB, 0.398 s, 2922.22 MiB/s
> > zero: 1027 ops, 4.86 GiB, 0.012 s, 414723.03 MiB/s
> > extents: 1 ops, 2.00 GiB, 0.000 s, 120470559.51 MiB/s
> >
> > This shows that the actual time waiting for storage was only 0.4 seconds,
> > but elapsed time was 2.1 seconds. I think the missing time is in flush()
> > which we do not measure yet.
>
> I think opinions differ here.  Some people will want to know the
> throughput being achieved by the system, and some people will want to
> know how well the plugin/backend is performing single operations.  I
> know that you're interested in benchmarking imageio and that's
> completely valid, but I'm interested in measuring the total
> throughput.  IOW: Can't we show both?

Sure we can, but showing system throughput requires more thought.

We have 3 use cases. Here is a possible way to show both system
throughput and fine-grained per-operation stats.

I'm assuming that for system throughput we want to look at the total
bytes processed by the system in each direction:

read_rate = pread_bytes / total_time
write_rate = (pwrite_bytes + zero_bytes + trim_bytes) / total_time
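
Something like this untested sketch (the struct and helper names are
just illustrative, not the actual stats filter code) could keep the
per-operation timing and the directional counters needed for both
views:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>
#include <time.h>

struct op_stat {
  uint64_t ops;     /* number of calls */
  uint64_t bytes;   /* bytes transferred */
  uint64_t usecs;   /* time spent inside the operation */
};

static struct op_stat pread_st, pwrite_st, zero_st, trim_st;
static uint64_t total_usecs;     /* wall-clock time for the whole run */

/* Wrap one operation: caller takes t0 before the call, then we record
 * the elapsed time and the bytes handled. */
static void
record (struct op_stat *st, uint64_t bytes, const struct timespec *t0)
{
  struct timespec t1;
  clock_gettime (CLOCK_MONOTONIC, &t1);
  st->ops++;
  st->bytes += bytes;
  st->usecs += (t1.tv_sec - t0->tv_sec) * 1000000
             + (t1.tv_nsec - t0->tv_nsec) / 1000;
}

static double
mib_per_sec (uint64_t bytes, uint64_t usecs)
{
  if (usecs == 0) return 0;
  return (bytes / 1048576.0) / (usecs / 1000000.0);
}

static void
print_stats (FILE *fp)
{
  uint64_t written = pwrite_st.bytes + zero_st.bytes + trim_st.bytes;

  /* System throughput: bytes in each direction over elapsed time. */
  fprintf (fp, "read rate: %.2f MiB/s, write rate: %.2f MiB/s\n",
           mib_per_sec (pread_st.bytes, total_usecs),
           mib_per_sec (written, total_usecs));

  /* Per-operation rate: bytes over time spent in that operation. */
  fprintf (fp, "write: %" PRIu64 " ops, %.3f s, %.2f MiB/s\n",
           pwrite_st.ops, pwrite_st.usecs / 1e6,
           mib_per_sec (pwrite_st.bytes, pwrite_st.usecs));
}

With something like that, the "total" line in the examples below would
be computed from the elapsed time, while the per-operation lines use
each operation's own accumulated time.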

1. copy image to nbdkit

total: 2300 ops, 6 GiB, 2.150 s, 2925.71 wMiB/s
write: 1271 ops, 1.14 GiB, 0.398 s, 2922.22 MiB/s
zero: 1027 ops, 4.86 GiB, 0.012 s, 414723.03 MiB/s
extents: 1 ops, 2.00 GiB, 0.000 s, 120470559.51 MiB/s
flush: 1 ops, 1.200 s

2. read image from nbdkit:

total: 1272 ops, 6 GiB, 2.150 s, 2925.71 rMiB/s
read: 1271 ops, 1.14 GiB, 0.398 s, 2922.22 MiB/s
extents: 1 ops, 2.00 GiB, 0.000 s, 120470559.51 MiB/s

3. read/write, e.g. running a VM from an image served by nbdkit

total: 5998 ops, 8.50 GiB, 125.000 s, 20.48 rMiB/s, 49.15 wMiB/s
write: 1271 ops, 1.14 GiB, 0.398 s, 2922.22 MiB/s
read: 3000 ops, 2.50 GiB, 1.100 s, 2327.27 MiB/s
zero: 1027 ops, 4.86 GiB, 0.012 s, 414723.03 MiB/s
extents: 500 ops, 2.00 GiB, 0.000 s, 120470559.51 MiB/s
flush: 200 ops, 1.200 s

What do you think?

> The rest of the code is screaming out to be refactored so we aren't
> repeating the same change over and over.  How about using something
> like the attached patch as an initial refactoring?

Very nice. Do you want to push it? I can include it in v2.

Nir
