[Libguestfs] RPM package builds backed by nbdkit

Richard W.M. Jones rjones at redhat.com
Sat May 23 15:12:15 UTC 2020


On Thu, May 21, 2020 at 03:48:18PM +0100, Richard W.M. Jones wrote:
> Context:
> https://bugzilla.redhat.com/show_bug.cgi?id=1837809#c28

I collected a few more stats.  This time I'm using a full
‘fedpkg mockbuild’ of Mesa 3D from Rawhide.  I chose mesa largely at
random, but it has the nice properties that a build takes a reasonable
(but not excessive) amount of time and that it has to install a lot of
dependencies in the mock chroot.
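
Roughly, the workload is the following (a sketch rather than my exact
commands):

  # Check out the mesa dist-git repository anonymously and run a
  # full mock build of it.
  fedpkg clone -a mesa
  cd mesa
  time fedpkg mockbuild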

As I wanted this to look as much like Koji as possible, I enabled the
yum cache, disabled the root cache and disabled ccache.
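
In plain mock terms that corresponds to something like the following
(a sketch; in practice fedpkg mockbuild drives mock, and the same
settings can also be made in the mock configuration; the SRPM name is
a placeholder):

  # Keep the yum cache, but build with the root cache and ccache
  # plugins disabled, as Koji does.
  mock -r fedora-rawhide-x86_64 \
       --enable-plugin yum_cache \
       --disable-plugin root_cache \
       --disable-plugin ccache \
       --rebuild mesa-*.src.rpm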

Baseline build:           5m15.548s, 5m14.383s

- This is with /var/lib/mock mounted on a logical volume formatted
  with ext4.

tmpfs:                    4m10.350s, 4m2.618s

- This is supposed to be the fastest possible case: /var/lib/mock is
  mounted on a tmpfs.  It's the suggested configuration for Koji
  builders when performance is paramount.
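
The tmpfs case is just (sketch; the size here is arbitrary):

  # Put mock's whole working area on a tmpfs.
  sudo mount -t tmpfs -o size=32G tmpfs /var/lib/mock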

nbdkit file plugin:       5m21.668s, 4m59.460s, 5m2.020s

- nbd.ko, multi-conn = 4.

- Similar enough to the baseline build, showing that NBD adds little
  or no overhead.

- For unclear reasons multi-conn has no effect.  By adding the log
  filter I could see that the client only ever uses one connection.
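
The setup for the nbdkit cases was roughly the following (a sketch
rather than my exact commands; the socket path, image size and nbd
device are arbitrary, and nbd-client option spelling varies a little
between versions):

  # Serve a sparse disk image over a Unix socket with the file plugin.
  truncate -s 40G /var/tmp/mock.img
  nbdkit -U /tmp/nbd.sock file /var/tmp/mock.img

  # Attach it with the kernel NBD client, asking for 4 connections,
  # then create a filesystem and mount it where mock expects it.
  sudo modprobe nbd
  sudo nbd-client -unix /tmp/nbd.sock /dev/nbd0 -connections 4
  sudo mkfs.ext4 /dev/nbd0
  sudo mount /dev/nbd0 /var/lib/mock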

nbdkit memory plugin:     4m17.861s, 4m13.609s

- nbd.ko, multi-conn = 4.

- This is very similar to the tmpfs case, showing that NBD itself
  doesn't add very much overhead.  (As above, multi-conn has no
  effect.)
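
Same client side as above; only the server command changes (again a
sketch, with an arbitrary size):

  # Entirely RAM-backed export using the memory plugin.
  nbdkit -U /tmp/nbd.sock memory size=40G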

nbdkit file plugin + fuamode=discard:  4m13.213s, 4m15.510s

- nbd.ko, multi-conn = 4.

- This is very interesting because it shows that almost all of the
  performance benefits can be gained by disabling flush requests,
  while still using disks for backing.
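
This case adds the fua filter on top of the file plugin, roughly
(sketch):

  # File-backed as before, but the fua filter in fuamode=discard
  # drops flush and FUA requests instead of passing them to the disk
  # (fast, but obviously unsafe if the host crashes mid-build).
  nbdkit -U /tmp/nbd.sock --filter=fua file /var/tmp/mock.img \
         fuamode=discard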

remote nbdkit memory plugin:  6m40.881s, 6m43.723s

- nbd.ko, multi-conn = 4, over a TCP socket to a server located
  next to the build system through a gigabit ethernet switch.

- Only 25% slower than direct access to a local disk.
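
For the remote case the server listens on TCP and the client connects
over the network, roughly (sketch; the hostname and port are
placeholders):

  # On the remote server: a RAM-backed export on the standard NBD port.
  nbdkit -p 10809 memory size=40G

  # On the build machine: attach over TCP instead of the Unix socket.
  sudo nbd-client server.example.com 10809 /dev/nbd0 -connections 4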

----------------------------------------------------------------------

Some thoughts (sorry, these are not very conclusive; your thoughts are
also welcome ...)

(0) Performance of nbd.ko + nbdkit is excellent, even remote.

(1) NVMe disks are really fast.  I'm sure the differences between
in-memory and disk would be much larger if I were using a hard disk.

(2) Why are all requests happening over a single connection?  The
build is highly parallel.
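
One thing worth checking (a sketch; it assumes a libnbd new enough to
ship the nbdinfo tool) is whether the server is even advertising
multi-conn to clients:

  # Does the server advertise multi-conn?  Exits with status 0 if so.
  nbdinfo --can multi-conn 'nbd+unix:///?socket=/tmp/nbd.sock'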

(3) I was able to use the log filter to collect detailed log
information with almost zero overhead.  However, I'm not sure exactly
what I can do with it.  http://oirase.annexia.org/2020-mesa-build.log
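
For reference, collecting that log is just a matter of adding the log
filter to the nbdkit command line (a sketch, matching the file plugin
case above):

  # Same file-backed export, with the log filter recording every
  # request (operation, offset, count, timings) to a log file.
  nbdkit -U /tmp/nbd.sock --filter=log file /var/tmp/mock.img \
         logfile=/var/tmp/mock-build.log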

(4) What can we do to optimize the block device for this situation?
Obviously we can drop flushes.  Could we use some hierarchical storage
approach where we create a large RAM disk but back the lesser-used
bits with disk?  (The small difference in performance between RAM and
NVMe makes me think this would not be very beneficial.)

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines.  Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top



