[Libguestfs] [PATCH nbdkit v2 02/11] blocksize: Implement filtering of .can_multi_conn (forcing it to false).

Richard W.M. Jones rjones at redhat.com
Sun Jan 6 23:11:25 UTC 2019


On Sat, Jan 05, 2019 at 04:48:22PM -0600, Eric Blake wrote:
> On 1/5/19 8:50 AM, Richard W.M. Jones wrote:
> > I examined each filter to see which ones implement a cache and do not
> > properly consider consistency across clients for flush requests.  For
> > these filters we should force .can_multi_conn to return false.
> > 
> > I believe only one filter (blocksize) needs to be updated and all the
> > other ones are safe.
> > ---
> >  filters/blocksize/blocksize.c | 12 ++++++++++++
> >  1 file changed, 12 insertions(+)
> > 
> > diff --git a/filters/blocksize/blocksize.c b/filters/blocksize/blocksize.c
> > index 34f58c9..9e0e713 100644
> > --- a/filters/blocksize/blocksize.c
> > +++ b/filters/blocksize/blocksize.c
> > @@ -164,6 +164,17 @@ blocksize_get_size (struct nbdkit_next_ops *next_ops, void *nxdata,
> >    return size == -1 ? size : size & ~(minblock - 1);
> >  }
> >  
> > +static int
> > +blocksize_can_multi_conn (struct nbdkit_next_ops *next_ops, void *nxdata,
> > +                          void *handle)
> > +{
> > +  /* We serialize all requests anyway, so this likely makes no
> > +   * difference, but return false because the bounce buffer is not
> > +   * consistent across clients for flush.
> > +   */
> > +  return 0;
> > +}
> 
> I think we could get away with returning 1, _because_ we are
> serializing all requests.  Even though we make multiple calls into
> the plugin (the worst case being read-modify-write-flush on a client
> write-with-FUA), no other operation can race with a client flush.
> But I also agree that if we later made the bounce buffer thread-safe
> and the filter more parallel, then unless we also added a .flush
> callback with locking so that every call into the plugin is guarded
> by the same lock, returning 0 would be appropriate even if the
> plugin itself returns 1.  So I agree with adding this callback, and
> could go either way on returning 0 or 1 for now.

I agree it could go either way, but I think returning 0 at least
forces us to think through the consequences if we change the thread
model or bounce buffer in future.
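
For contrast, a filter which keeps no state of its own could simply
forward the question to the plugin, using the next_ops->can_multi_conn
field added earlier in this series.  This is just a sketch, not code
from any existing filter:

static int
passthrough_can_multi_conn (struct nbdkit_next_ops *next_ops,
                            void *nxdata, void *handle)
{
  /* This filter keeps no state, so the plugin's answer is safe to
   * propagate to the client unchanged.
   */
  return next_ops->can_multi_conn (nxdata);
}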

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch
http://libguestfs.org/virt-builder.1.html
