[Virtio-fs] xfstest results for virtio-fs on aarch64

Stefan Hajnoczi stefanha at redhat.com
Wed Oct 16 14:09:59 UTC 2019


On Tue, Oct 15, 2019 at 11:58:46PM +0900, Chirantan Ekbote wrote:
> On Mon, Oct 14, 2019 at 6:11 PM Stefan Hajnoczi <stefanha at redhat.com> wrote:
> >
> > On Fri, Oct 11, 2019 at 04:36:47PM -0400, Vivek Goyal wrote:
> > > On Sat, Oct 12, 2019 at 05:13:51AM +0900, Chirantan Ekbote wrote:
> > > > On Sat, Oct 12, 2019 at 4:59 AM Vivek Goyal <vgoyal at redhat.com> wrote:
> > > > @@ -922,7 +990,8 @@ static int virtio_fs_enqueue_req(struct virtqueue *vq, struct fuse_req *req)
> > > >  static void virtio_fs_wake_pending_and_unlock(struct fuse_iqueue *fiq)
> > > >  __releases(fiq->waitq.lock)
> > > >  {
> > > > -       unsigned queue_id = VQ_REQUEST; /* TODO multiqueue */
> > > > +       /* unsigned queue_id = VQ_REQUEST; /\* TODO multiqueue *\/ */
> > > > +       unsigned queue_id;
> > > >         struct virtio_fs *fs;
> > > >         struct fuse_conn *fc;
> > > >         struct fuse_req *req;
> > > > @@ -937,6 +1006,7 @@ __releases(fiq->waitq.lock)
> > > >         spin_unlock(&fiq->waitq.lock);
> > > >
> > > >         fs = fiq->priv;
> > > > +       queue_id = (req->in.h.unique % (fs->nvqs - 1)) + 1;
> > > >         fc = fs->vqs[queue_id].fud->fc;
> > > >
> > > >         dev_dbg(&fs->vqs[queue_id].vq->vdev->dev,
> > > >
> > > >
> > > > This is simply round-robin scheduling but even going from one to two
> > > > queues gives a significant performance improvement (especially because
> > > > crosvm doesn't support shared memory regions yet).
> > >
> > > Interesting. I thought virtiofsd is hard coded right now to support
> > > only one queue. Did you modify virtiofsd to support more than one
> > > request queue?
> >
> > Right, virtiofsd currently refuses to bring up more than 1 request
> > queue.  The code can actually handle multiqueue now but there is no
> > command-line support for it yet.  The ability to set CPU affinity for
> > virtqueue threads could be introduced at the same time as enabling
> > multiqueue.
> >
> 
> I'm not using virtiofsd.  We have our own server for crosvm, which
> supports multiple queues.
> 
> As for the performance numbers, I don't have my test device with me
> but if I remember correctly the blogbench scores almost doubled when
> going from one queue to two queues.

If I understand the code correctly, each virtqueue is processed by a
single worker thread and the number of virtqueues equals the vCPU
count.  This means each queue handles requests one at a time, in a
blocking fashion, so the disk is unlikely to reach its maximum queue
depth (often 64 requests).
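
To make the pattern concrete, here is a toy sketch in userspace
pthreads (this is not the crosvm code; all the names are invented):

/* One worker per virtqueue, handling requests synchronously: only one
 * request per queue is ever outstanding. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct fake_req { int id; };

/* Stand-in for popping a descriptor chain off the virtqueue. */
static int vq_pop(struct fake_req *req, int i)
{
    req->id = i;
    return i < 8;               /* pretend 8 requests arrive */
}

/* Stand-in for a FUSE request that does blocking disk I/O. */
static void handle_req(struct fake_req *req)
{
    usleep(1000);               /* the "I/O" */
    printf("completed req %d\n", req->id);
}

static void *vq_worker(void *opaque)
{
    struct fake_req req;
    int i = 0;

    (void)opaque;
    while (vq_pop(&req, i++))
        handle_req(&req);       /* nothing else runs until this returns */
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, vq_worker, NULL);  /* one thread per vq */
    pthread_join(t, NULL);
    return 0;
}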

Going from 1 virtqueue to 2 virtqueues will improve performance, but
it's still a far cry from the maximum queue depth that the host kernel
and hardware support.  This is why virtiofsd processes multiple
requests in parallel on a single virtqueue using a thread pool.
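
Very roughly, the shape of that is the following (again just a toy
sketch of the pattern, not virtiofsd's actual code; the names and pool
size are invented):

/* One thread pops requests off the virtqueue and hands them to a small
 * thread pool, so many requests per queue are in flight at once. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define POOL_SIZE 8
#define NUM_REQS  64

struct fake_req { int id; };

/* Tiny work queue protected by a mutex + condvar. */
static struct fake_req pending[NUM_REQS];
static int head, tail, done;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

/* Stand-in for a FUSE request doing blocking disk I/O. */
static void handle_req(struct fake_req *req)
{
    usleep(1000);
    printf("completed req %d\n", req->id);
}

static void *pool_worker(void *opaque)
{
    (void)opaque;
    for (;;) {
        struct fake_req req;

        pthread_mutex_lock(&lock);
        while (head == tail && !done)
            pthread_cond_wait(&cond, &lock);
        if (head == tail && done) {
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        req = pending[head++];
        pthread_mutex_unlock(&lock);

        handle_req(&req);       /* other pool threads keep the disk busy */
    }
}

/* The virtqueue thread only pops and dispatches; it never blocks on I/O. */
static void *vq_thread(void *opaque)
{
    int i;

    (void)opaque;
    for (i = 0; i < NUM_REQS; i++) {
        pthread_mutex_lock(&lock);
        pending[tail++].id = i;     /* "pop" from the virtqueue */
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }
    pthread_mutex_lock(&lock);
    done = 1;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t vq, pool[POOL_SIZE];
    int i;

    for (i = 0; i < POOL_SIZE; i++)
        pthread_create(&pool[i], NULL, pool_worker, NULL);
    pthread_create(&vq, NULL, vq_thread, NULL);

    pthread_join(vq, NULL);
    for (i = 0; i < POOL_SIZE; i++)
        pthread_join(pool[i], NULL);
    return 0;
}

With a pool the effective queue depth per virtqueue becomes roughly
POOL_SIZE instead of 1, which is what lets the host block layer
approach its maximum queue depth.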

Stefan