[Virtio-fs] xfstest results for virtio-fs on aarch64

Vivek Goyal vgoyal at redhat.com
Fri Oct 11 20:36:47 UTC 2019


On Sat, Oct 12, 2019 at 05:13:51AM +0900, Chirantan Ekbote wrote:
> On Sat, Oct 12, 2019 at 4:59 AM Vivek Goyal <vgoyal at redhat.com> wrote:
> >
> > On Sat, Oct 12, 2019 at 03:45:01AM +0900, Chirantan Ekbote wrote:
> > > This is unrelated, but there's also a nullptr dereference in the driver
> > > if the device advertises more than one request queue.  This is the
> > > problematic code:
> > >
> > > /* Allocate fuse_dev for hiprio and notification queues */
> > > for (i = 0; i < VQ_REQUEST; i++) {
> > >     struct virtio_fs_vq *fsvq = &fs->vqs[i];
> > >
> > >     fsvq->fud = fuse_dev_alloc();
> > >     if (!fsvq->fud)
> > >         goto err_free_fuse_devs;
> > > }
> > >
> > > ctx.fudptr = (void **)&fs->vqs[VQ_REQUEST].fud;
> > > err = fuse_fill_super_common(sb, &ctx);
> > > if (err < 0)
> > >     goto err_free_fuse_devs;
> > >
> > > fc = fs->vqs[VQ_REQUEST].fud->fc;
> > >
> > > for (i = 0; i < fs->nvqs; i++) {
> > >     struct virtio_fs_vq *fsvq = &fs->vqs[i];
> > >
> > >     if (i == VQ_REQUEST)
> > >         continue; /* already initialized */
> > >     fuse_dev_install(fsvq->fud, fc);
> > > }
> > >
> > > The first loop only initializes fsvq->fud for queues up to VQ_REQUEST,
> > > but the second loop calls fuse_dev_install with fsvq->fud for all
> > > queues up to fs->nvqs.  When fs->nvqs is greater than VQ_REQUEST, this
> > > will end up dereferencing an uninitialized pointer.  I think the fix
> > > is to skip calling fuse_dev_alloc for the VQ_REQUEST queue but to call
> > > it for all the other queues up to fs->nvqs.
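
A minimal sketch of the fix described above (reusing the existing
err_free_fuse_devs error path, and assuming fuse_fill_super_common()
still allocates the VQ_REQUEST fuse_dev through ctx.fudptr) could look
like this:

    /* Allocate fuse_dev for every queue except VQ_REQUEST; the
     * VQ_REQUEST fuse_dev is allocated by fuse_fill_super_common()
     * through ctx.fudptr.
     */
    for (i = 0; i < fs->nvqs; i++) {
            struct virtio_fs_vq *fsvq = &fs->vqs[i];

            if (i == VQ_REQUEST)
                    continue;

            fsvq->fud = fuse_dev_alloc();
            if (!fsvq->fud)
                    goto err_free_fuse_devs;
    }

With that change, the second loop's fuse_dev_install call sees an
allocated fsvq->fud for every queue index other than VQ_REQUEST,
whatever value fs->nvqs has.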
> >
> > Right now we support only one request queue; multiqueue support is not
> > there yet. We are planning to add multiqueue support in future releases,
> > though.
> >
> 
> Could you elaborate on what is missing for multiqueue support?  For
> reference, this patch was enough to get it working for me:

Maybe not much is missing. It's just that we were focused on first getting
a basic version of virtiofs upstream, and then starting to look at
performance improvement features.

> 
> @@ -922,7 +990,8 @@ static int virtio_fs_enqueue_req(struct virtqueue *vq, struct fuse_req *req)
>  static void virtio_fs_wake_pending_and_unlock(struct fuse_iqueue *fiq)
>  __releases(fiq->waitq.lock)
>  {
> -       unsigned queue_id = VQ_REQUEST; /* TODO multiqueue */
> +       /* unsigned queue_id = VQ_REQUEST; /\* TODO multiqueue *\/ */
> +       unsigned queue_id;
>         struct virtio_fs *fs;
>         struct fuse_conn *fc;
>         struct fuse_req *req;
> @@ -937,6 +1006,7 @@ __releases(fiq->waitq.lock)
>         spin_unlock(&fiq->waitq.lock);
> 
>         fs = fiq->priv;
> +       queue_id = (req->in.h.unique % (fs->nvqs - 1)) + 1;
>         fc = fs->vqs[queue_id].fud->fc;
> 
>         dev_dbg(&fs->vqs[queue_id].vq->vdev->dev,
> 
> 
> This is simply round-robin scheduling, but even going from one to two
> queues gives a significant performance improvement (especially because
> crosvm doesn't support shared memory regions yet).
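
To make the mapping concrete: the patch skips index 0 (the hiprio queue)
and spreads requests over the remaining fs->nvqs - 1 queues by request
ID. A stand-alone toy model of that selection (the pick_queue() helper
is hypothetical, not part of the patch):

    #include <stdio.h>

    /* Model of the queue selection above: skip index 0 (hiprio) and
     * spread request IDs round-robin over the nvqs - 1 request queues
     * at indices 1..nvqs - 1.
     */
    static unsigned pick_queue(unsigned long long unique, unsigned nvqs)
    {
            return (unsigned)(unique % (nvqs - 1)) + 1;
    }

    int main(void)
    {
            unsigned long long unique;
            unsigned nvqs = 3; /* 1 hiprio queue + 2 request queues */

            for (unique = 0; unique < 6; unique++)
                    printf("unique=%llu -> queue %u\n",
                           unique, pick_queue(unique, nvqs));
            /* Prints queues 1, 2, 1, 2, 1, 2: requests alternate
             * between the two request queues and never land on the
             * hiprio queue (index 0).
             */
            return 0;
    }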

Interesting. I thought virtiofsd is currently hard-coded to support only
one queue. Did you modify virtiofsd to support more than one request
queue?

Can you give some concrete numbers? How much performance improvement do
you see with multiqueue, and what workload are you using for testing?

Thanks
Vivek



