[Virtio-fs] xfstest results for virtio-fs on aarch64

Chirantan Ekbote chirantan at chromium.org
Wed Oct 16 19:36:33 UTC 2019


On Wed, Oct 16, 2019 at 11:10 PM Stefan Hajnoczi <stefanha at redhat.com> wrote:
>
> On Tue, Oct 15, 2019 at 11:58:46PM +0900, Chirantan Ekbote wrote:
> >
> > As for the performance numbers, I don't have my test device with me
> > but if I remember correctly the blogbench scores almost doubled when
> > going from one queue to two queues.
>
> If I understand the code correctly each virtqueue is processed by a
> worker and the number of virtqueues is the vcpu count.  This means that
> requests are processed in a blocking fashion and the disk is unlikely to
> reach its maximum queue depth (e.g. often 64 requests).
>
> Going from 1 virtqueue to 2 virtqueues will improve performance, but
> it's still a far cry from the maximum queue depth that the host kernel
> and hardware supports.  This is why virtiofsd processes multiple
> requests in parallel on a single virtqueue using a thread pool.

Hmm, maybe I'm missing something, but isn't this still limited by the
number of requests that can fit in the virtqueue?  Unlike virtio-net,
there is no separate queue for sending data back from host -> guest.
Each request also includes a buffer for the response, so once the
virtqueue is full, further requests block in the guest until an
in-flight request completes and its descriptors are reclaimed.  So
even with a thread pool, going from 1 virtqueue to 2 virtqueues would
double the number of requests that can be handled in parallel.
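The capacity argument above can be illustrated with a toy model (plain
Python, not virtiofsd code; `Virtqueue` and `peak_in_flight` are names
made up for this sketch): each queue is modeled as a semaphore whose
permits stand in for descriptor slots, which a request holds from
submission until its response is written back.  Even with an unbounded
worker pool, peak parallelism equals the total slot count, so doubling
the queue count doubles it:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class Virtqueue:
    """Toy model: a request occupies a descriptor slot from
    submission until completion, as in the split virtqueue layout."""

    def __init__(self, size):
        self._slots = threading.Semaphore(size)

    def submit(self, handler):
        self._slots.acquire()      # guest blocks here when the queue is full
        try:
            return handler()       # host-side worker processes the request
        finally:
            self._slots.release()  # descriptors returned to the guest

def peak_in_flight(num_queues, queue_size, requests=64):
    """Drive `requests` concurrent requests round-robin across the
    queues and return the peak number processed simultaneously."""
    queues = [Virtqueue(queue_size) for _ in range(num_queues)]
    capacity = num_queues * queue_size
    # All `capacity` slot holders rendezvous here, proving they were
    # genuinely in flight at the same moment (requests must be a
    # multiple of capacity or the barrier would deadlock).
    barrier = threading.Barrier(capacity)
    lock = threading.Lock()
    state = {"cur": 0, "peak": 0}

    def handler():
        with lock:
            state["cur"] += 1
            state["peak"] = max(state["peak"], state["cur"])
        barrier.wait()
        with lock:
            state["cur"] -= 1

    with ThreadPoolExecutor(max_workers=requests) as pool:
        for i in range(requests):
            pool.submit(queues[i % num_queues].submit, handler)
    return state["peak"]
```

With one queue of 8 slots the peak is 8; with two such queues it is
16, regardless of how many worker threads service them -- which is the
point about the virtqueue, not the thread pool, being the ceiling.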

>
> Stefan
