[Virtio-fs] [PATCH 07/24] DAX: virtio-fs: Add vhost-user slave commands for mapping

Vivek Goyal vgoyal at redhat.com
Thu Mar 11 13:50:50 UTC 2021


On Thu, Mar 11, 2021 at 12:15:09PM +0000, Dr. David Alan Gilbert wrote:
> * Chirantan Ekbote (chirantan at chromium.org) wrote:
> > On Wed, Feb 10, 2021 at 4:04 AM Dr. David Alan Gilbert (git)
> > <dgilbert at redhat.com> wrote:
> > >
> > > From: "Dr. David Alan Gilbert" <dgilbert at redhat.com>
> > > +
> > > +typedef struct {
> > > +    /* Offsets within the file being mapped */
> > > +    uint64_t fd_offset[VHOST_USER_FS_SLAVE_ENTRIES];
> > > +    /* Offsets within the cache */
> > > +    uint64_t c_offset[VHOST_USER_FS_SLAVE_ENTRIES];
> > > +    /* Lengths of sections */
> > > +    uint64_t len[VHOST_USER_FS_SLAVE_ENTRIES];
> > > +    /* Flags, from VHOST_USER_FS_FLAG_* */
> > > +    uint64_t flags[VHOST_USER_FS_SLAVE_ENTRIES];
> > > +} VhostUserFSSlaveMsg;
> > > +
> > 
> > Is it too late to change this?  This struct allocates space for up to
> > 8 entries but most of the time the server will only try to set up one
> > mapping at a time so only 32 out of the 256 bytes in the message are
> > actually being used.  We're just wasting time memcpy'ing bytes that
> > will never be used.  Is there some reason this can't be dynamically
> > sized?  Something like:
> > 
> > typedef struct {
> >     /* Number of mapping requests */
> >     uint16_t num_requests;
> >     /* `num_requests` mapping requests */
> >     MappingRequest requests[];
> > } VhostUserFSSlaveMsg;
> > 
> > typedef struct {
> >     /* Offset within the file being mapped */
> >     uint64_t fd_offset;
> >     /* Offset within the cache */
> >     uint64_t c_offset;
> >     /* Length of section */
> >     uint64_t len;
> >     /* Flags, from VHOST_USER_FS_FLAG_* */
> >     uint64_t flags;
> > } MappingRequest;
> > 
> > The current pre-allocated structure both wastes space when there are
> > fewer than 8 requests and requires extra messages to be sent when
> > there are more than 8 requests.  I realize that in the grand scheme of
> > things copying 224 extra bytes is basically not noticeable but it just
> > irks me that we could fix this really easily before it gets propagated
> > to too many other places.
> 
> So this has come out as:
> 
> typedef struct {
>     /* Offset within the file being mapped */
>     uint64_t fd_offset;
>     /* Offset within the cache */
>     uint64_t c_offset;
>     /* Length of section */
>     uint64_t len;
>     /* Flags, from VHOST_USER_FS_FLAG_* */
>     uint64_t flags;
> } VhostUserFSSlaveMsgEntry;
>  
> typedef struct {
>     /* Generic flags for the overall message */
>     uint32_t flags;
>     /* Number of entries */
>     uint16_t count;
>     /* Spare */
>     uint16_t align;
>  
>     VhostUserFSSlaveMsgEntry entries[];
> } VhostUserFSSlaveMsg;
> 
> which seems to work OK.
> I've still got a:
> #define VHOST_USER_FS_SLAVE_MAX_ENTRIES 8

Hi Dave,

So if we were to raise this limit down the line, will it just be a matter
of changing this number and recompiling qemu + virtiofsd? Or is this just
a limit on the sender that qemu does not care about?

If qemu does care about the number of entries, then it would be good to
raise this limit to, say, 32 or 64.
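
To make the question concrete, below is a minimal sketch of the kind of
check I am imagining on the qemu reader side. The struct layouts are the
ones from your mail; fs_slave_msg_valid() and payload_size are names I
made up for illustration, not the actual qemu code.

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define VHOST_USER_FS_SLAVE_MAX_ENTRIES 8

typedef struct {
    uint64_t fd_offset;
    uint64_t c_offset;
    uint64_t len;
    uint64_t flags;
} VhostUserFSSlaveMsgEntry;

typedef struct {
    uint32_t flags;
    uint16_t count;
    uint16_t align;
    VhostUserFSSlaveMsgEntry entries[];
} VhostUserFSSlaveMsg;

/*
 * Reject a message whose count exceeds the compiled-in maximum, or
 * whose payload size does not match its count.
 */
static bool fs_slave_msg_valid(const VhostUserFSSlaveMsg *msg,
                               size_t payload_size)
{
    if (payload_size < sizeof(*msg) ||
        msg->count > VHOST_USER_FS_SLAVE_MAX_ENTRIES) {
        return false;
    }
    return payload_size == sizeof(*msg) +
           (size_t)msg->count * sizeof(VhostUserFSSlaveMsgEntry);
}

If the reader enforces a check like this, then raising the limit later
means bumping the define and recompiling qemu and virtiofsd in lockstep,
which is why I am asking.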

Otherwise the new definitions look good.
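
For what it is worth, on the sender side I picture virtiofsd building the
variable length message roughly as in the sketch below, header and entries
allocated in one block. build_map_msg() is a made-up helper name, and the
struct definitions are the ones from your mail.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    uint64_t fd_offset, c_offset, len, flags;
} VhostUserFSSlaveMsgEntry;

typedef struct {
    uint32_t flags;
    uint16_t count;
    uint16_t align;
    VhostUserFSSlaveMsgEntry entries[];
} VhostUserFSSlaveMsg;

/*
 * Allocate the header plus 'count' entries contiguously; the caller
 * sends sizeof(VhostUserFSSlaveMsg) + count * sizeof(entry) bytes on
 * the slave channel and then frees the buffer.
 */
static VhostUserFSSlaveMsg *
build_map_msg(const VhostUserFSSlaveMsgEntry *ents, uint16_t count)
{
    size_t sz = sizeof(VhostUserFSSlaveMsg) +
                (size_t)count * sizeof(VhostUserFSSlaveMsgEntry);
    VhostUserFSSlaveMsg *msg = calloc(1, sz);

    if (!msg) {
        return NULL;
    }
    msg->count = count;
    memcpy(msg->entries, ents, (size_t)count * sizeof(*ents));
    return msg;
}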

Thanks
Vivek

> 
> to limit the size VhostUserFSSlaveMsg can get to.
> The variable length array makes the union in the reader a bit more
> hairy, but it's OK.
> 
> Dave
> 
> > Chirantan
> > 
> > > --
> > > 2.29.2
> > >
> > 
> -- 
> Dr. David Alan Gilbert / dgilbert at redhat.com / Manchester, UK



