[Virtio-fs] [RFC PATCH 0/7] Inotify support in FUSE and virtiofs

Amir Goldstein amir73il at gmail.com
Mon Dec 20 18:22:11 UTC 2021


On Mon, Dec 20, 2021 at 6:42 PM Vivek Goyal <vgoyal at redhat.com> wrote:
>
> On Sat, Dec 18, 2021 at 10:28:35AM +0200, Amir Goldstein wrote:
> > > > >
> > > > > > 2. For FS_RENAME, will we be able to pass 4 buffers in iov?
> > > > > >     src_fuse_notify_fsnotify_out, src_name,
> > > > > >     dst_fuse_notify_fsnotify_out, dst_name
> > > > >
> > > > > So it is sort of two fsnotify events travelling in the same event. We
> > > > > will have to have some information in src_fuse_notify_fsnotify_out
> > > > > which signals that another fsnotify_out is following. Maybe that's
> > > > > where the fsnotify_flags field can be used: set a bit to signal that
> > > > > another fsnotify_out is part of the same event, which also means the
> > > > > first one is src and the second one is dst.
> > > > >
> > > > > The challenge I see is "src_name" and "dst_name", especially in the
> > > > > context of virtio queues.
> > > > >
> > > > > So we have a notification queue; for each notification, the driver
> > > > > allocates a fixed amount of memory per element and queues these
> > > > > elements in the virtqueue. The server side pops an element, places
> > > > > the notification info in it and sends it back.
> > > > >
> > > > > So basically the size of a notification buffer needs to be known in
> > > > > advance, because these buffers are allocated by the driver (and not
> > > > > the device/server). That's the reason the virtio spec has added a new
> > > > > field "notify_buf_size" to the device configuration space. Using this
> > > > > field, the device lets the driver know how much memory to allocate
> > > > > for each notification element.
> > > > >
> > > > > IOW, we can put variable-sized elements in a notification, but the
> > > > > maximum size of those elements needs to be fixed in advance and told
> > > > > to the driver at device initialization time.
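> > > > >
> > > > > In driver terms that would look roughly like this (a simplified,
> > > > > compilable sketch; names are illustrative, and the real driver
> > > > > would use sg lists and virtqueue_add_inbuf() rather than a plain
> > > > > array):
> > > > >
> > > > >   #include <stdlib.h>
> > > > >
> > > > >   /* Pre-allocate one device-writable buffer per notification
> > > > >    * slot. notify_buf_size comes from the device configuration
> > > > >    * space and bounds the largest event the device may send. */
> > > > >   static int fill_notify_queue(size_t notify_buf_size,
> > > > >                                unsigned int nslots, void *bufs[])
> > > > >   {
> > > > >       for (unsigned int i = 0; i < nslots; i++) {
> > > > >           bufs[i] = calloc(1, notify_buf_size);
> > > > >           if (!bufs[i])
> > > > >               return -1;
> > > > >           /* real driver: sg_init_one() + virtqueue_add_inbuf() */
> > > > >       }
> > > > >       return 0;
> > > > >   }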
> > > > >
> > > > > So what can be the max length of "src_name" and "dst_name"? Is it
> > > > > fair to use NAME_MAX, i.e. 255 bytes (not including the null byte),
> > > > > for each name? That means notify_buf_size will be:
> > > > >
> > > > > notify_buf_size = 2 * 255 + 2 * sizeof(struct fuse_notify_fsnotify_out);
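> > > > >
> > > > > To make the math concrete (the struct fields below are
> > > > > placeholders, not the actual layout from the patches):
> > > > >
> > > > >   #include <stdint.h>
> > > > >   #include <stdio.h>
> > > > >
> > > > >   #define NAME_MAX 255  /* longest name, not counting the null */
> > > > >
> > > > >   /* placeholder fields, for sizing purposes only */
> > > > >   struct fuse_notify_fsnotify_out {
> > > > >       uint64_t inode;
> > > > >       uint64_t mask;
> > > > >       uint32_t namelen;
> > > > >       uint32_t cookie;
> > > > >   };
> > > > >
> > > > >   int main(void)
> > > > >   {
> > > > >       /* FS_RENAME layout: src_out | src_name | dst_out | dst_name */
> > > > >       size_t notify_buf_size =
> > > > >           2 * NAME_MAX + 2 * sizeof(struct fuse_notify_fsnotify_out);
> > > > >       printf("notify_buf_size = %zu\n", notify_buf_size);
> > > > >       return 0;
> > > > >   }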
> > > > >
> > > >
> > > > Can you push two subsequent elements to the events queue atomically?
> > > > If you can, then it is not a problem to queue the FS_MOVED_FROM event
> > > > (with one name) followed by the FS_MOVED_TO event with a
> > > > self-generated cookie in response to a FAN_RENAME event on the
> > > > virtiofsd server, and to rejoin them in the virtiofs client.
> > >
> > > Hmm..., so basically break down the FAN_RENAME event into two events
> > > joined by a cookie and send them separately. I guess that sounds
> > > reasonable because it helps reduce the max size of an event.
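> > >
> > > On the server side that could be as simple as (illustrative types
> > > and names, not actual virtiofsd code):
> > >
> > >   #include <stdint.h>
> > >
> > >   #define FS_MOVED_FROM 0x40
> > >   #define FS_MOVED_TO   0x80
> > >
> > >   /* illustrative event representation, not the wire format */
> > >   struct fsnotify_event_out {
> > >       uint64_t mask;
> > >       uint32_t cookie;
> > >       const char *name;
> > >   };
> > >
> > >   /* split one FAN_RENAME into a cookie-joined FROM/TO pair */
> > >   static void emit_rename(const char *src_name, const char *dst_name,
> > >               void (*queue_ev)(const struct fsnotify_event_out *))
> > >   {
> > >       static uint32_t next_cookie;
> > >       uint32_t cookie = ++next_cookie; /* pairs the two halves */
> > >
> > >       struct fsnotify_event_out from = { FS_MOVED_FROM, cookie,
> > >                                          src_name };
> > >       struct fsnotify_event_out to   = { FS_MOVED_TO, cookie,
> > >                                          dst_name };
> > >       queue_ev(&from);
> > >       queue_ev(&to);
> > >   }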
> > >
> > > What do you mean by "atomically"? Do you mean strict ordering, with
> > > the two events right after each other? But if they are joined by a
> > > cookie, we don't have to enforce that. The driver should be able to
> > > maintain an internal list, queue the event and wait for the pair
> > > event to arrive. This also
> >
> > This is what I would consider repeating the mistakes of past APIs (i.e.
> > inotify). Why should the driver work hard to join events that were
> > already joined before the queue? Is there really a technical challenge
> > in queueing the two events together?
>
> We can try queuing these together but it might not be that easy. If two
> elements are not available at the time of queuing, then you will have to
> let go of the lock, put the element back in the queue and retry later.
>
> To me, it is much simpler and more flexible to not guarantee strict
> ordering and let the events be joined by a cookie. BTW, we are using a
> cookie anyway, so strict ordering should not be required.
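>
> The pairing on the driver side could be a simple list keyed by cookie,
> roughly (userspace sketch with made-up types; real code would need
> kernel lists, locking and a timeout for a missing pair):
>
>   #include <stdint.h>
>   #include <stddef.h>
>
>   struct half_event {
>       uint32_t cookie;
>       uint64_t mask;
>       struct half_event *next;
>   };
>
>   static struct half_event *pending;
>
>   /* returns the stashed first half on a cookie match, or stashes
>    * this event and returns NULL until its pair arrives */
>   static struct half_event *pair_by_cookie(struct half_event *ev)
>   {
>       struct half_event **p;
>
>       for (p = &pending; *p; p = &(*p)->next) {
>           if ((*p)->cookie == ev->cookie) {
>               struct half_event *match = *p;
>               *p = match->next; /* unlink; caller has both halves */
>               return match;
>           }
>       }
>       ev->next = pending; /* first half: wait for its pair */
>       pending = ev;
>       return NULL;
>   }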
>
> All I am saying is that an implementation can still choose to send the
> two events together, one right after the other, but this probably should
> not be a requirement of the protocol.
>
> So what's the concern with joining the events with a cookie API? I am
> not aware of what went wrong in the past.
>
> One thing that is simpler with an atomic event is that if the second
> event does not come right away, we can probably discard the first event
> and report an error. But if we join them by cookie and the second event
> does not come, it's not clear how to do the error handling and how long
> the guest driver should wait for the second event to arrive.
>
> With the notion of enforcing atomicity, I am concerned about some
> deadlock/starvation possibility where you can't get two elements
> together at all. For example, some guest driver decides to instantiate
> the queue with only 1 element.
>
> So is it doable? Most likely it is. Still, I am not sure we should try
> to enforce atomicity. Apart from error handling, I am unable to see
> what other issues exist with non-atomic joined events. Maybe you can
> help me understand better why requiring atomicity is a better option.
>

As a UAPI, the cookie interface is unreliable: users don't know how long
they would need to wait for the other half, or how many pending rename
states to maintain. Generally, most applications got it wrong and assumed
too many things about ordering and such.

As an internal virtio protocol, you can do whatever ends up being easier
for you, as long as you are able to call the vfs fsnotify() API with
both the src and dst dirs.

My feeling is that it is easier to queue the two elements together.
I think it is possible to prevent starvation with a reservation scheme:
for example, allocate N+1 slots, but treat the queue as "full" when
there are N elements in it; only a two-element push may dip into the
+1 reserved slot, and only when the queue is not yet "full".
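
A minimal sketch of that accounting (the real virtqueue bookkeeping is
different; this just shows the invariant):

  #include <stdbool.h>

  struct event_queue {
      unsigned int used;   /* elements currently queued */
      unsigned int slots;  /* N + 1: one slot kept in reserve */
  };

  static bool can_push(const struct event_queue *q, unsigned int n)
  {
      /* the queue reports "full" once N (= slots - 1) are used */
      if (q->used >= q->slots - 1)
          return false;
      /* not "full": a single push stays within N; a two-element
       * push may end at N + 1, dipping into the reserved slot, so
       * a joined pair can never be starved by single events */
      return q->used + n <= q->slots;
  }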

You can try it one way or the other and choose for yourself.

Thanks,
Amir.



