[Virtio-fs] (no subject)
Hanna Czenczek
hreitz at redhat.com
Mon Oct 9 09:13:33 UTC 2023
On 09.10.23 11:07, Hanna Czenczek wrote:
> On 09.10.23 10:21, Hanna Czenczek wrote:
>> On 07.10.23 04:22, Yajun Wu wrote:
>
> [...]
>
>>> The main motivation for adding VHOST_USER_SET_STATUS is to let the
>>> DPDK back-end know when the DRIVER_OK bit is valid. It is an
>>> indication that all VQ configuration has been sent; otherwise, DPDK
>>> has to rely on the first queue pair becoming ready, and then receive
>>> and apply the VQ configuration one by one.
>>>
>>> During live migration, configuring VQs one by one is very
>>> time-consuming.
>>
>> One question I have here is why it wasn’t then introduced in the live
>> migration code, but in the general VM stop/cont code instead. It does
>> seem time-consuming to do this every time the VM is paused and resumed.
>>
>>> For VIRTIO net vDPA, the HW needs to know how many VQs are enabled
>>> in order to set up RSS (Receive-Side Scaling).
>>>
>>> If you don’t want the SET_STATUS message, the back-end can drop the
>>> VHOST_USER_PROTOCOL_F_STATUS protocol feature bit.
>>
>> The problem isn’t back-ends that don’t want the message; the problem
>> is that qemu uses the message wrongly, which prevents well-behaving
>> back-ends from implementing it.
>>
>>> DPDK ignores SET_STATUS 0, and instead uses GET_VRING_BASE to do the
>>> device close/reset.
>>
>> So the right thing to do for back-ends is to announce STATUS support
>> and then not implement it correctly?
>>
>> GET_VRING_BASE should not close or reset the device, by the
>> way. It should stop that one vring, not more. We have a
>> RESET_DEVICE command for resetting.
>>
>>> I was not involved in the discussion about adding SET_STATUS to the
>>> vhost protocol. This feature is essential for vDPA (just as
>>> vhost-vdpa implements VHOST_VDPA_SET_STATUS).
>>
>> So what I gather from your response is that there is only a
>> single use for SET_STATUS, which is the DRIVER_OK bit. If so,
>> documenting that all other bits are to be ignored by both back-end
>> and front-end would be fine by me.
>>
>> I’m not fully serious about that suggestion, but I hear the strong
>> implication that nothing but DRIVER_OK was of any concern, and this
>> is really important to note when we talk about the status of the
>> STATUS feature in vhost today. It seems to me now that it was not
>> intended to be the virtio-level status byte, but just a DRIVER_OK
>> signalling path from front-end to back-end. That makes it a
>> vhost-level protocol feature to me.
>
> On second thought, it just is a pure vhost-level protocol feature, and
> has nothing to do with the virtio status byte as-is. The only stated
> purpose is for the front-end to send DRIVER_OK after migration, but
> migration is transparent to the guest, so the guest would never change
> the status byte during migration. Therefore, if this feature is
> essential, we will never be able to have a status byte that is
> transparently shared between guest and back-end device, i.e. the
> virtio status byte.
On third thought, scratch that. The guest wouldn’t set it, but
naturally, after migration, the front-end will need to restore the
status byte from the source, so the front-end will always need to set
it, even if it were otherwise controlled only by the guest and the
back-end device. So technically, this doesn’t prevent such a use case.
(In practice, it isn’t controlled by the guest right now, but that could
be fixed.)
> Cc-ing Alex on this mail, because to me, this seems like an important
> detail when he plans on using the byte in the future. If we need a
> virtio status byte, I can’t see how we could use the existing F_STATUS
> for it.
>
> Hanna