[Libguestfs] [PATCH v3 11/14] nbd/client: Accept 64-bit block status chunks

Eric Blake eblake at redhat.com
Wed May 31 17:40:08 UTC 2023


On Wed, May 31, 2023 at 08:00:43PM +0300, Vladimir Sementsov-Ogievskiy wrote:
> On 15.05.23 22:53, Eric Blake wrote:
> > Because we use NBD_CMD_FLAG_REQ_ONE with NBD_CMD_BLOCK_STATUS, a
> > client in narrow mode should not be able to provoke a server into
> > sending a block status result larger than the client's 32-bit request.
> > But in extended mode, a 64-bit status request must be able to handle a
> > 64-bit status result, once a future patch enables the client
> > requesting extended mode.  We can also tolerate a non-compliant server
> > sending the new chunk even when it should not.
> > 
> > @@ -672,7 +681,8 @@ static int nbd_parse_blockstatus_payload(BDRVNBDState *s,
> >        * connection; just ignore trailing extents, and clamp things to
> >        * the length of our request.
> >        */
> > -    if (chunk->length > sizeof(context_id) + sizeof(*extent)) {
> > +    if (count != wide ||
> 
> hard to read for me. Could it be simply "count > 1 ||" ?

For the existing (compact) replies, count is not transferred over the
wire, so we initialize it to 0.  For extended replies, count is
transferred over the wire, and because we use REQ_ONE we expect it to
be exactly 1 (and not 0).  Comparing count != wide is more precise
than a simple count > 1 check: it also flags a nonzero count for
compact (which should never happen) and a count of 0 for extended
(which would be a server bug).
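
To make the two cases concrete, here is a rough sketch of how that
check behaves (illustration only, not the qemu code; the helper name
and the printf are stand-ins for the real parse/trace logic):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /*
     * Sketch: 'wide' mirrors whether the chunk was
     * NBD_REPLY_TYPE_BLOCK_STATUS_EXT; 'count' stays 0 for compact
     * chunks (never on the wire) and holds the server-sent value for
     * extended chunks.
     */
    static void check_extent_count(bool wide, uint32_t count)
    {
        if (count != (uint32_t)wide) {
            /* compact: any nonzero count is our own bug;
             * extended: anything other than exactly 1 is the
             * server's bug.  Either way, worth a compliance trace. */
            printf("non-compliant extent count: wide=%d count=%u\n",
                   wide, count);
        }
    }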

> 
> > +        chunk->length > sizeof(context_id) + wide * sizeof(count) + len) {
> >           trace_nbd_parse_blockstatus_compliance("more than one extent");
> >       }
> >       if (extent->length > orig_length) {
> > @@ -1117,7 +1127,7 @@ static int coroutine_fn nbd_co_receive_cmdread_reply(BDRVNBDState *s, uint64_t h
> > 
> >   static int coroutine_fn nbd_co_receive_blockstatus_reply(BDRVNBDState *s,
> >                                                            uint64_t handle, uint64_t length,
> > -                                                         NBDExtent *extent,
> > +                                                         NBDExtentExt *extent,
> >                                                            int *request_ret, Error **errp)
> >   {
> >       NBDReplyChunkIter iter;
> > @@ -1125,6 +1135,7 @@ static int coroutine_fn nbd_co_receive_blockstatus_reply(BDRVNBDState *s,
> >       void *payload = NULL;
> >       Error *local_err = NULL;
> >       bool received = false;
> > +    bool wide = false;
> > 
> >       assert(!extent->length);
> >       NBD_FOREACH_REPLY_CHUNK(s, iter, handle, false, NULL, &reply, &payload) {
> > @@ -1134,7 +1145,13 @@ static int coroutine_fn nbd_co_receive_blockstatus_reply(BDRVNBDState *s,
> >           assert(nbd_reply_is_structured(&reply));
> > 
> >           switch (chunk->type) {
> > +        case NBD_REPLY_TYPE_BLOCK_STATUS_EXT:
> > +            wide = true;
> > +            /* fallthrough */
> >           case NBD_REPLY_TYPE_BLOCK_STATUS:
> > +            if (s->info.extended_headers != wide) {
> > +                trace_nbd_extended_headers_compliance("block_status");
> 
> You set wide to true once, on the first "NBD_REPLY_TYPE_BLOCK_STATUS_EXT", and then parse any following "NBD_REPLY_TYPE_BLOCK_STATUS" chunks as if they came with wide=true.
> 
> Should it be:
> 
> --- a/block/nbd.c
> +++ b/block/nbd.c
> @@ -1135,7 +1135,7 @@ static int coroutine_fn nbd_co_receive_blockstatus_reply(BDRVNBDState *s,
>      void *payload = NULL;
>      Error *local_err = NULL;
>      bool received = false;
> -    bool wide = false;
> +    bool wide;
>      assert(!extent->length);
>      NBD_FOREACH_REPLY_CHUNK(s, iter, handle, false, NULL, &reply, &payload) {
> @@ -1146,9 +1146,8 @@ static int coroutine_fn nbd_co_receive_blockstatus_reply(BDRVNBDState *s,
>          switch (chunk->type) {
>          case NBD_REPLY_TYPE_BLOCK_STATUS_EXT:
> -            wide = true;
> -            /* fallthrough */
>          case NBD_REPLY_TYPE_BLOCK_STATUS:
> +            wide = chunk->type == NBD_REPLY_TYPE_BLOCK_STATUS_EXT;
>              if (s->info.extended_headers != wide) {

Good observation, since we reach this multiple times in a loop.  I'm
squashing that in.
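
For reference, the squashed result looks roughly like this (paraphrased
from the hunks above, not a verbatim copy of the final patch):

        switch (chunk->type) {
        case NBD_REPLY_TYPE_BLOCK_STATUS_EXT:
        case NBD_REPLY_TYPE_BLOCK_STATUS:
            /* Recomputed for every chunk in the loop, instead of
             * latching to true after the first _EXT chunk. */
            wide = chunk->type == NBD_REPLY_TYPE_BLOCK_STATUS_EXT;
            if (s->info.extended_headers != wide) {
                trace_nbd_extended_headers_compliance("block_status");
            }
            /* ... continue parsing the payload as before ... */
            break;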

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org
