[Libguestfs] [libnbd PATCH v2 3/6] api: Sanitize sizes larger than INT64_MAX

Eric Blake eblake at redhat.com
Thu Sep 21 20:58:02 UTC 2023


Our stable API has always claimed that nbd_get_size() reports a
non-negative value on success, and -1 on failure.  While we know of no
existing production server (such as nbdkit, qemu-nbd, nbd-server) that
would advertise a size larger than off_t, the NBD spec has not yet
committed to enforcing a hard maximum of an export size as a signed
integer; at present, the protocol mandates merely:
  S: 64 bits, size of the export in bytes (unsigned)

I've considered proposing a spec patch to upstream NBD to require a
positive signed value, and could point to this patch to help justify
such a change - but that hasn't happened yet.  Thus, it should be no
surprise that our fuzzer was quickly able to emulate a theoretical
server that claims a size larger than INT64_MAX - at which point,
nbd_get_size() is returning a negative value that is not -1, and the
API documentation was unclear whether the application should treat
this as success or failure.  However, as we did not crash, the fuzzer
did not flag it as interesting in v1.16.

I considered changing nbd_internal_set_size_and_flags() to move the
state machine to DEAD for any server advertising something so large
(that is, nbd_connect_*() would be unable to connect to such a
server); but without the NBD spec mandating such a limit, that is an
artificial constraint.  Instead, this patch chooses to normalize all
wire values that can't fit in the int64_t return type to an EOVERFLOW
error, and clarifies the API documentation accordingly.  Existing
clients that depend on a known positive size and check for
nbd_get_size()<0 are not impacted, while those that check for ==-1
will now reject such uncommon servers instead of potentially using a
negative value in subsequent computations (the latter includes
nbdinfo, which changes from reporting a negative size to instead
printing an error message).  Meanwhile, clients that never called
nbd_get_size() (presumably because they infer the size from other
means, or because they intend to access offsets above 2^63 despite not
being able to reliably see that size from nbd_get_size) can still do
so.

Next, I audited all instances of 'exportsize', to see if libnbd itself
has any arithmetic issues when a large size is stored.  My results for
v1.16 are as follows:

lib/nbd-protocol.h and lib/internal.h both have uint64_t exportsize
(so we are already treating size as unsigned both for wire and
internal representation - no nasty surprises with sign extension).

generator/states-{oldstyle,opt-{go,export-name}}.c grab exportsize off
the wire, byteswap if needed, and pass it to
nbd_internal_set_size_and_flags() without further inspection.

lib/flags.c has nbd_internal_set_size_and_flags() which stores the
wire value into the internal value, then nbd_unlocked_get_size() which
reports the wire value as-is (prior to this patch adding EOVERFLOW
normalization). nbd_internal_set_block_size() mentions exportsize in
comments, but does not actually compare against it.

lib/rw.c added LIBNBD_STRICT_BOUNDS checking in v1.6 via:
    if ((h->strict & LIBNBD_STRICT_BOUNDS) &&
        (offset > h->exportsize || count > h->exportsize - offset)) {
where offset and count are also unsigned values (count is 32-bit in
1.16, 64-bit in master); but the order of comparisons is robust
against wraparound when using unsigned math.  Earlier releases, or
clients which use nbd_set_strict_mode(,0) to skip bounds checking,
have been assuming the server will reject requests with a weird
offset (most servers do, although the NBD spec only recommends
that server behavior, by stating that the burden is on the client
to pass in-bounds requests in the first place).

generator/states-reply-chunk.c added comparisons against exportsize
for NBD_OPT_BLOCK_STATUS only after 1.16, and that's the subject of
the next patch.

No other direct uses of exportsize exist, so outside of
nbd_get_size(), libnbd itself can internally handle sizes larger than
2^63 without misbehaving, even if such an offset was computed by
taking the negative int64_t value from nbd_get_size() and converting
it back into a uint64_t offset parameter.

I also audited our other APIs - everything else uses a parameter of
type UInt64 in the generator for offsets, which is translated in C.ml
to uint64_t; and we already know we are safe when C converts negative
int64_t values into uint64_t.

For other languages, Python.ml generates code for UInt64 by using
'PyArg_ParseTuple ("K")' with a detour through 'unsigned long long' in
case 'uint64_t' is a different type rank despite being the same number
of bits.  The Python documentation states that "K" converts an
arbitrary python integer value to a C uint64_t without overflow
checking - so it is already possible to pass offset values larger than
2^63 in nbdsh; while values larger than 2^64 or negative values are
effectively truncated as if with modulo math.  Enhancing the language
bindings to explicitly detect over/underflow is outside the scope of
this patch (and could surprise users who were depending on the current
truncation semantics).

GoLang.ml generates UInt64 via native Go 'uint64' passed through
'C.uint64_t()', and Rust.ml generates UInt64 via native Rust 'u64'
interpreted as C uint64_t.  I am unsure whether those languages
(which have tighter type rules than C) let you directly assign a
negative value to the native type when you really want a positive
value over 2^63; but since each is a direct map of an unsigned 64-bit
value between the native type and C, there should be no surprises to
people fluent in those languages.

OCaml.ml is a bit different; as OCaml lacks a native unsigned 64-bit
type, it generates UInt64 as native 'int64' converted to C via
'Int64_val()'.  Thus, an OCaml client MUST pass a negative value if
it wants to access offsets beyond 2^63.  But again, someone fluent in
the language should be aware of that limitation.

Finally to demonstrate the difference in this patch, I temporarily
applied this patch to qemu (here, on top of qemu commit 49076448):

| diff --git i/nbd/server.c w/nbd/server.c
| index b5f93a20c9c..228ce66ed2b 100644
| --- i/nbd/server.c
| +++ w/nbd/server.c
| @@ -691,7 +691,7 @@ static int nbd_negotiate_handle_info(NBDClient *client, Error **errp)
|          myflags |= NBD_FLAG_SEND_DF;
|      }
|      trace_nbd_negotiate_new_style_size_flags(exp->size, myflags);
| -    stq_be_p(buf, exp->size);
| +    stq_be_p(buf, exp->size | 0x8000000000000000ULL);
|      stw_be_p(buf + 8, myflags);
|      rc = nbd_negotiate_send_info(client, NBD_INFO_EXPORT,
|                                   sizeof(buf), buf, errp);

then with a just-built 'qemu-nbd -f raw -r file -t &' running, we have
pre-patch:

$ nbdsh --base -u nbd://localhost -c - <<\EOF
> try:
>  print(h.get_size())
> except nbd.Error as ex:
>  print(ex.string)
> EOF
-9223372036854770176

vs. post-patch:

$ ./run nbdsh --base -u nbd://localhost -c - <<\EOF
> try:
>  print(h.get_size())
> except nbd.Error as ex:
>  print(ex.string)
> EOF
nbd_get_size: server claims size 9223372036854781440 which does not fit in signed result: Value too large for defined data type

A more complex patch to qemu to mask that bit back off from the offset
parameter to NBD_CMD_READ/WRITE is also possible to explore behavior
of passing large offsets over the wire, although I don't show it here.

All stable releases of libnbd have had this return value issue.  We
cannot guarantee whether clients may have their own arithmetic bugs
(such as treating the size as signed, then entering an infinite loop
when using a negative size as a loop bound), so we will be issuing a
security notice in case client apps need to file their own CVEs.
However, since no known production server produces sizes that
large, and our audit shows that all stable releases of libnbd
gracefully handle large offsets even when a client converts a negative
int64_t result of nbd_get_size() back into large uint64_t offset
values in subsequent API calls, we did not deem it high enough risk to
issue a CVE against libnbd proper at this time, although we have
reached out to Red Hat's secalert team to see if revisiting that
decision might be warranted.

Based on recent IRC chatter, there is also a slight possibility that
some future extension to the NBD protocol could specifically allow
clients to opt in to an extension where the server reports an export
size of 2^64-1 (all ones) for a unidirectional connection where
offsets are ignored (either a read-only export of indefinite length,
or an append-only export data sink - either way, basically turning NBD
into a cross-system FIFO rather than a seekable device); if such an
extension materializes, we'd probably add a named constant negative
sentinel value distinct from -1 for actual return from nbd_get_size()
at that time.

Fixes: 40881fce75 ("lib: Expose flags and export size through the API.", v0.1)
Signed-off-by: Eric Blake <eblake at redhat.com>
---
 generator/API.ml | 6 +++++-
 lib/flags.c      | 6 ++++++
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/generator/API.ml b/generator/API.ml
index 14988403..c4547615 100644
--- a/generator/API.ml
+++ b/generator/API.ml
@@ -2492,7 +2492,11 @@   "get_size", {
     permitted_states = [ Negotiating; Connected; Closed ];
     shortdesc = "return the export size";
     longdesc = "\
-Returns the size in bytes of the NBD export."
+Returns the size in bytes of the NBD export.
+
+Note that this call fails with C<EOVERFLOW> for an unlikely
+server that advertises a size which cannot fit in a 64-bit
+signed integer."
 ^ non_blocking_test_call_description;
     see_also = [SectionLink "Size of the export"; Link "opt_info"];
     example = Some "examples/get-size.c";
diff --git a/lib/flags.c b/lib/flags.c
index 7e6ddedd..394abe87 100644
--- a/lib/flags.c
+++ b/lib/flags.c
@@ -253,6 +253,12 @@ nbd_unlocked_get_size (struct nbd_handle *h)
     return -1;
   }

+  if (h->exportsize > INT64_MAX) {
+    set_error (EOVERFLOW, "server claims size %" PRIu64
+               " which does not fit in signed result", h->exportsize);
+    return -1;
+  }
+
   return h->exportsize;
 }

-- 
2.41.0


