
[Libguestfs] [nbdkit PATCH 2/2] connections: Hang up early on insanely large WRITE requests



We have logic to send an ENOMEM error to a client that attempts an
NBD_CMD_WRITE with a payload larger than MAX_REQUEST_SIZE (64M),
but we still end up skipping over the client's payload in order to
stay in sync for receiving the next command.  If the bad request
is only slightly larger than our maximum, this is still reasonable
behavior; but a worst-case client could make us waste time
read()ing nearly 4G of data before we ever get to send our error
reply.

For a client that bad, it is better to just disconnect.  Even if
we wanted to be nice and send an error reply, we'd still be out of
sync for further reads, so the simplest option is to silently
disconnect.

Signed-off-by: Eric Blake <eblake redhat com>
---
 src/connections.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/src/connections.c b/src/connections.c
index d0ef6a5..8dc1925 100644
--- a/src/connections.c
+++ b/src/connections.c
@@ -879,6 +879,11 @@ skip_over_write_buffer (int sock, size_t count)
   char buf[BUFSIZ];
   ssize_t r;

+  if (count > MAX_REQUEST_SIZE * 2) {
+    nbdkit_error ("write request too large to skip");
+    return -1;
+  }
+
   while (count > 0) {
     r = read (sock, buf, count > BUFSIZ ? BUFSIZ : count);
     if (r == -1) {
-- 
2.13.6

