1. 02 Apr 2018, 2 commits
    • nbd: trace meta context negotiation · 2b53af25
      Eric Blake committed
      Having a more detailed log of the interaction between client and
      server is invaluable in debugging how meta context negotiation
      actually works.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20180330130950.1931229-1-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • nbd/client: Correctly handle bad server REP_META_CONTEXT · 260e34db
      Eric Blake committed
      It's never a good idea to blindly read for size bytes as
      returned by the server without first validating that the size
      is within bounds; a malicious or buggy server could cause us
      to hang or get out of sync from reading further messages.
      
      It may be smarter to try and teach the client to cope with
      unexpected context ids by silently ignoring them instead of
      hanging up on the server, but for now, if the server doesn't
      reply with exactly the one context we expect, it's easier to
      just give up - however, if we give up for any reason other
      than an I/O failure, we might as well try to politely tell
      the server we are quitting rather than continuing.
      
      Fix some typos in the process.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20180329231837.1914680-1-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
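The bounds check this commit describes can be sketched as follows. This is illustrative C, not qemu's actual nbd/client.c code, and MAX_CONTEXT_NAME is an assumed cap standing in for whatever limit the client actually applies:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical cap on a meta-context name; qemu's real limit may
 * differ, but the NBD spec keeps protocol strings small. */
#define MAX_CONTEXT_NAME 4096

/* Validate the server's stated reply length for NBD_REP_META_CONTEXT
 * before reading that many bytes.  The payload is a 4-byte context id
 * followed by the context name, so anything shorter than 4 bytes, or
 * with an absurdly long name, is rejected instead of being read
 * blindly (which could hang the client or desynchronize the stream). */
static int rep_meta_context_len_ok(uint32_t reply_len)
{
    return reply_len >= sizeof(uint32_t) &&
           reply_len - sizeof(uint32_t) <= MAX_CONTEXT_NAME;
}
```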
  2. 14 Mar 2018, 10 commits
  3. 06 Mar 2018, 1 commit
  4. 02 Mar 2018, 2 commits
  5. 09 Feb 2018, 1 commit
  6. 26 Jan 2018, 1 commit
  7. 18 Jan 2018, 6 commits
  8. 11 Jan 2018, 1 commit
  9. 10 Jan 2018, 1 commit
  10. 08 Jan 2018, 2 commits
    • nbd/server: Optimize final chunk of sparse read · e2de3256
      Eric Blake committed
      If we are careful to handle 0-length read requests correctly,
      we can optimize our sparse read to send the NBD_REPLY_FLAG_DONE
      bit on our last OFFSET_DATA or OFFSET_HOLE chunk rather than
      needing a separate chunk.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20171107030912.23930-3-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
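The optimization can be sketched with a helper like this (an illustrative function, not qemu's API):

```c
#include <assert.h>
#include <stdint.h>

/* Flag value from the NBD structured-reply extension. */
#define NBD_REPLY_FLAG_DONE (1 << 0)

/* Pick the reply flags for a sparse-read chunk covering
 * [offset, offset + chunk_len).  The final OFFSET_DATA/OFFSET_HOLE
 * chunk carries NBD_REPLY_FLAG_DONE itself, so no trailing
 * NBD_REPLY_TYPE_NONE chunk is needed.  Comparing against the request
 * end (rather than "bytes remaining") also keeps a 0-length request
 * correct: its single reply is always final. */
static uint16_t sparse_chunk_flags(uint64_t offset, uint64_t chunk_len,
                                   uint64_t request_end)
{
    return offset + chunk_len == request_end ? NBD_REPLY_FLAG_DONE : 0;
}
```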
    • nbd/server: Implement sparse reads atop structured reply · 418638d3
      Eric Blake committed
      The reason that NBD added structured reply in the first place was
      to allow for efficient reads of sparse files, by allowing the
      reply to include chunks to quickly communicate holes to the client
      without sending lots of zeroes over the wire.  Time to implement
      this in the server; our client can already read such data.
      
      We can only skip holes insofar as the block layer can query them;
      and only if the client is okay with a fragmented request (if a
      client requests NBD_CMD_FLAG_DF and the entire read is a hole, we
      could technically return a single NBD_REPLY_TYPE_OFFSET_HOLE, but
      that's a fringe case not worth catering to here).  Sadly, the
      control flow is a bit wonkier than I would have preferred, but
      it was minimally invasive to have a split in the action between
      a fragmented read (handled directly where we recognize
      NBD_CMD_READ with the right conditions, and sending multiple
      chunks) vs. a single read (handled at the end of nbd_trip, for
      both simple and structured replies, when we know there is only
      one thing being read).  Likewise, I didn't make any effort to
      optimize the final chunk of a fragmented read to set the
      NBD_REPLY_FLAG_DONE, but unconditionally send that as a separate
      NBD_REPLY_TYPE_NONE.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20171107030912.23930-2-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
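The fragmented-read idea can be sketched like this. It is a simplification, not qemu's code: hole_map stands in for the block layer's allocation query (bdrv_block_status in qemu), reduced here to byte granularity:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Chunk kinds standing in for NBD_REPLY_TYPE_OFFSET_DATA/_HOLE. */
enum { CHUNK_DATA, CHUNK_HOLE };

struct chunk { int type; uint64_t offset; uint64_t len; };

/* Walk [offset, offset + len) and emit one chunk per maximal data or
 * hole run, so holes cross the wire as a type/offset/length triple
 * instead of a buffer of zeroes.  hole_map[i] != 0 marks byte i as a
 * hole.  Returns the number of chunks written to out[]. */
static size_t split_sparse_read(const uint8_t *hole_map, uint64_t offset,
                                uint64_t len, struct chunk *out, size_t max)
{
    size_t n = 0;
    uint64_t end = offset + len;

    while (offset < end && n < max) {
        int hole = hole_map[offset] != 0;
        uint64_t run = offset;

        /* Extend the run while the allocation status stays the same. */
        while (run < end && (hole_map[run] != 0) == hole) {
            run++;
        }
        out[n].type = hole ? CHUNK_HOLE : CHUNK_DATA;
        out[n].offset = offset;
        out[n].len = run - offset;
        n++;
        offset = run;
    }
    return n;
}
```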
  11. 28 Nov 2017, 2 commits
    • nbd/server: CVE-2017-15118 Stack smash on large export name · 51ae4f84
      Eric Blake committed
      Introduced in commit f37708f6 (2.10).  The NBD spec says a client
      can request export names up to 4096 bytes in length, even though
      they should not expect success on names longer than 256.  However,
      qemu hard-codes the limit of 256, and fails to filter out a client
      that probes for a longer name; the result is a stack smash that can
      potentially give an attacker arbitrary control over the qemu
      process.
      
      The smash can be easily demonstrated with this client:
      $ qemu-io -f raw nbd://localhost:10809/$(printf %3000d 1 | tr ' ' a)
      
      If the qemu NBD server binary (whether the standalone qemu-nbd, or
      the builtin server of QMP nbd-server-start) was compiled with
      -fstack-protector-strong, the ability to exploit the stack smash
      into arbitrary execution is a lot more difficult (but still
      theoretically possible to a determined attacker, perhaps in
      combination with other CVEs).  Still, crashing a running qemu (and
      losing the VM) is bad enough, even if the attacker did not obtain
      full execution control.
      
      CC: qemu-stable@nongnu.org
      Signed-off-by: Eric Blake <eblake@redhat.com>
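The spirit of the fix is a length check before the read, rather than after the copy. An illustrative sketch (not qemu's exact code):

```c
#include <assert.h>
#include <stdint.h>

/* qemu's hard-coded limit on export names (256 bytes); the NBD spec
 * lets a client send a name up to 4096 bytes, so the client-stated
 * length must be checked before the name is read into a fixed-size
 * stack buffer. */
#define QEMU_NBD_MAX_NAME 256

/* Reject an oversized name length up front, instead of copying
 * namelen bytes into a 256-byte buffer and smashing the stack. */
static int export_namelen_ok(uint32_t namelen)
{
    return namelen <= QEMU_NBD_MAX_NAME;
}
```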
    • nbd/server: CVE-2017-15119 Reject options larger than 32M · fdad35ef
      Eric Blake committed
      The NBD spec gives us permission to abruptly disconnect on clients
      that send outrageously large option requests, rather than having
      to spend the time reading to the end of the option.  No real
      option request requires that much data anyways; and meanwhile, we
      already have the practice of abruptly dropping the connection on
      any client that sends NBD_CMD_WRITE with a payload larger than 32M.
      
      For comparison, nbdkit drops the connection on any request with
      more than 4096 bytes; however, that limit is probably too low
      (as the NBD spec states an export name can theoretically be up
      to 4096 bytes, which means a valid NBD_OPT_INFO could be even
      longer) - even if qemu doesn't permit export names longer than
      256 bytes.
      
      It could be argued that a malicious client trying to get us to
      read nearly 4G of data on a bad request is a form of denial of
      service.  In particular, if the server requires TLS, but a client
      that does not know the TLS credentials sends any option (other
      than NBD_OPT_STARTTLS or NBD_OPT_EXPORT_NAME) with a stated
      payload of nearly 4G, then the server was keeping the connection
      alive trying to read all the payload, tying up resources that it
      would rather be spending on a client that can get past the TLS
      handshake.  Hence, this warranted a CVE.
      
      Present since at least 2.5 when handling known options, and made
      worse in 2.6 when fixing support for NBD_FLAG_C_FIXED_NEWSTYLE
      to handle unknown options.
      
      CC: qemu-stable@nongnu.org
      Signed-off-by: Eric Blake <eblake@redhat.com>
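The gate this commit adds can be sketched as a single comparison against the cap qemu already uses for write payloads (an illustrative helper, not qemu's exact code):

```c
#include <assert.h>
#include <stdint.h>

/* The same 32M cap qemu already applies to NBD_CMD_WRITE payloads. */
#define NBD_MAX_BUFFER_SIZE (32 * 1024 * 1024)

/* An option header whose stated payload exceeds 32M gets the
 * connection dropped immediately, rather than the server reading up
 * to ~4G of junk just to reach the next option -- which matters most
 * for TLS-required servers fielding options from unauthenticated
 * clients. */
static int option_payload_ok(uint32_t optlen)
{
    return optlen <= NBD_MAX_BUFFER_SIZE;
}
```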
  12. 17 Nov 2017, 3 commits
    • nbd/server: Fix error reporting for bad requests · fed5f8f8
      Eric Blake committed
      The NBD spec says an attempt to NBD_CMD_TRIM on a read-only
      export should fail with EPERM, as a trim has the potential
      to change disk contents, but we were relying on the block
      layer to catch that for us, which might not always give the
      right error (and even if it does, it does not let us pass
      back a sane message for structured replies).
      
      The NBD spec says an attempt to NBD_CMD_WRITE_ZEROES out of
      bounds should fail with ENOSPC, not EINVAL.
      
      Our check for u64 offset + u32 length wraparound up front is
      pointless; nothing uses offset until after the second round
      of sanity checks, and we can just as easily ensure there is
      no wraparound by checking whether offset is in bounds (since
      a disk size cannot exceed off_t which is 63 bits, adding a
      32-bit number for a valid offset can't overflow).  Bonus:
      dropping the up-front check lets us keep the connection alive
      after NBD_CMD_WRITE, whereas before we would drop the
      connection (of course, any client sending a packet that would
      trigger the failure is already buggy, so it's also okay to
      drop the connection, but better quality-of-implementation
      never hurts).
      
      Solve all of these issues by some code motion and improved
      request validation.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20171115213557.3548-1-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
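The reordered validation can be sketched like this. It is an illustrative condensation, not qemu's exact code or check order:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Because a disk size fits in off_t (at most 63 bits) and len is 32
 * bits, checking offset against the export size first makes
 * offset + len incapable of wrapping uint64_t, so no separate
 * up-front wraparound test is needed.  Error codes follow the NBD
 * spec: ENOSPC for out-of-bounds requests, EPERM for write-like
 * commands (WRITE, TRIM, WRITE_ZEROES) on a read-only export. */
static int nbd_request_check(uint64_t exp_size, uint64_t offset,
                             uint32_t len, int write_like, int read_only)
{
    if (offset > exp_size || len > exp_size - offset) {
        return -ENOSPC;
    }
    if (write_like && read_only) {
        return -EPERM;
    }
    return 0;
}
```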
    • nbd/client: Don't hard-disconnect on ESHUTDOWN from server · 01b05c66
      Eric Blake committed
      The NBD spec says that a server may fail any transmission request
      with ESHUTDOWN when it is apparent that no further request from
      the client can be successfully honored.  The client is supposed
      to then initiate a soft shutdown (wait for all remaining in-flight
      requests to be answered, then send NBD_CMD_DISC).  However, since
      qemu's server never uses ESHUTDOWN errors, this code was mostly
      untested since its introduction in commit b6f5d3b5.
      
      More recently, I learned that nbdkit as the NBD server is able to
      send ESHUTDOWN errors, so I finally tested this code, and noticed
      that our client was special-casing ESHUTDOWN to cause a hard
      shutdown (immediate disconnect, with no NBD_CMD_DISC), but only
      if the server sends this error as a simple reply.  Further
      investigation found that commit d2febedb introduced a regression
      where structured replies behave differently than simple replies -
      but that the structured reply behavior is more in line with the
      spec (even if we still lack code in nbd-client.c to properly quit
      sending further requests).  So this patch reverts the portion of
      b6f5d3b5 that introduced an improper hard-disconnect special-case
      at the lower level, and leaves the future enhancement of a nicer
      soft-disconnect at the higher level for another day.
      
      CC: qemu-stable@nongnu.org
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20171113194857.13933-1-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
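The intended policy can be summarized in a small sketch (illustrative only; qemu's nbd-client.c does not yet implement the higher-level soft shutdown this commit leaves for later):

```c
#include <assert.h>
#include <errno.h>

enum nbd_client_action {
    NBD_ACTION_CONTINUE,        /* report the error for this request only */
    NBD_ACTION_SOFT_DISCONNECT  /* drain in-flight requests, then NBD_CMD_DISC */
};

/* Per the NBD spec, ESHUTDOWN on a reply means "no further requests
 * will succeed": the client should stop issuing new requests, wait
 * for in-flight ones, and then send NBD_CMD_DISC.  The reverted bug
 * was treating it as grounds for an immediate hard disconnect at the
 * reply-parsing layer. */
static enum nbd_client_action on_server_error(int err)
{
    return err == ESHUTDOWN ? NBD_ACTION_SOFT_DISCONNECT
                            : NBD_ACTION_CONTINUE;
}
```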
    • nbd/client: Use error_prepend() correctly · cb6b1a3f
      Eric Blake committed
      When using error_prepend(), it is necessary to end with a space
      in the format string; otherwise, messages come out incorrectly,
      such as when connecting to a socket that hangs up immediately:
      
      can't open device nbd://localhost:10809/: Failed to read dataUnexpected end-of-file before all bytes were read
      
      Originally botched in commit e44ed99d, then several more instances
      added in the meantime.
      
      Pre-existing and not fixed here: we are inconsistent on capitalization;
      some of our messages start with lower case, and others start with upper,
      although the use of error_prepend() is much nicer to read when all
      fragments consistently start with lower.
      
      CC: qemu-stable@nongnu.org
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20171113152424.25381-1-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
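The effect is easy to reproduce with a minimal stand-in for error_prepend() (qemu's real function operates on an Error object; this sketch only shows the string concatenation):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Prepend a fragment to an existing error message.  If the fragment
 * does not end with a separator such as ": ", the two messages run
 * together, exactly as in the garbled output quoted above. */
static void prepend_msg(char *out, size_t outsize,
                        const char *fragment, const char *old_msg)
{
    snprintf(out, outsize, "%s%s", fragment, old_msg);
}
```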
  13. 10 Nov 2017, 3 commits
  14. 09 Nov 2017, 1 commit
    • nbd/server: fix nbd_negotiate_handle_info · 46321d6b
      Vladimir Sementsov-Ogievskiy committed
      namelen should be used here; length is unrelated, and always 0 at this
      point.  Broken since its introduction in commit f37708f6, but mostly
      harmless (replying with '' as the name does not violate protocol,
      and does not confuse qemu as the nbd client since our implementation
      does not ask for the name; but might confuse some other client that
      does ask for the name especially if the default export is different
      than the export name being queried).
      
      Adding an assert makes it obvious that we are not skipping any bytes
      in the client's message, as well as making it obvious that we were
      using the wrong variable.
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      CC: qemu-stable@nongnu.org
      Message-Id: <20171101154204.27146-1-vsementsov@virtuozzo.com>
      [eblake: improve commit message, squash in assert addition]
      Signed-off-by: Eric Blake <eblake@redhat.com>
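The consistency check behind the added assert can be sketched as an accounting of the option payload. Illustrative only; the layout shown is the NBD_OPT_INFO/NBD_OPT_GO wire format, but the function is not qemu's code:

```c
#include <assert.h>
#include <stdint.h>

/* The NBD_OPT_INFO/NBD_OPT_GO payload is: a 4-byte name length,
 * namelen bytes of name, a 2-byte request count, then 2 bytes per
 * requested info item.  Accounting for every byte of the option
 * length makes it obvious when the wrong variable (length vs.
 * namelen) is being used, and that no client bytes are skipped. */
static int info_payload_consistent(uint32_t length, uint32_t namelen,
                                   uint16_t n_requests)
{
    return length == sizeof(uint32_t) + namelen +
                     sizeof(uint16_t) + n_requests * sizeof(uint16_t);
}
```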
  15. 31 Oct 2017, 4 commits