1. 01 Dec 2018, 1 commit
  2. 20 Nov 2018, 2 commits
  3. 30 Oct 2018, 1 commit
  4. 04 Oct 2018, 1 commit
  5. 03 Oct 2018, 2 commits
    • nbd/server: fix NBD_CMD_CACHE · 2f454def
      Vladimir Sementsov-Ogievskiy committed
      We should not take the structured-read branch for the CACHE command; fix that.
      
      The bug was introduced in bc37b06a "nbd/server: introduce NBD_CMD_CACHE",
      together with the whole feature, and affects the 3.0.0 release.
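
      The shape of the fix can be sketched roughly as below. The helper, its
      parameters, and main() are assumptions invented for illustration, not
      QEMU's actual code; only the NBD_CMD_* values come from the NBD protocol
      specification.

      #include <stdbool.h>
      #include <stdint.h>

      /* NBD command codes, values per the NBD protocol specification. */
      enum {
          NBD_CMD_READ  = 0,
          NBD_CMD_CACHE = 5,
      };

      /* Illustrative helper, not QEMU's code: only NBD_CMD_READ may take the
       * chunked (structured-read) reply path.  NBD_CMD_CACHE merely asks the
       * server to prefetch data and must get a reply without data chunks. */
      static bool takes_structured_read_path(bool structured_reply,
                                             uint16_t type, uint64_t len)
      {
          return structured_reply && len > 0 && type == NBD_CMD_READ;
      }

      int main(void)
      {
          /* READ may go chunked; CACHE must not, even with structured reply on. */
          return takes_structured_read_path(true, NBD_CMD_READ, 4096) &&
                 !takes_structured_read_path(true, NBD_CMD_CACHE, 4096) ? 0 : 1;
      }
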
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      CC: qemu-stable@nongnu.org
      Message-Id: <20181003144738.70670-1-vsementsov@virtuozzo.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      [eblake: commit message typo fix]
      Signed-off-by: Eric Blake <eblake@redhat.com>
    • nbd: Don't take address of fields in packed structs · 80c7c2b0
      Peter Maydell committed
      Taking the address of a field in a packed struct is a bad idea, because
      it might not actually be aligned enough for that pointer type (and
      thus cause a crash on dereference on some host architectures). Newer
      versions of clang warn about this. Avoid the bug by not using the
      "modify in place" byte-swapping functions.
      
      This patch was produced with the following spatch script:
      @@
      expression E;
      @@
      -be16_to_cpus(&E);
      +E = be16_to_cpu(E);
      @@
      expression E;
      @@
      -be32_to_cpus(&E);
      +E = be32_to_cpu(E);
      @@
      expression E;
      @@
      -be64_to_cpus(&E);
      +E = be64_to_cpu(E);
      @@
      expression E;
      @@
      -cpu_to_be16s(&E);
      +E = cpu_to_be16(E);
      @@
      expression E;
      @@
      -cpu_to_be32s(&E);
      +E = cpu_to_be32(E);
      @@
      expression E;
      @@
      -cpu_to_be64s(&E);
      +E = cpu_to_be64(E);
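
      For context, a minimal self-contained illustration of the pattern being
      replaced follows; the struct, the be32_to_cpu_like() helper, and main()
      are made up for this sketch and are not part of the patch.

      #include <stdint.h>
      #include <stdio.h>

      /* Minimal illustration (not QEMU code) of the problem being fixed.  In a
       * packed struct, 'len' sits at offset 1, so &h.len is a misaligned
       * uint32_t* -- passing it to an in-place swap helper such as
       * be32_to_cpus() is undefined behaviour that newer clang diagnoses
       * (-Waddress-of-packed-member) and some hosts punish with a bus error. */
      struct hdr {
          uint8_t  opt;
          uint32_t len;
      } __attribute__((packed));

      /* Stand-in for QEMU's be32_to_cpu(); the unconditional swap here is only
       * right on a little-endian host, but the calling pattern is the point. */
      static uint32_t be32_to_cpu_like(uint32_t v)
      {
          return __builtin_bswap32(v);
      }

      int main(void)
      {
          struct hdr h = { .opt = 1, .len = 0x01000000 };

          /* Old pattern: be32_to_cpus(&h.len);  -- takes the address of a packed
           * member.  New pattern: load the value, swap it, store it back whole. */
          h.len = be32_to_cpu_like(h.len);
          printf("len = %u\n", h.len);   /* prints 1 on a little-endian host */
          return 0;
      }
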
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Message-Id: <20180927164200.15097-1-peter.maydell@linaro.org>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      [eblake: rebase, and squash in missed changes]
      Signed-off-by: Eric Blake <eblake@redhat.com>
  6. 27 Sep 2018, 1 commit
  7. 26 Sep 2018, 1 commit
    • nbd/server: fix bitmap export · 6545916d
      Vladimir Sementsov-Ogievskiy committed
      The bitmap_to_extents function is broken: it toggles the dirty variable
      after every iteration, even though one iteration may process only part
      of a dirty (or zero) area when that area is too large for a single
      extent.
      
      Fortunately, the bug doesn't produce wrong extent flags: it just inserts
      a zero-length extent between sequential extents representing a large
      dirty (or zero) area. However, zero-length extents are forbidden by the
      NBD protocol, so a careful client should treat such a reply as a server
      fault, while a less careful one will likely ignore the zero-length extents.
      
      The bug can only be triggered by a client that requests block status
      for nearly 4G at once (a request of 4G and larger is impossible per
      the protocol, and requests smaller than 4G less the bitmap granularity
      cause the loop to quit iterating rather than revisit the tail of the
      large area); it also cannot trigger if the client used the
      NBD_CMD_FLAG_REQ_ONE flag.  Since qemu 3.0 as client (using the
      x-dirty-bitmap extension) always passes the flag, it is immune; and
      we are not aware of other open-source clients that know how to request
      qemu:dirty-bitmap:FOO contexts.  Clients that want to avoid the bug
      could cap block status requests to a smaller length, such as 2G or 3G.
      
      Fix this by handling the dirty variable more carefully.
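
      A minimal sketch of what that more careful handling looks like, under the
      assumption of hypothetical stand-ins (emit_status(), next_state_change(),
      emit_extent(), MAX_EXTENT_LEN) rather than QEMU's real bitmap_to_extents()
      code:

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /* A dirty (or zero) area larger than one extent is emitted in several
       * pieces, and the state must only be toggled once the WHOLE area has
       * been consumed.  Toggling after every emitted extent (the bug) yields
       * a zero-length extent of the opposite state in the middle of a large
       * area, which the NBD protocol forbids. */
      #define MAX_EXTENT_LEN (64ull * 1024 * 1024)   /* small cap for the demo */

      static void emit_extent(uint64_t off, uint64_t len, bool dirty)
      {
          printf("extent off=%llu len=%llu %s\n", (unsigned long long)off,
                 (unsigned long long)len, dirty ? "dirty" : "clean");
      }

      /* Pretend the bitmap flips state every 100 MiB, just to drive the loop. */
      static uint64_t next_state_change(uint64_t pos, uint64_t end)
      {
          uint64_t chunk = 100ull * 1024 * 1024;
          uint64_t next = (pos / chunk + 1) * chunk;
          return next < end ? next : end;
      }

      static void emit_status(uint64_t begin, uint64_t end, bool dirty)
      {
          while (begin < end) {
              uint64_t area_end = next_state_change(begin, end);
              uint64_t len = area_end - begin;

              if (len > MAX_EXTENT_LEN) {
                  len = MAX_EXTENT_LEN;        /* area split across extents */
              }
              emit_extent(begin, len, dirty);
              begin += len;

              if (begin == area_end) {
                  dirty = !dirty;              /* flip only when the area is done */
              }
          }
      }

      int main(void)
      {
          emit_status(0, 300ull * 1024 * 1024, true);
          return 0;
      }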
      
      The bug was introduced in 3d068aff "nbd/server: implement dirty bitmap
      export", along with the whole function, and is present in the v3.0.0
      release.
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20180914165116.23182-1-vsementsov@virtuozzo.com>
      CC: qemu-stable@nongnu.org
      Reviewed-by: Eric Blake <eblake@redhat.com>
      [eblake: improved commit message]
      Signed-off-by: Eric Blake <eblake@redhat.com>
  8. 08 Jul 2018, 1 commit
  9. 03 Jul 2018, 1 commit
    • nbd/server: Fix dirty bitmap logic regression · 7606c99a
      Eric Blake committed
      In my hurry to fix a build failure, I introduced a logic bug.
      The assertion conditional is backwards, meaning that qemu will
      now abort instead of reporting dirty bitmap status.
      
      The bug can only be tickled by an NBD client using an exported
      dirty bitmap (which is still an experimental QMP command), so
      it's not the end of the world for supported usage (and neither
      'make check' nor qemu-iotests fails); but it also shows that we
      really want qemu-io support for reading dirty bitmaps if only
      so that I can add iotests coverage to prevent future
      brown-bag-of-shame events like this one.
      
      Fixes: 45eb6fb6
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20180622153509.375130-1-eblake@redhat.com>
  10. 22 Jun 2018, 1 commit
  11. 21 Jun 2018, 6 commits
  12. 02 Apr 2018, 1 commit
  13. 14 Mar 2018, 9 commits
  14. 06 Mar 2018, 1 commit
  15. 02 Mar 2018, 1 commit
  16. 26 Jan 2018, 1 commit
  17. 18 Jan 2018, 6 commits
  18. 10 Jan 2018, 1 commit
  19. 08 Jan 2018, 2 commits
    • nbd/server: Optimize final chunk of sparse read · e2de3256
      Eric Blake committed
      If we are careful to handle 0-length read requests correctly,
      we can optimize our sparse read to send the NBD_REPLY_FLAG_DONE
      bit on our last OFFSET_DATA or OFFSET_HOLE chunk rather than
      needing a separate chunk.
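
      A hedged sketch of the optimization follows; struct chunk, send_chunk(),
      and reply_sparse_read() are invented for illustration, and only the
      NBD_REPLY_* values are taken from the protocol specification.

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Reply constants, values per the NBD protocol specification. */
      #define NBD_REPLY_FLAG_DONE        (1 << 0)
      #define NBD_REPLY_TYPE_NONE        0
      #define NBD_REPLY_TYPE_OFFSET_DATA 1
      #define NBD_REPLY_TYPE_OFFSET_HOLE 2

      /* Hypothetical names, not QEMU's API; only the calling pattern matters. */
      struct chunk { uint64_t offset; uint32_t length; bool hole; };

      static void send_chunk(uint16_t flags, uint16_t type,
                             uint64_t offset, uint32_t length)
      {
          printf("chunk type=%u flags=%u off=%llu len=%u\n", (unsigned)type,
                 (unsigned)flags, (unsigned long long)offset, (unsigned)length);
      }

      /* The optimization: set NBD_REPLY_FLAG_DONE directly on the last
       * OFFSET_DATA/OFFSET_HOLE chunk instead of appending a separate
       * NBD_REPLY_TYPE_NONE chunk -- but a zero-length read produces no
       * payload chunk at all, so it still needs a lone final NONE chunk. */
      static void reply_sparse_read(const struct chunk *chunks, size_t n)
      {
          if (n == 0) {
              send_chunk(NBD_REPLY_FLAG_DONE, NBD_REPLY_TYPE_NONE, 0, 0);
              return;
          }
          for (size_t i = 0; i < n; i++) {
              uint16_t flags = (i == n - 1) ? NBD_REPLY_FLAG_DONE : 0;
              uint16_t type = chunks[i].hole ? NBD_REPLY_TYPE_OFFSET_HOLE
                                             : NBD_REPLY_TYPE_OFFSET_DATA;
              send_chunk(flags, type, chunks[i].offset, chunks[i].length);
          }
      }

      int main(void)
      {
          struct chunk c[] = {
              { 0,     4096, false },   /* data */
              { 4096, 65536, true  },   /* hole gets the DONE flag */
          };
          reply_sparse_read(c, 2);
          return 0;
      }
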
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20171107030912.23930-3-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • nbd/server: Implement sparse reads atop structured reply · 418638d3
      Eric Blake committed
      The reason that NBD added structured reply in the first place was
      to allow for efficient reads of sparse files, by allowing the
      reply to include chunks to quickly communicate holes to the client
      without sending lots of zeroes over the wire.  Time to implement
      this in the server; our client can already read such data.
      
      We can only skip holes insofar as the block layer can query them;
      and only if the client is okay with a fragmented request (if a
      client requests NBD_CMD_FLAG_DF and the entire read is a hole, we
      could technically return a single NBD_REPLY_TYPE_OFFSET_HOLE, but
      that's a fringe case not worth catering to here).  Sadly, the
      control flow is a bit wonkier than I would have preferred, but
      it was minimally invasive to have a split in the action between
      a fragmented read (handled directly where we recognize
      NBD_CMD_READ with the right conditions, and sending multiple
      chunks) vs. a single read (handled at the end of nbd_trip, for
      both simple and structured replies, when we know there is only
      one thing being read).  Likewise, I didn't make any effort to
      optimize the final chunk of a fragmented read to set the
      NBD_REPLY_FLAG_DONE, but unconditionally send that as a separate
      NBD_REPLY_TYPE_NONE.
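
      Roughly, the fragmented-read control flow looks like the sketch below,
      assuming hypothetical helpers (is_zero_region() and the send_*_chunk()
      functions) in place of QEMU's block-layer and server internals.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define NBD_REPLY_FLAG_DONE (1 << 0)

      /* Hypothetical stand-ins for the chunk senders and the block layer's
       * zero-detection query; they only print here instead of hitting the wire. */
      static void send_hole_chunk(uint16_t flags, uint64_t off, uint64_t len)
      {
          printf("OFFSET_HOLE off=%llu len=%llu flags=%u\n",
                 (unsigned long long)off, (unsigned long long)len, (unsigned)flags);
      }
      static void send_data_chunk(uint16_t flags, uint64_t off, uint64_t len)
      {
          printf("OFFSET_DATA off=%llu len=%llu flags=%u\n",
                 (unsigned long long)off, (unsigned long long)len, (unsigned)flags);
      }
      static void send_none_chunk(uint16_t flags)
      {
          printf("NONE        flags=%u\n", (unsigned)flags);
      }
      /* Pretend every other megabyte is a hole, just to drive the loop. */
      static bool is_zero_region(uint64_t off, uint64_t max, uint64_t *piece)
      {
          uint64_t mb = 1024 * 1024;
          uint64_t run = mb - (off % mb);
          *piece = run < max ? run : max;
          return (off / mb) % 2 == 1;
      }

      /* Walk the requested range; holes become OFFSET_HOLE chunks (no payload),
       * data becomes OFFSET_DATA chunks.  A separate final NBD_REPLY_TYPE_NONE
       * chunk carries NBD_REPLY_FLAG_DONE -- the "Optimize final chunk" entry
       * above folds that flag into the last data/hole chunk instead. */
      static void send_sparse_read(uint64_t offset, uint64_t length)
      {
          uint64_t end = offset + length;

          while (offset < end) {
              uint64_t piece;
              bool hole = is_zero_region(offset, end - offset, &piece);

              if (hole) {
                  send_hole_chunk(0, offset, piece);
              } else {
                  send_data_chunk(0, offset, piece);
              }
              offset += piece;
          }
          send_none_chunk(NBD_REPLY_FLAG_DONE);
      }

      int main(void)
      {
          send_sparse_read(512 * 1024, 2 * 1024 * 1024);  /* spans data and holes */
          return 0;
      }
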
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20171107030912.23930-2-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>