1. 03 Nov 2014, 3 commits
  2. 23 Oct 2014, 1 commit
  3. 22 Sep 2014, 1 commit
  4. 20 Aug 2014, 1 commit
    • block: Use g_new() & friends where that makes obvious sense · 5839e53b
      Authored by Markus Armbruster
      g_new(T, n) is neater than g_malloc(sizeof(T) * n).  It's also safer,
      for two reasons.  One, it catches multiplication overflowing size_t.
      Two, it returns T * rather than void *, which lets the compiler catch
      more type errors.
      
      Patch created with Coccinelle, with two manual changes on top:
      
      * Add const to bdrv_iterate_format() to keep the types straight
      
      * Convert the allocation in bdrv_drop_intermediate(), which Coccinelle
        inexplicably misses
      
      Coccinelle semantic patch:
      
          @@
          type T;
          @@
          -g_malloc(sizeof(T))
          +g_new(T, 1)
          @@
          type T;
          @@
          -g_try_malloc(sizeof(T))
          +g_try_new(T, 1)
          @@
          type T;
          @@
          -g_malloc0(sizeof(T))
          +g_new0(T, 1)
          @@
          type T;
          @@
          -g_try_malloc0(sizeof(T))
          +g_try_new0(T, 1)
          @@
          type T;
          expression n;
          @@
          -g_malloc(sizeof(T) * (n))
          +g_new(T, n)
          @@
          type T;
          expression n;
          @@
          -g_try_malloc(sizeof(T) * (n))
          +g_try_new(T, n)
          @@
          type T;
          expression n;
          @@
          -g_malloc0(sizeof(T) * (n))
          +g_new0(T, n)
          @@
          type T;
          expression n;
          @@
          -g_try_malloc0(sizeof(T) * (n))
          +g_try_new0(T, n)
          @@
          type T;
          expression p, n;
          @@
          -g_realloc(p, sizeof(T) * (n))
          +g_renew(T, p, n)
          @@
          type T;
          expression p, n;
          @@
          -g_try_realloc(p, sizeof(T) * (n))
          +g_try_renew(T, p, n)
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: Jeff Cody <jcody@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  5. 15 Aug 2014, 2 commits
    • qcow2: Catch !*host_offset for data allocation · ff52aab2
      Authored by Max Reitz
      qcow2_alloc_cluster_offset() uses host_offset == 0 as "no preferred
      offset" for the (data) cluster range to be allocated. However, this
      offset is actually valid and may be allocated on images with a corrupted
      refcount table or first refcount block.
      
      In this case, the corruption prevention should normally catch that
      write anyway (because it would overwrite the image header). But since 0
      is a special value here, the function assumes that nothing has been
      allocated at all, which makes an assertion fail.
      
      Because this condition is not qemu's fault but rather that of a broken
      image, it shouldn't throw an assertion but rather mark the image corrupt
      and show an appropriate message, which this patch does by calling the
      corruption check earlier than it would be called normally (before the
      assertion).
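      A schematic of the fix in plain C (names and return values are
      illustrative, not qemu's actual code): offset 0 doubles as "no
      preferred offset", so a real allocation at 0, which can only come
      from a corrupted refcount structure, is reported as corruption
      rather than asserted away:

```c
#include <stdbool.h>
#include <stdint.h>

enum { ALLOC_OK = 0, ALLOC_ECORRUPT = -1 };

/* cluster_offset == 0 means the refcount code handed out the header
 * cluster for data, i.e. the image is broken. Run the corruption check
 * before the old "nothing allocated" assertion would have fired. */
static int check_data_alloc(uint64_t cluster_offset, bool *corrupt)
{
    if (cluster_offset == 0) {
        *corrupt = true;       /* mark image corrupt, report a message */
        return ALLOC_ECORRUPT; /* instead of assert(cluster_offset != 0) */
    }
    return ALLOC_OK;
}
```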
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • qcow2: Handle failure for potentially large allocations · de82815d
      Authored by Kevin Wolf
      Some code in the block layer makes potentially huge allocations. Failure
      is not completely unexpected there, so avoid aborting qemu and handle
      out-of-memory situations gracefully.
      
      This patch addresses the allocations in the qcow2 block driver.
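      The pattern being applied, sketched in plain C with malloc standing
      in for the failure-returning g_try_malloc (the error value is
      illustrative; qemu would propagate -ENOMEM):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define SKETCH_ENOMEM 12 /* stand-in for errno's ENOMEM */

/* A potentially huge table allocation: return an error the caller can
 * propagate instead of aborting the whole process on failure. */
static int alloc_table(uint64_t **table, size_t n_entries)
{
    *table = malloc(n_entries * sizeof(uint64_t)); /* may fail */
    if (*table == NULL) {
        return -SKETCH_ENOMEM;
    }
    memset(*table, 0, n_entries * sizeof(uint64_t));
    return 0;
}
```

      Small, fixed-size allocations keep using the aborting variants;
      only the ones whose size scales with guest-controlled input gain
      the error path.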
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
  6. 28 May 2014, 1 commit
  7. 30 Apr 2014, 1 commit
    • qcow2: Check min_size in qcow2_grow_l1_table() · b93f9950
      Authored by Max Reitz
      First, new_l1_size is an int64_t, whereas min_size is a uint64_t.
      Therefore, during the loop which adjusts new_l1_size until it equals or
      exceeds min_size, new_l1_size might overflow and become negative. The
      comparison in the loop condition however will take it as an unsigned
      value (because min_size is unsigned) and therefore recognize it as
      exceeding min_size. Therefore, the loop is left with a negative
      new_l1_size, which is not correct. This could be fixed by making
      new_l1_size uint64_t.
      
      On the other hand, by doing this, the while loop may take forever. If
      min_size is e.g. UINT64_MAX, new_l1_size will probably need multiple
      overflows to reach the exact same value (if it reaches it at all).
      Then, right after the loop, new_l1_size will be recognized as being
      too big anyway.
      
      Both problems require a ridiculously high min_size value, which is very
      unlikely to occur; but both problems are also simply avoided by checking
      whether min_size is sane before calculating new_l1_size (which should
      still be checked separately, though).
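      The shape of the fix, as a toy version of the sizing loop (the
      growth rule and the exact bound are illustrative; only the idea of
      validating min_size before the loop is taken from the text above):

```c
#include <stdint.h>

/* Toy model of the qcow2_grow_l1_table() sizing logic. Rejecting an
 * insane min_size up front keeps the int64_t new_l1_size from
 * overflowing negative inside the loop (and keeps the loop from
 * spinning through multiple wraparounds for min_size near UINT64_MAX). */
static int64_t compute_new_l1_size(int64_t new_l1_size, uint64_t min_size)
{
    if (min_size > INT64_MAX / sizeof(uint64_t)) {
        return -1; /* too big for any L1 table; fail before the loop */
    }
    if (new_l1_size < 1) {
        new_l1_size = 1;
    }
    while ((uint64_t)new_l1_size < min_size) {
        new_l1_size = (new_l1_size * 3 + 1) / 2; /* grow by about 1.5x */
    }
    return new_l1_size;
}
```

      After the up-front check, every value new_l1_size takes inside the
      loop stays below INT64_MAX, so the int64_t type is safe to keep.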
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  8. 29 Apr 2014, 1 commit
    • qcow2: Fix discard · c883db0d
      Authored by Max Reitz
      discard_single_l2() should not implement its own version of
      qcow2_get_cluster_type(), but rather rely on this already existing
      function. By doing so, it will also work for compressed clusters
      (which it did not previously).
      
      Also, rename "old_offset" to "old_l2_entry", as both are quite different
      (and the value is indeed of the latter kind).
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  9. 04 Apr 2014, 1 commit
  10. 01 Apr 2014, 2 commits
  11. 13 Mar 2014, 1 commit
  12. 22 Feb 2014, 1 commit
  13. 09 Feb 2014, 1 commit
  14. 06 Dec 2013, 1 commit
  15. 28 Nov 2013, 1 commit
  16. 14 Nov 2013, 1 commit
    • qcow2: fix possible corruption when reading multiple clusters · 78a52ad5
      Authored by Peter Lieven
      If multiple sectors spanning multiple clusters are read, the
      function count_contiguous_clusters() should ensure that the
      cluster type does not change between the clusters.
      
      In particular, the for loop should break when one or more normal
      clusters are followed by a compressed cluster.
      
      Unfortunately, the wrong macro was used in the mask used to
      compare the flags.
      
      This was discovered while debugging a data corruption
      issue when converting a compressed qcow2 image to raw.
      qemu-img reads 2MB chunks which span multiple clusters.
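      The bug class, in reduced form (flag bits and names are invented
      for illustration, and the real function also checks that host
      offsets are contiguous): a mask that omits the compressed bit lets
      the count run past a compressed cluster.

```c
#include <stdint.h>

/* Toy L2-entry flags (values illustrative, not qcow2's on-disk bits). */
#define L2E_COMPRESSED (1ULL << 62)
#define L2E_COPIED     (1ULL << 63)

/* Counting stops as soon as an entry's masked flags differ from the
 * first entry's. With a mask that accidentally drops the compressed
 * bit, a compressed cluster after normal ones is wrongly treated as
 * contiguous, corrupting the read. */
static int count_contiguous(const uint64_t *l2, int n, uint64_t mask)
{
    uint64_t expect = l2[0] & mask;
    int i;
    for (i = 0; i < n; i++) {
        if ((l2[i] & mask) != expect) {
            break;
        }
    }
    return i;
}
```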
      
      CC: qemu-stable@nongnu.org
      Signed-off-by: Peter Lieven <pl@kamp.de>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  17. 06 Nov 2013, 1 commit
  18. 11 Oct 2013, 1 commit
  19. 07 Oct 2013, 1 commit
  20. 02 Oct 2013, 1 commit
  21. 27 Sep 2013, 7 commits
  22. 26 Sep 2013, 1 commit
    • qcow2: Assert against currently impossible overflow · c01dbccb
      Authored by Max Reitz
      If qcow2_alloc_cluster_link_l2() is called with a QCowL2Meta describing
      a request crossing L2 boundaries, a buffer overflow will occur. This is
      impossible right now, since such requests are never generated (every
      request is shortened to L2 boundaries beforehand) and are probably also
      completely unintended (considering the name "QCowL2Meta"); however, it
      is still worth an assertion.
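      What such a guard looks like in reduced form (names illustrative):
      the request must fit in the slice of the L2 table starting at its
      index.

```c
#include <assert.h>
#include <stdint.h>

/* Guard against a QCowL2Meta-style request that would cross the end of
 * one L2 table: writing l2_table[l2_index .. l2_index + nb_clusters - 1]
 * must stay within the table's l2_size entries. */
static void link_l2_entries(uint64_t *l2_table, unsigned l2_size,
                            unsigned l2_index, unsigned nb_clusters)
{
    assert(l2_index + nb_clusters <= l2_size);
    for (unsigned i = 0; i < nb_clusters; i++) {
        l2_table[l2_index + i] |= 1; /* stand-in for the real update */
    }
}
```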
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  23. 12 Sep 2013, 2 commits
  24. 30 Aug 2013, 2 commits
  25. 24 Jun 2013, 2 commits
  26. 14 May 2013, 1 commit
    • qcow2: Catch some L1 table index overflows · 2cf7cfa1
      Authored by Kevin Wolf
      This catches the situation that is described in the bug report at
      https://bugs.launchpad.net/qemu/+bug/865518 and goes like this:
      
          $ qemu-img create -f qcow2 huge.qcow2 $((1024*1024))T
          Formatting 'huge.qcow2', fmt=qcow2 size=1152921504606846976 encryption=off cluster_size=65536 lazy_refcounts=off
          $ qemu-io /tmp/huge.qcow2 -c "write $((1024*1024*1024*1024*1024*1024 - 1024)) 512"
          Segmentation fault
      
      With this patch applied the segfault will be avoided, however the case
      will still fail, though gracefully:
      
          $ qemu-img create -f qcow2 /tmp/huge.qcow2 $((1024*1024))T
          Formatting 'huge.qcow2', fmt=qcow2 size=1152921504606846976 encryption=off cluster_size=65536 lazy_refcounts=off
          qemu-img: The image size is too large for file format 'qcow2'
      
      Note that even long before these overflow checks kick in, you get
      insanely high memory usage (up to INT_MAX * sizeof(uint64_t) = 16 GB for
      the L1 table), so with somewhat smaller image sizes you'll probably see
      qemu aborting for a failed g_malloc().
      
      If you need huge image sizes, you should increase the cluster size to
      the maximum of 2 MB in order to get higher limits.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  27. 28 Mar 2013, 1 commit
    • qcow2: Gather clusters in a looping loop · ecdd5333
      Authored by Kevin Wolf
      Instead of just checking once, in exactly this order, whether there are
      dependencies, non-COW clusters and new allocations, this starts looping
      around these checks. This way we can, for example, gather non-COW
      clusters after new allocations as long as the host cluster offsets stay
      contiguous.
      
      Once handle_dependencies() is extended so that COW areas of in-flight
      allocations can be overwritten, this allows us to continue gathering
      other clusters (we wouldn't be able to do that without this change,
      because we would have missed a possible second dependency in one of the
      next clusters).
      
      This means that in the typical sequential write case, we can combine
      the COW overwrite of one cluster with the allocation of the next
      cluster as soon as something like Delayed COW actually gets
      implemented. It is only by avoiding splitting requests this way that
      Delayed COW actually starts improving performance noticeably.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>