1. 28 Mar 2013, 18 commits
  2. 15 Mar 2013, 2 commits
  3. 19 Dec 2012, 1 commit
  4. 13 Dec 2012, 6 commits
  5. 07 Aug 2012, 1 commit
    • qcow2: implement lazy refcounts · bfe8043e
      Stefan Hajnoczi committed
      Lazy refcounts is a performance optimization for qcow2 that postpones
      refcount metadata updates and instead marks the image dirty.  In the
      case of a crash or power failure the image will be left in a dirty state
      and repaired next time it is opened.
      
      Reducing metadata I/O is important for cache=writethrough and
      cache=directsync because these modes guarantee that data is on disk
      after each write (hence we cannot take advantage of caching updates in
      RAM).  Refcount metadata is not needed for guest->file block address
      translation and therefore does not need to be on-disk at the time of
      write completion - this is the motivation behind the lazy refcount
      optimization.
      
      The lazy refcount optimization must be enabled at image creation time:
      
        qemu-img create -f qcow2 -o compat=1.1,lazy_refcounts=on a.qcow2 10G
        qemu-system-x86_64 -drive if=virtio,file=a.qcow2,cache=writethrough
      
      Update qemu-iotests 031 and 036 since the extension header size changes
      when we add feature bit table entries.
      Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
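      The dirty state and the lazy refcounts feature are both recorded as
      feature bits in the qcow2 version 3 header: incompatible feature bit 0
      is the dirty flag, compatible feature bit 0 marks lazy refcounts. A
      minimal Python sketch (illustrative only; field offsets taken from the
      qcow2 on-disk format specification, not from this commit) that reads
      both bits from an image:

        import struct
        import sys

        # qcow2 v3 header fields used here (big-endian, offsets per the
        # on-disk format spec): magic at 0, version at 4,
        # incompatible_features at 72, compatible_features at 80.
        def inspect(path):
            with open(path, "rb") as f:
                header = f.read(88)
            magic, version = struct.unpack(">4sI", header[:8])
            if magic != b"QFI\xfb":
                raise ValueError("not a qcow2 image")
            if version < 3:
                print("qcow2 v2: no feature bits, lazy refcounts unavailable")
                return
            incompatible, compatible = struct.unpack(">QQ", header[72:88])
            print("dirty bit set:       ", bool(incompatible & 1))
            print("lazy refcounts flag: ", bool(compatible & 1))

        if __name__ == "__main__":
            inspect(sys.argv[1])

      Run against the a.qcow2 image created above, this would report the
      lazy refcounts flag as set, and the dirty bit as set only if the image
      was not cleanly closed.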
  6. 15 Jun 2012, 4 commits
  7. 26 May 2012, 1 commit
  8. 08 May 2012, 1 commit
    • qcow2: Limit COW to where it's needed · 54e68143
      Kevin Wolf committed
      This fixes a regression introduced in commit 250196f1. The bug leads to
      data corruption, found during an Autotest run with a Fedora 8 guest.
      
      Consider a write request whose first part is covered by an already
      allocated cluster, but additional clusters need to be newly allocated.
      When counting the number of clusters to allocate, the qcow2 code would
      decide to do COW for all remaining clusters of the write request, even
      if some of them are already allocated.
      
      If during this COW operation another write request is issued that touches
      the same cluster, it will still refer to the old cluster. When the COW
      completes, the first request will update the L2 table and the second
      write request will be lost. Note that the requests need not overlap; it's
      enough for them to touch the same cluster.
      
      This patch ensures that only clusters that really require COW are
      considered for allocation. In this case any other request writing to the
      same cluster will be an allocating write and gets serialised.
      Reported-by: Marcelo Tosatti <mtosatti@redhat.com>
      Tested-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
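      In other words, when the allocating write counts how many clusters it
      will claim, it must stop at the first cluster that already has a
      mapping; that cluster is handled as a plain in-place write instead of
      being pulled into the COW. A simplified Python model of that counting
      rule (hypothetical names, not the actual QEMU code):

        # Model of the counting rule the patch enforces. l2_entries holds
        # the L2 table entries for the clusters still left in the request:
        # 0 means "unallocated", any other value is the host offset of an
        # already-allocated cluster.
        def count_clusters_needing_cow(l2_entries):
            count = 0
            for entry in l2_entries:
                if entry != 0:
                    # Already allocated: stop here so this cluster is not
                    # dragged into the copy-on-write operation.
                    break
                count += 1
            return count

        # Two unallocated clusters followed by an allocated one: only the
        # first two are claimed by the allocating write; a concurrent write
        # to the third cluster is no longer lost when the COW completes.
        assert count_clusters_needing_cow([0, 0, 0xA0000]) == 2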
  9. 03 May 2012, 2 commits
  10. 20 Apr 2012, 4 commits