1. 08 March 2019, 3 commits
  2. 12 February 2019, 1 commit
  3. 01 February 2019, 1 commit
  4. 11 January 2019, 1 commit
    • qemu/queue.h: leave head structs anonymous unless necessary · b58deb34
      Authored by Paolo Bonzini
      Most list head structs need not be given a name.  In most cases the
      name is given just in case one is going to use QTAILQ_LAST, QTAILQ_PREV
      or reverse iteration, but this does not apply to lists of other kinds,
      and even for QTAILQ in practice this is only rarely needed.  In addition,
      we will soon reimplement those macros completely so that they do not
      need a name for the head struct.  So clean up everything, not giving a
      name except in the rare case where it is necessary.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      b58deb34
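      A minimal sketch of the distinction, assuming the pre-reimplementation
      qemu/queue.h macros and a hypothetical Foo element type:

          #include "qemu/queue.h"

          typedef struct Foo Foo;
          struct Foo {
              int value;
              QTAILQ_ENTRY(Foo) next;               /* per-element linkage */
          };

          /* Named head struct: the tag is only needed when it is passed
           * back to QTAILQ_LAST()/QTAILQ_PREV() for reverse traversal. */
          QTAILQ_HEAD(FooHead, Foo) named_list =
              QTAILQ_HEAD_INITIALIZER(named_list);

          /* Anonymous head struct: enough for insertion and forward
           * iteration, which covers most users. */
          QTAILQ_HEAD(, Foo) anon_list = QTAILQ_HEAD_INITIALIZER(anon_list);

          static Foo *last_foo(void)
          {
              /* QTAILQ_LAST needs the struct tag, hence the named head. */
              return QTAILQ_LAST(&named_list, FooHead);
          }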
  5. 14 December 2018, 1 commit
  6. 19 November 2018, 1 commit
    • qcow2: Don't allow overflow during cluster allocation · 77d6a215
      Authored by Eric Blake
      Our code was already checking that we did not attempt to
      allocate more clusters than what would fit in an INT64 (the
      physical maximum if we can access a full off_t's worth of
      data).  But this does not catch smaller limits enforced by
      various spots in the qcow2 image description: L1 and normal
      clusters of L2 are documented as having bits 63-56 reserved
      for other purposes, capping our maximum offset at 64PB (bit
      55 is the maximum bit set).  And for compressed images with
      2M clusters, the cap drops the maximum offset to bit 48, or
      a maximum offset of 512TB.  If we overflow that offset, we
      would write compressed data into one place, but try to
      decompress from another, which won't work.
      
      It's actually possible to prove that overflow can cause image
      corruption without this patch; I'll add the iotests separately
      in the next commit.
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Alberto Garcia <berto@igalia.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      77d6a215
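      A hedged sketch of the two caps described above; the helper name and
      structure are illustrative, not the exact QEMU code:

          #include <stdbool.h>
          #include <stdint.h>

          static bool host_offset_fits(uint64_t offset, unsigned cluster_bits,
                                       bool compressed)
          {
              /* Standard L1/L2 entries reserve bits 63-56, so bit 55 is the
               * highest usable offset bit: offsets must stay below 2^56 (64 PB). */
              unsigned offset_bits = 56;

              if (compressed) {
                  /* Compressed descriptors keep the offset in the low
                   * 62 - (cluster_bits - 8) bits; with 2 MB clusters
                   * (cluster_bits = 21) that is 49 bits, i.e. 512 TB. */
                  offset_bits = 62 - (cluster_bits - 8);
              }
              return offset < (UINT64_C(1) << offset_bits);
          }

      Rejecting allocations past these caps is what prevents the wraparound
      where compressed data is written at one offset but read back from another.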
  7. 30 October 2018, 1 commit
  8. 01 October 2018, 4 commits
  9. 10 July 2018, 1 commit
  10. 05 July 2018, 1 commit
  11. 29 June 2018, 1 commit
  12. 15 May 2018, 1 commit
    • qcow2: Give the refcount cache the minimum possible size by default · 52253998
      Authored by Alberto Garcia
      The L2 and refcount caches have default sizes that can be overridden
      using the l2-cache-size and refcount-cache-size parameters (an
      additional parameter named cache-size sets the combined size of both
      caches).
      
      Unless forced by one of the aforementioned parameters, QEMU will set
      the unspecified sizes so that the L2 cache is 4 times larger than the
      refcount cache.
      
      This is based on the premise that the refcount metadata needs to be
      only a fourth of the L2 metadata to cover the same amount of disk
      space. This is incorrect for two reasons:
      
       a) The amount of disk covered by an L2 table depends solely on the
          cluster size, but in the case of a refcount block it depends on
          the cluster size *and* the width of each refcount entry.
          The 4/1 ratio is only valid with 16-bit entries (the default).
      
       b) When we talk about disk space and L2 tables we are talking about
          guest space (L2 tables map guest clusters to host clusters),
          whereas refcount blocks are used for host clusters (including
          L1/L2 tables and the refcount blocks themselves). On a fully
          populated (and uncompressed) qcow2 file, image size > virtual size
          so there are more refcount entries than L2 entries.
      
      Problem (a) could be fixed by adjusting the algorithm to take the
      refcount entry width into account. Problem (b) could be fixed by
      slightly increasing the refcount cache size to account for the
      clusters used for qcow2 metadata.
      
      However this patch takes a completely different approach and instead
      of keeping a ratio between both cache sizes it assigns as much as
      possible to the L2 cache and the remainder to the refcount cache.
      
      The reason is that L2 tables are used for every single I/O request
      from the guest and the effect of increasing the cache is significant
      and clearly measurable. Refcount blocks are however only used for
      cluster allocation and internal snapshots and in practice are accessed
      sequentially in most cases, so the effect of increasing the cache is
      negligible (even when doing random writes from the guest).
      
      So, make the refcount cache as small as possible unless the user
      explicitly asks for a larger one.
      Signed-off-by: Alberto Garcia <berto@igalia.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-id: 9695182c2eb11b77cb319689a1ebaa4e7c9d6591.1523968389.git.berto@igalia.com
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      52253998
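      A hedged sketch of the resulting split, assuming a combined cache-size
      budget; the function name and the minimum constant are illustrative,
      not the exact qcow2 code:

          #include <stdint.h>

          /* Assumption for illustration: a minimum of 4 refcount blocks of
           * 64 KB each; the real minimum is defined by the qcow2 driver. */
          #define MIN_REFCOUNT_CACHE_BYTES (4 * 65536)

          static void split_cache_budget(uint64_t combined_bytes,
                                         uint64_t *l2_bytes,
                                         uint64_t *refcount_bytes)
          {
              if (combined_bytes < MIN_REFCOUNT_CACHE_BYTES) {
                  combined_bytes = MIN_REFCOUNT_CACHE_BYTES;
              }
              /* L2 tables are touched on every guest I/O, so they receive
               * everything beyond the refcount cache's minimum. */
              *refcount_bytes = MIN_REFCOUNT_CACHE_BYTES;
              *l2_bytes = combined_bytes - *refcount_bytes;
          }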
  13. 16 April 2018, 1 commit
  14. 27 March 2018, 1 commit
  15. 09 March 2018, 2 commits
  16. 03 March 2018, 1 commit
  17. 14 February 2018, 1 commit
    • qcow2: Allow configuring the L2 slice size · 1221fe6f
      Authored by Alberto Garcia
      Now that the code is ready to handle L2 slices we can finally add an
      option to allow configuring their size.
      
      An L2 slice is the portion of an L2 table that is read by the qcow2
      cache. Until now the cache was always reading full L2 tables, and
      since the L2 table size is equal to the cluster size this was not very
      efficient with large clusters. Here's a more detailed explanation of
      why it makes sense to have smaller cache entries in order to load L2
      data:
      
         https://lists.gnu.org/archive/html/qemu-block/2017-09/msg00635.html
      
      This patch introduces a new command-line option to the qcow2 driver
      named l2-cache-entry-size (cf. l2-cache-size). The cache entry size
      has the same restrictions as the cluster size: it must be a power of
      two and it has the same range of allowed values, with the additional
      requirement that it must not be larger than the cluster size.
      
      The L2 cache entry size (L2 slice size) remains equal to the cluster
      size for now by default, so this feature must be explicitly enabled.
      Although my tests show that 4KB slices consistently improve
      performance and give the best results, let's wait and run more tests
      with different cluster sizes before deciding on an optimal default.
      
      Now that the cache entry size is not necessarily equal to the cluster
      size we need to reflect that in the MIN_L2_CACHE_SIZE documentation.
      That minimum value is a requirement of the COW algorithm: we need to
      read two L2 slices (and not two L2 tables) in order to do COW, see
      l2_allocate() for the actual code.
      Signed-off-by: Alberto Garcia <berto@igalia.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-id: c73e5611ff4a9ec5d20de68a6c289553a13d2354.1517840877.git.berto@igalia.com
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      1221fe6f
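      A minimal sketch of the entry-size restrictions described above; the
      helper name is hypothetical:

          #include <stdbool.h>
          #include <stdint.h>

          static bool l2_entry_size_valid(uint64_t entry_size,
                                          uint64_t cluster_size)
          {
              return entry_size >= 512 &&                   /* same lower bound as cluster sizes */
                     entry_size <= cluster_size &&          /* no larger than the cluster size */
                     (entry_size & (entry_size - 1)) == 0;  /* power of two */
          }

      Following the same -drive syntax as l2-cache-size, an invocation might
      look like: -drive file=disk.qcow2,format=qcow2,l2-cache-entry-size=4096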
  18. 13 February 2018, 10 commits
  19. 22 December 2017, 1 commit
  20. 18 November 2017, 1 commit
  21. 06 October 2017, 1 commit
  22. 26 September 2017, 2 commits
  23. 11 July 2017, 2 commits