1. 16 Oct 2019, 33 commits
  2. 15 Oct 2019, 6 commits
  3. 14 Oct 2019, 1 commit
      iotests: Test large write request to qcow2 file · a1406a92
      Authored by Max Reitz
      Without HEAD^, the following happens when you attempt a large write
      request to a qcow2 file such that the number of bytes covered by all
      clusters involved in a single allocation will exceed INT_MAX:
      
      (A) handle_alloc_space() decides to fill the whole area with zeroes and
          fails because bdrv_co_pwrite_zeroes() fails (the request is too
          large).
      
      (B) If handle_alloc_space() does not do anything, but merge_cow()
          decides that the requests can be merged, it will create an IOV
          that is too long to be written later.
      
      (C) Otherwise, all parts will be written separately, so those requests
          will work.
      
      In either case (B) or (C), though, qcow2_alloc_cluster_link_l2() will
      overflow: we use an int (i) to iterate over nb_clusters and then
      calculate the L2 entry based on "i << s->cluster_bits", which
      overflows if the range covers more than INT_MAX bytes.  This then
      leads to image corruption because the L2 entry will be wrong (it will
      be recognized as a compressed cluster).
      
      Even if that were not the case, the .cow_end area would be empty:
      handle_alloc() caps both avail_bytes and nb_bytes at INT_MAX, so
      their difference, which is the .cow_end size, will be 0.
      
      So this test checks that on such large requests, the image will not be
      corrupted.  Unfortunately, we cannot check whether COW will be handled
      correctly, because that data is discarded when it is written to null-co
      (but we have to use null-co, because writing 2 GB of data in a test is
      not quite reasonable).
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>