1. 28 January 2020 (12 commits)
    • iscsi: Cap block count from GET LBA STATUS (CVE-2020-1711) · 693fd2ac
      Committed by Felipe Franciosi
      When querying an iSCSI server for the provisioning status of blocks (via
      GET LBA STATUS), Qemu only validates that the LBA of the response's
      descriptor zero matches the one requested. Given that the SCSI spec
      allows servers to respond with the status of blocks beyond the end of
      the LUN, Qemu may have its heap corrupted by clearing/setting too many
      bits at the end of its allocmap for the LUN.
      
      A malicious guest in control of the iSCSI server could carefully program
      Qemu's heap (by selectively setting the bitmap) and then smash it.
      
      This patch limits the number of bits that iscsi_co_block_status() will
      try to update in the allocmap so that it cannot overflow the bitmap
      (the clamping idea is sketched at the end of this entry).
      
      Fixes: CVE-2020-1711
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Felipe Franciosi <felipe@nutanix.com>
      Signed-off-by: Peter Turschmid <peter.turschm@nutanix.com>
      Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
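
      A minimal sketch of the clamping idea, written in Python for brevity;
      the real fix lives in QEMU's C iSCSI driver, and every name below is
      hypothetical rather than the actual code:

        def clamp_lba_status(lba, num_blocks, lun_num_blocks):
            # Ignore descriptors that start beyond the end of the LUN.
            if lba >= lun_num_blocks:
                return 0
            # Never report more blocks than remain between lba and the end of
            # the LUN, so the caller cannot touch allocmap bits past the bitmap.
            return min(num_blocks, lun_num_blocks - lba)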
    • block/backup: fix memory leak in bdrv_backup_top_append() · fb574de8
      Committed by Eiichi Tsukata
      bdrv_open_driver() allocates bs->opaque according to drv->instance_size.
      There is no need to allocate it and overwrite opaque in
      bdrv_backup_top_append().
      
      Reproducer:
      
        $ QTEST_QEMU_BINARY=./x86_64-softmmu/qemu-system-x86_64 valgrind -q --leak-check=full tests/test-replication -p /replication/secondary/start
        ==29792== 24 bytes in 1 blocks are definitely lost in loss record 52 of 226
        ==29792==    at 0x483AB1A: calloc (vg_replace_malloc.c:762)
        ==29792==    by 0x4B07CE0: g_malloc0 (in /usr/lib64/libglib-2.0.so.0.6000.7)
        ==29792==    by 0x12BAB9: bdrv_open_driver (block.c:1289)
        ==29792==    by 0x12BEA9: bdrv_new_open_driver (block.c:1359)
        ==29792==    by 0x1D15CB: bdrv_backup_top_append (backup-top.c:190)
        ==29792==    by 0x1CC11A: backup_job_create (backup.c:439)
        ==29792==    by 0x1CD542: replication_start (replication.c:544)
        ==29792==    by 0x1401B9: replication_start_all (replication.c:52)
        ==29792==    by 0x128B50: test_secondary_start (test-replication.c:427)
        ...
      
      Fixes: 7df7868b ("block: introduce backup-top filter driver")
      Signed-off-by: Eiichi Tsukata <devel@etsukata.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • iotests: Test handling of AioContexts with some blockdev actions · 9b8c59e7
      Committed by Sergio Lopez
      Includes the following tests:
      
       - Adding a dirty bitmap.
         * RHBZ: 1782175
      
       - Starting a drive-mirror to an NBD-backed target.
         * RHBZ: 1746217, 1773517
      
       - Aborting an external snapshot transaction.
         * RHBZ: 1779036
      
       - Aborting a blockdev backup transaction.
         * RHBZ: 1782111
      
      Each of these tests uses a VM with a number of disks running in an
      IOThread AioContext.

      Signed-off-by: Sergio Lopez <slp@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • blockdev: Return bs to the proper context on snapshot abort · 377410f6
      Committed by Sergio Lopez
      external_snapshot_abort() calls bdrv_set_backing_hd(), which returns
      state->old_bs to the main AioContext, as it is intended to be used
      when the BDS is going to be released. As that is not the case when
      aborting an external snapshot, return it to the AioContext it was in
      before the call.
      
      This issue can be triggered by issuing a transaction with two actions,
      a proper blockdev-snapshot-sync and a bogus one, so that the second one
      triggers a transaction abort (an example of such a transaction is
      sketched at the end of this entry). This results in a crash with a
      stack trace like this one:
      
       #0  0x00007fa1048b28df in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
       #1  0x00007fa10489ccf5 in __GI_abort () at abort.c:79
       #2  0x00007fa10489cbc9 in __assert_fail_base
           (fmt=0x7fa104a03300 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x5572240b44d8 "bdrv_get_aio_context(old_bs) == bdrv_get_aio_context(new_bs)", file=0x557224014d30 "block.c", line=2240, function=<optimized out>) at assert.c:92
       #3  0x00007fa1048aae96 in __GI___assert_fail
           (assertion=assertion@entry=0x5572240b44d8 "bdrv_get_aio_context(old_bs) == bdrv_get_aio_context(new_bs)", file=file@entry=0x557224014d30 "block.c", line=line@entry=2240, function=function@entry=0x5572240b5d60 <__PRETTY_FUNCTION__.31620> "bdrv_replace_child_noperm") at assert.c:101
       #4  0x0000557223e631f8 in bdrv_replace_child_noperm (child=0x557225b9c980, new_bs=new_bs@entry=0x557225c42e40) at block.c:2240
       #5  0x0000557223e68be7 in bdrv_replace_node (from=0x557226951a60, to=0x557225c42e40, errp=0x5572247d6138 <error_abort>) at block.c:4196
       #6  0x0000557223d069c4 in external_snapshot_abort (common=0x557225d7e170) at blockdev.c:1731
       #7  0x0000557223d069c4 in external_snapshot_abort (common=0x557225d7e170) at blockdev.c:1717
       #8  0x0000557223d09013 in qmp_transaction (dev_list=<optimized out>, has_props=<optimized out>, props=0x557225cc7d70, errp=errp@entry=0x7ffe704c0c98) at blockdev.c:2360
       #9  0x0000557223e32085 in qmp_marshal_transaction (args=<optimized out>, ret=<optimized out>, errp=0x7ffe704c0d08) at qapi/qapi-commands-transaction.c:44
       #10 0x0000557223ee798c in do_qmp_dispatch (errp=0x7ffe704c0d00, allow_oob=<optimized out>, request=<optimized out>, cmds=0x5572247d3cc0 <qmp_commands>) at qapi/qmp-dispatch.c:132
       #11 0x0000557223ee798c in qmp_dispatch (cmds=0x5572247d3cc0 <qmp_commands>, request=<optimized out>, allow_oob=<optimized out>) at qapi/qmp-dispatch.c:175
       #12 0x0000557223e06141 in monitor_qmp_dispatch (mon=0x557225c69ff0, req=<optimized out>) at monitor/qmp.c:120
       #13 0x0000557223e0678a in monitor_qmp_bh_dispatcher (data=<optimized out>) at monitor/qmp.c:209
       #14 0x0000557223f2f366 in aio_bh_call (bh=0x557225b9dc60) at util/async.c:117
       #15 0x0000557223f2f366 in aio_bh_poll (ctx=ctx@entry=0x557225b9c840) at util/async.c:117
       #16 0x0000557223f32754 in aio_dispatch (ctx=0x557225b9c840) at util/aio-posix.c:459
       #17 0x0000557223f2f242 in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260
       #18 0x00007fa10913467d in g_main_dispatch (context=0x557225c28e80) at gmain.c:3176
       #19 0x00007fa10913467d in g_main_context_dispatch (context=context@entry=0x557225c28e80) at gmain.c:3829
       #20 0x0000557223f31808 in glib_pollfds_poll () at util/main-loop.c:219
       #21 0x0000557223f31808 in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242
       #22 0x0000557223f31808 in main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518
       #23 0x0000557223d13201 in main_loop () at vl.c:1828
       #24 0x0000557223bbfb82 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4504
      
      RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1779036
      Signed-off-by: Sergio Lopez <slp@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
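
      For illustration, a QMP transaction of the kind described above, built
      as a Python dict; the device names and file paths are made up, and the
      crash additionally assumes "disk0" is attached to an IOThread:

        import json

        # Two blockdev-snapshot-sync actions: the first is valid, the second
        # names a nonexistent device, so qmp_transaction() aborts and rolls
        # the first action back, exercising external_snapshot_abort().
        reproducer = {
            "execute": "transaction",
            "arguments": {
                "actions": [
                    {"type": "blockdev-snapshot-sync",
                     "data": {"device": "disk0",
                              "snapshot-file": "/tmp/overlay0.qcow2",
                              "format": "qcow2"}},
                    {"type": "blockdev-snapshot-sync",
                     "data": {"device": "no-such-device",
                              "snapshot-file": "/tmp/overlay1.qcow2",
                              "format": "qcow2"}},
                ]
            }
        }
        print(json.dumps(reproducer, indent=2))  # send this over the QMP socket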
    • blockdev: Acquire AioContext on dirty bitmap functions · 91005a49
      Committed by Sergio Lopez
      The dirty bitmap addition and removal functions do not acquire the BDS
      AioContext, while they may call code that expects it to be held.
      
      This may trigger a crash with a stack trace like this one:
      
       #0  0x00007f0ef146370f in __GI_raise (sig=sig@entry=6)
           at ../sysdeps/unix/sysv/linux/raise.c:50
       #1  0x00007f0ef144db25 in __GI_abort () at abort.c:79
       #2  0x0000565022294dce in error_exit
           (err=<optimized out>, msg=msg@entry=0x56502243a730 <__func__.16350> "qemu_mutex_unlock_impl") at util/qemu-thread-posix.c:36
       #3  0x00005650222950ba in qemu_mutex_unlock_impl
           (mutex=mutex@entry=0x5650244b0240, file=file@entry=0x565022439adf "util/async.c", line=line@entry=526) at util/qemu-thread-posix.c:108
       #4  0x0000565022290029 in aio_context_release
           (ctx=ctx@entry=0x5650244b01e0) at util/async.c:526
       #5  0x000056502221cd08 in bdrv_can_store_new_dirty_bitmap
           (bs=bs@entry=0x5650244dc820, name=name@entry=0x56502481d360 "bitmap1", granularity=granularity@entry=65536, errp=errp@entry=0x7fff22831718)
           at block/dirty-bitmap.c:542
       #6  0x000056502206ae53 in qmp_block_dirty_bitmap_add
           (errp=0x7fff22831718, disabled=false, has_disabled=<optimized out>, persistent=<optimized out>, has_persistent=true, granularity=65536, has_granularity=<optimized out>, name=0x56502481d360 "bitmap1", node=<optimized out>) at blockdev.c:2894
       #7  0x000056502206ae53 in qmp_block_dirty_bitmap_add
           (node=<optimized out>, name=0x56502481d360 "bitmap1", has_granularity=<optimized out>, granularity=<optimized out>, has_persistent=true, persistent=<optimized out>, has_disabled=false, disabled=false, errp=0x7fff22831718) at blockdev.c:2856
       #8  0x00005650221847a3 in qmp_marshal_block_dirty_bitmap_add
           (args=<optimized out>, ret=<optimized out>, errp=0x7fff22831798)
           at qapi/qapi-commands-block-core.c:651
       #9  0x0000565022247e6c in do_qmp_dispatch
           (errp=0x7fff22831790, allow_oob=<optimized out>, request=<optimized out>, cmds=0x565022b32d60 <qmp_commands>) at qapi/qmp-dispatch.c:132
       #10 0x0000565022247e6c in qmp_dispatch
           (cmds=0x565022b32d60 <qmp_commands>, request=<optimized out>, allow_oob=<optimized out>) at qapi/qmp-dispatch.c:175
       #11 0x0000565022166061 in monitor_qmp_dispatch
           (mon=0x56502450faa0, req=<optimized out>) at monitor/qmp.c:145
       #12 0x00005650221666fa in monitor_qmp_bh_dispatcher
           (data=<optimized out>) at monitor/qmp.c:234
       #13 0x000056502228f866 in aio_bh_call (bh=0x56502440eae0)
           at util/async.c:117
       #14 0x000056502228f866 in aio_bh_poll (ctx=ctx@entry=0x56502440d7a0)
           at util/async.c:117
       #15 0x0000565022292c54 in aio_dispatch (ctx=0x56502440d7a0)
           at util/aio-posix.c:459
       #16 0x000056502228f742 in aio_ctx_dispatch
           (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260
       #17 0x00007f0ef5ce667d in g_main_dispatch (context=0x56502449aa40)
           at gmain.c:3176
       #18 0x00007f0ef5ce667d in g_main_context_dispatch
           (context=context@entry=0x56502449aa40) at gmain.c:3829
       #19 0x0000565022291d08 in glib_pollfds_poll () at util/main-loop.c:219
       #20 0x0000565022291d08 in os_host_main_loop_wait
           (timeout=<optimized out>) at util/main-loop.c:242
       #21 0x0000565022291d08 in main_loop_wait (nonblocking=<optimized out>)
           at util/main-loop.c:518
       #22 0x00005650220743c1 in main_loop () at vl.c:1828
       #23 0x0000565021f20a72 in main
           (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>)
           at vl.c:4504
      
      Fix this by acquiring the AioContext in qmp_block_dirty_bitmap_add()
      and qmp_block_dirty_bitmap_remove() (a command that exercises this
      path is sketched at the end of this entry).
      
      RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1782175
      Signed-off-by: Sergio Lopez <slp@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
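
      For illustration, the kind of command that exercises this path when the
      node lives in an IOThread AioContext, written as a Python dict; the node
      and bitmap names are made up, and the arguments mirror the ones visible
      in the stack trace above:

        # block-dirty-bitmap-add sent for a node attached to an IOThread; the
        # QMP handler must hold that node's AioContext around the BDS calls.
        add_bitmap = {
            "execute": "block-dirty-bitmap-add",
            "arguments": {"node": "disk0", "name": "bitmap1",
                          "granularity": 65536, "persistent": True},
        }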
    • block/backup-top: Don't acquire context while dropping top · 0abf2581
      Committed by Sergio Lopez
      All paths that lead to bdrv_backup_top_drop(), except for the call
      from backup_clean(), imply that the BDS AioContext has already been
      acquired, so doing it there too can potentially lead to QEMU hanging
      on AIO_WAIT_WHILE().
      
      An easy way to trigger this situation is to issue a transaction with
      two actions, a proper and a bogus blockdev-backup, so that the second
      one triggers a rollback (an example is sketched at the end of this
      entry). This results in a hang with a stack trace like this one:
      
       #0  0x00007fb680c75016 in __GI_ppoll (fds=0x55e74580f7c0, nfds=1, timeout=<optimized out>,
           timeout@entry=0x0, sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:39
       #1  0x000055e743386e09 in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>)
           at /usr/include/bits/poll2.h:77
       #2  0x000055e743386e09 in qemu_poll_ns
           (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at util/qemu-timer.c:336
       #3  0x000055e743388dc4 in aio_poll (ctx=0x55e7458925d0, blocking=blocking@entry=true)
           at util/aio-posix.c:669
       #4  0x000055e743305dea in bdrv_flush (bs=bs@entry=0x55e74593c0d0) at block/io.c:2878
       #5  0x000055e7432be58e in bdrv_close (bs=0x55e74593c0d0) at block.c:4017
       #6  0x000055e7432be58e in bdrv_delete (bs=<optimized out>) at block.c:4262
       #7  0x000055e7432be58e in bdrv_unref (bs=bs@entry=0x55e74593c0d0) at block.c:5644
       #8  0x000055e743316b9b in bdrv_backup_top_drop (bs=bs@entry=0x55e74593c0d0) at block/backup-top.c:273
       #9  0x000055e74331461f in backup_job_create
           (job_id=0x0, bs=bs@entry=0x55e7458d5820, target=target@entry=0x55e74589f640, speed=0, sync_mode=MIRROR_SYNC_MODE_FULL, sync_bitmap=sync_bitmap@entry=0x0, bitmap_mode=BITMAP_SYNC_MODE_ON_SUCCESS, compress=false, filter_node_name=0x0, on_source_error=BLOCKDEV_ON_ERROR_REPORT, on_target_error=BLOCKDEV_ON_ERROR_REPORT, creation_flags=0, cb=0x0, opaque=0x0, txn=0x0, errp=0x7ffddfd1efb0) at block/backup.c:478
       #10 0x000055e74315bc52 in do_backup_common
           (backup=backup@entry=0x55e746c066d0, bs=bs@entry=0x55e7458d5820, target_bs=target_bs@entry=0x55e74589f640, aio_context=aio_context@entry=0x55e7458a91e0, txn=txn@entry=0x0, errp=errp@entry=0x7ffddfd1efb0)
           at blockdev.c:3580
       #11 0x000055e74315c37c in do_blockdev_backup
           (backup=backup@entry=0x55e746c066d0, txn=0x0, errp=errp@entry=0x7ffddfd1efb0)
           at /usr/src/debug/qemu-kvm-4.2.0-2.module+el8.2.0+5135+ed3b2489.x86_64/./qapi/qapi-types-block-core.h:1492
       #12 0x000055e74315c449 in blockdev_backup_prepare (common=0x55e746a8de90, errp=0x7ffddfd1f018)
           at blockdev.c:1885
       #13 0x000055e743160152 in qmp_transaction
           (dev_list=<optimized out>, has_props=<optimized out>, props=0x55e7467fe2c0, errp=errp@entry=0x7ffddfd1f088) at blockdev.c:2340
       #14 0x000055e743287ff5 in qmp_marshal_transaction
           (args=<optimized out>, ret=<optimized out>, errp=0x7ffddfd1f0f8)
           at qapi/qapi-commands-transaction.c:44
       #15 0x000055e74333de6c in do_qmp_dispatch
           (errp=0x7ffddfd1f0f0, allow_oob=<optimized out>, request=<optimized out>, cmds=0x55e743c28d60 <qmp_commands>) at qapi/qmp-dispatch.c:132
       #16 0x000055e74333de6c in qmp_dispatch
           (cmds=0x55e743c28d60 <qmp_commands>, request=<optimized out>, allow_oob=<optimized out>)
           at qapi/qmp-dispatch.c:175
       #17 0x000055e74325c061 in monitor_qmp_dispatch (mon=0x55e745908030, req=<optimized out>)
           at monitor/qmp.c:145
       #18 0x000055e74325c6fa in monitor_qmp_bh_dispatcher (data=<optimized out>) at monitor/qmp.c:234
       #19 0x000055e743385866 in aio_bh_call (bh=0x55e745807ae0) at util/async.c:117
       #20 0x000055e743385866 in aio_bh_poll (ctx=ctx@entry=0x55e7458067a0) at util/async.c:117
       #21 0x000055e743388c54 in aio_dispatch (ctx=0x55e7458067a0) at util/aio-posix.c:459
       #22 0x000055e743385742 in aio_ctx_dispatch
           (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260
       #23 0x00007fb68543e67d in g_main_dispatch (context=0x55e745893a40) at gmain.c:3176
       #24 0x00007fb68543e67d in g_main_context_dispatch (context=context@entry=0x55e745893a40) at gmain.c:3829
       #25 0x000055e743387d08 in glib_pollfds_poll () at util/main-loop.c:219
       #26 0x000055e743387d08 in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242
       #27 0x000055e743387d08 in main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518
       #28 0x000055e74316a3c1 in main_loop () at vl.c:1828
       #29 0x000055e743016a72 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>)
           at vl.c:4504
      
      Fix this by not acquiring the AioContext there, and ensuring all paths
      leading to it have it already acquired (backup_clean()).
      
      RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1782111
      Signed-off-by: Sergio Lopez <slp@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
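
      A sketch of the two-action transaction described above, as a Python
      dict; the device and target node names are made up, and it assumes
      "disk0" sits in an IOThread and "target0" is an existing target node:

        # The first blockdev-backup is valid; the second names a nonexistent
        # device, so the transaction rolls back and the backup-top filter
        # prepared for the first action has to be dropped again.
        reproducer = {
            "execute": "transaction",
            "arguments": {
                "actions": [
                    {"type": "blockdev-backup",
                     "data": {"device": "disk0", "target": "target0",
                              "sync": "full"}},
                    {"type": "blockdev-backup",
                     "data": {"device": "no-such-device", "target": "target0",
                              "sync": "full"}},
                ]
            }
        }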
    • blockdev: honor bdrv_try_set_aio_context() context requirements · 3ea67e08
      Committed by Sergio Lopez
      bdrv_try_set_aio_context() requires that the old context is held, and
      the new context is not held. Fix all the occurrences where it's not
      done this way.
      Suggested-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Sergio Lopez <slp@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • blockdev: unify qmp_blockdev_backup and blockdev-backup transaction paths · 5b7bfe51
      Committed by Sergio Lopez
      Issuing a blockdev-backup from qmp_blockdev_backup takes a slightly
      different path than when it's issued from a transaction. In the code,
      this is manifested as some redundancy between do_blockdev_backup() and
      blockdev_backup_prepare().
      
      This change unifies both paths, merging do_blockdev_backup() and
      blockdev_backup_prepare(), and changing qmp_blockdev_backup() to
      create a transaction instead of calling do_backup_common() directly
      (the two equivalent invocations are sketched at the end of this entry).
      
      As a side effect, qmp_blockdev_backup() is now executed inside a
      drained section, just as when creating a blockdev-backup transaction.
      This change is visible from the user's perspective: the job gets
      paused and immediately resumed before starting the actual work.
      Signed-off-by: Sergio Lopez <slp@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
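
      For illustration, the two ways of starting the same backup that this
      commit routes through a single code path, written as Python dicts; the
      device and target names are made up:

        # Direct command, handled by qmp_blockdev_backup() ...
        direct = {
            "execute": "blockdev-backup",
            "arguments": {"device": "disk0", "target": "target0",
                          "sync": "full"},
        }

        # ... and the single-action transaction form, handled by
        # blockdev_backup_prepare(). After this change both follow the same
        # transaction-based path internally.
        transactional = {
            "execute": "transaction",
            "arguments": {
                "actions": [{"type": "blockdev-backup",
                             "data": {"device": "disk0", "target": "target0",
                                      "sync": "full"}}]
            },
        }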
    • blockdev: unify qmp_drive_backup and drive-backup transaction paths · 2288ccfa
      Committed by Sergio Lopez
      Issuing a drive-backup from qmp_drive_backup takes a slightly
      different path than when it's issued from a transaction. In the code,
      this is manifested as some redundancy between do_drive_backup() and
      drive_backup_prepare().
      
      This change unifies both paths, merging do_drive_backup() and
      drive_backup_prepare(), and changing qmp_drive_backup() to create a
      transaction instead of calling do_backup_common() directly.
      
      As a side effect, qmp_drive_backup() is now executed inside a drained
      section, just as when creating a drive-backup transaction. This change
      is visible from the user's perspective: the job gets paused and
      immediately resumed before starting the actual work.
      
      Also fix tests 141, 185 and 219 to cope with the extra
      JOB_STATUS_CHANGE lines.
      Signed-off-by: Sergio Lopez <slp@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • blockdev: fix coding style issues in drive_backup_prepare · 471ded69
      Committed by Sergio Lopez
      Fix a couple of minor coding style issues in drive_backup_prepare.
      Signed-off-by: Sergio Lopez <slp@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • iotests: Add more "skip_if_unsupported" statements to the python tests · 9442bebe
      Committed by Thomas Huth
      The python code already provides a way to skip tests if the
      corresponding driver is not available in the qemu binary. Use it in
      more spots so that the tests are skipped instead of failing when the
      driver has been disabled.
      
      While we're at it, we can now also remove some of the old checks that
      were using iotests.supports_quorum(), which were apparently not
      working as expected: the tests aborted instead of being skipped when
      "quorum" was missing from the QEMU binary.
      Signed-off-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • iotests.py: Let wait_migration wait even more · 8da7969b
      Committed by Max Reitz
      The "migration completed" event may be sent (on the source, to be
      specific) before the migration is actually completed, so the VM runstate
      will still be "finish-migrate" instead of "postmigrate".  So ask the
      users of VM.wait_migration() to specify the final runstate they desire
      and then poll the VM until it has reached that state; a sketch of this
      polling loop follows at the end of this entry.  (This should be over
      very quickly, so busy polling is fine.)
      
      Without this patch, I see intermittent failures in the new iotest 280
      under high system load.  I have not yet seen such failures with other
      iotests that use VM.wait_migration() and query-status afterwards, but
      maybe they just occur even more rarely, or it is because they also wait
      on the destination VM to be running.
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
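
      A minimal sketch of the polling idea; "qmp" here stands for any callable
      that sends one QMP command and returns the parsed reply, so the helper
      name and its parameters are hypothetical, not the actual iotests.py code:

        def wait_for_runstate(qmp, target_state):
            # Busy-poll query-status until the VM reports the desired runstate,
            # e.g. "postmigrate" on the source once migration has completed.
            while qmp("query-status")["return"]["status"] != target_state:
                pass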
  2. 27 January 2020 (17 commits)
  3. 25 January 2020 (11 commits)