1. 04 Aug, 2017: 1 commit
  2. 01 Aug, 2017: 22 commits
  3. 20 Apr, 2017: 1 commit
  4. 19 Apr, 2017: 1 commit
  5. 18 Apr, 2017: 4 commits
    • Merge remote-tracking branch 'remotes/famz/tags/block-pull-request' into staging · d1263f8f
      Peter Maydell authored
      # gpg: Signature made Tue 18 Apr 2017 15:58:32 BST
      # gpg:                using RSA key 0xCA35624C6A9171C6
      # gpg: Good signature from "Fam Zheng <famz@redhat.com>"
      # gpg: WARNING: This key is not certified with sufficiently trusted signatures!
      # gpg:          It is not certain that the signature belongs to the owner.
      # Primary key fingerprint: 5003 7CB7 9706 0F76 F021  AD56 CA35 624C 6A91 71C6
      
      * remotes/famz/tags/block-pull-request:
        block: Drain BH in bdrv_drained_begin
        block: Walk bs->children carefully in bdrv_drain_recurse
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      d1263f8f
    • block: Drain BH in bdrv_drained_begin · 91af091f
      Fam Zheng authored
      During block job completion, nothing prevents
      block_job_defer_to_main_loop_bh from being called in a nested
      aio_poll(), which causes trouble, for example in this code path:
      
          qmp_block_commit
            commit_active_start
              bdrv_reopen
                bdrv_reopen_multiple
                  bdrv_reopen_prepare
                    bdrv_flush
                      aio_poll
                        aio_bh_poll
                          aio_bh_call
                            block_job_defer_to_main_loop_bh
                              stream_complete
                                bdrv_reopen
      
      block_job_defer_to_main_loop_bh is the last step of the stream job,
      which should have been "paused" by the bdrv_drained_begin/end in
      bdrv_reopen_multiple, but that doesn't happen because it runs as a
      main loop BH.

      For the same reason that block jobs must be paused between
      drained_begin and drained_end, the BHs they schedule must be
      excluded as well. To achieve this, this patch forces BDRV_POLL_WHILE
      to drain pending BHs.
      
      As a side effect this fixes a hang in block_job_detach_aio_context
      during system_reset when a block job is ready:
      
          #0  0x0000555555aa79f3 in bdrv_drain_recurse
          #1  0x0000555555aa825d in bdrv_drained_begin
          #2  0x0000555555aa8449 in bdrv_drain
          #3  0x0000555555a9c356 in blk_drain
          #4  0x0000555555aa3cfd in mirror_drain
          #5  0x0000555555a66e11 in block_job_detach_aio_context
          #6  0x0000555555a62f4d in bdrv_detach_aio_context
          #7  0x0000555555a63116 in bdrv_set_aio_context
          #8  0x0000555555a9d326 in blk_set_aio_context
          #9  0x00005555557e38da in virtio_blk_data_plane_stop
          #10 0x00005555559f9d5f in virtio_bus_stop_ioeventfd
          #11 0x00005555559fa49b in virtio_bus_stop_ioeventfd
          #12 0x00005555559f6a18 in virtio_pci_stop_ioeventfd
          #13 0x00005555559f6a18 in virtio_pci_reset
          #14 0x00005555559139a9 in qdev_reset_one
          #15 0x0000555555916738 in qbus_walk_children
          #16 0x0000555555913318 in qdev_walk_children
          #17 0x0000555555916738 in qbus_walk_children
          #18 0x00005555559168ca in qemu_devices_reset
          #19 0x000055555581fcbb in pc_machine_reset
          #20 0x00005555558a4d96 in qemu_system_reset
          #21 0x000055555577157a in main_loop_should_exit
          #22 0x000055555577157a in main_loop
          #23 0x000055555577157a in main
      
      The rationale is that the loop in block_job_detach_aio_context cannot
      make any progress in pausing/completing the job, because bs->in_flight
      is 0, so bdrv_drain doesn't process the block_job_defer_to_main_loop
      BH. With this patch, it does.
      Reported-by: Jeff Cody <jcody@redhat.com>
      Signed-off-by: Fam Zheng <famz@redhat.com>
      Message-Id: <20170418143044.12187-3-famz@redhat.com>
      Reviewed-by: Jeff Cody <jcody@redhat.com>
      Tested-by: Jeff Cody <jcody@redhat.com>
      Signed-off-by: Fam Zheng <famz@redhat.com>
      91af091f
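      The BH-draining requirement described above can be sketched outside QEMU. In the following sketch, bh_fn, schedule_bh, drain_bhs, and bump are illustrative names, not QEMU's actual API:

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Minimal sketch of the idea (hypothetical names, not QEMU's API):
       * a drained section's poll loop must also run pending bottom halves,
       * otherwise a completion BH scheduled on the main loop slips past
       * the drain even though no request is in flight. */
      typedef void (*bh_fn)(void *);

      struct bh {
          bh_fn fn;
          void *opaque;
          struct bh *next;
      };

      static struct bh *bh_list;

      static void schedule_bh(struct bh *bh)
      {
          bh->next = bh_list;
          bh_list = bh;
      }

      /* What the patch adds to BDRV_POLL_WHILE, in spirit: run every
       * scheduled BH before declaring the node quiescent. */
      static int drain_bhs(void)
      {
          int ran = 0;
          while (bh_list) {
              struct bh *bh = bh_list;
              bh_list = bh->next;
              bh->fn(bh->opaque);
              ran++;
          }
          return ran;
      }

      /* Example BH: a stand-in for block_job_defer_to_main_loop_bh. */
      static void bump(void *opaque)
      {
          (*(int *)opaque)++;
      }
      ```

      A drain loop that only waited for in-flight requests would leave the scheduled BH pending, which is exactly the hang described above.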
    • block: Walk bs->children carefully in bdrv_drain_recurse · 178bd438
      Fam Zheng authored
      The recursive bdrv_drain_recurse may run a block job completion BH
      that drops nodes. Upcoming changes will make that more likely, and a
      use-after-free would happen without this patch.
      
      Stash the bs pointer and use bdrv_ref/bdrv_unref in addition to
      QLIST_FOREACH_SAFE to prevent such a case from happening.
      
      Since bdrv_unref accesses global state that is not protected by the
      AioContext lock, we cannot use bdrv_ref/bdrv_unref unconditionally.
      Fortunately, the protection is not needed in an IOThread, because
      only the main loop can modify the graph, with the AioContext lock held.
      Signed-off-by: Fam Zheng <famz@redhat.com>
      Message-Id: <20170418143044.12187-2-famz@redhat.com>
      Reviewed-by: Jeff Cody <jcody@redhat.com>
      Tested-by: Jeff Cody <jcody@redhat.com>
      Signed-off-by: Fam Zheng <famz@redhat.com>
      178bd438
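      The commit's combination of _SAFE iteration and per-child referencing can be illustrated with a generic list. Here, struct child, child_ref, child_unref, and visit_all are stand-ins, not QEMU's actual bdrv_* functions:

      ```c
      #include <assert.h>
      #include <stdlib.h>

      /* Generic sketch (hypothetical names): the _SAFE pattern stashes the
       * next pointer before the callback runs, and the extra reference keeps
       * the current node alive even if the callback drops nodes. */
      struct child {
          int refcnt;
          struct child *next;
      };

      static void child_ref(struct child *c)
      {
          c->refcnt++;
      }

      static void child_unref(struct child *c)
      {
          if (--c->refcnt == 0) {
              free(c);
          }
      }

      static int visit_all(struct child *head, void (*visit)(struct child *))
      {
          int visited = 0;
          struct child *c = head;
          while (c) {
              struct child *next = c->next; /* FOREACH_SAFE: stash next first */
              child_ref(c);                 /* keep c alive across the callback */
              if (visit) {
                  visit(c);
              }
              child_unref(c);
              c = next;
              visited++;
          }
          return visited;
      }
      ```

      The stashed next pointer protects against the callback unlinking the current node, while the reference protects against it being freed outright.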
    • 9pfs: local: set the path of the export root to "." · 9c6b899f
      Greg Kurz authored
      The local backend was recently converted to using "at*()" syscalls in order
      to ensure all accesses happen below the shared directory. This requires that
      we only pass relative paths, otherwise the dirfd argument to the "at*()"
      syscalls is ignored and the path is treated as an absolute path in the host.
      This is actually the case for paths in all fids, with the notable exception
      of the root fid, whose path is "/". This causes the following backend ops to
      act on the "/" directory of the host instead of the virtfs shared directory
      when the export root is involved:
      - lstat
      - chmod
      - chown
      - utimensat
      
      That is, for example, a chmod of /9p_mount_point in the guest would
      be converted to a chmod of / in the host. This could cause security
      issues with a privileged QEMU.
      
      All "*at()" syscalls are being passed an open file descriptor. In the case
      of the export root, this file descriptor points to the path in the host that
      was passed to -fsdev.
      
      The fix is thus as simple as changing the path of the export root fid to be
      "." instead of "/".
      
      This is CVE-2017-7471.
      
      Cc: qemu-stable@nongnu.org
      Reported-by: Léo Gaspard <leo@gaspard.io>
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      9c6b899f
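      The dirfd-ignoring behaviour this fix relies on is standard POSIX and easy to demonstrate. resolves_to below is an illustrative helper, and the usage assumes a host with a /tmp directory:

      ```c
      #include <assert.h>
      #include <fcntl.h>
      #include <sys/stat.h>
      #include <unistd.h>

      /* Illustrative helper: does fstatat(dirfd, path) resolve to the same
       * file as stat(expect)?  With an absolute path, POSIX says dirfd is
       * ignored entirely, which is exactly why a root fid path of "/"
       * escaped the export; "." stays anchored at dirfd. */
      static int resolves_to(int dirfd, const char *path, const char *expect)
      {
          struct stat a, b;
          if (fstatat(dirfd, path, &a, 0) < 0 || stat(expect, &b) < 0) {
              return -1;
          }
          return a.st_dev == b.st_dev && a.st_ino == b.st_ino;
      }
      ```

      With dirfd open on /tmp, fstatat(dirfd, "/", ...) stats the host root while fstatat(dirfd, ".", ...) stays inside /tmp, mirroring the "/" vs "." distinction in the fix.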
  6. 12 Apr, 2017: 1 commit
  7. 11 Apr, 2017: 10 commits
    • block/io: Comment out permission assertions · e3e0003a
      Max Reitz authored
      In case of block migration, there may be writes to BlockBackends that do
      not have the write permission taken. Before this issue is fixed (which
      is not going to happen in 2.9), we therefore cannot assert that this is
      the case.
      Suggested-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Tested-by: Kevin Wolf <kwolf@redhat.com>
      Message-id: 20170411145050.31290-1-mreitz@redhat.com
      Tested-by: Laurent Vivier <lvivier@redhat.com>
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      e3e0003a
    • sheepdog: Fix crash in co_read_response() · 5eceb01a
      Kevin Wolf authored
      This fixes a regression introduced in commit 9d456654.
      
      aio_co_wake() can only be used to reenter a coroutine that was already
      previously entered, otherwise co->ctx is uninitialised and we access
      garbage. Using it immediately after qemu_coroutine_create() like in
      co_read_response() is wrong and causes segfaults.
      
      Replace the call with aio_co_enter(), which gets an explicit AioContext
      parameter and works even for new coroutines.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Tested-by: Kashyap Chamarthy <kchamart@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-id: 1491919733-21065-1-git-send-email-kwolf@redhat.com
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      5eceb01a
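      The contract difference between the two entry points can be sketched with toy types. struct ctx, struct coro, co_enter, and co_wake below are illustrative stand-ins, not QEMU's coroutine API:

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Toy model of the contract (hypothetical types/names): "wake" reuses
       * a context recorded by a previous entry, so it is only valid on a
       * coroutine that has run before; "enter" takes the context explicitly
       * and is therefore safe right after creation. */
      struct ctx {
          int id;
      };

      struct coro {
          struct ctx *ctx;   /* set on first enter; unset before that */
          int entered;
      };

      /* Safe for brand-new coroutines: the caller supplies the context. */
      static void co_enter(struct ctx *ctx, struct coro *co)
      {
          co->ctx = ctx;
          co->entered++;
      }

      /* Only valid after a previous enter has recorded co->ctx. */
      static int co_wake(struct coro *co)
      {
          if (co->ctx == NULL) {
              return -1;     /* never-entered coroutine: the bug's shape */
          }
          co->entered++;
          return 0;
      }
      ```

      In the real crash co->ctx is uninitialised rather than NULL, so the failure is a segfault instead of a clean error; the toy model only captures the ordering constraint.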
    • Merge remote-tracking branch 'remotes/maxreitz/tags/pull-block-2017-04-11' into staging · f5ac5cfe
      Peter Maydell authored
      Block patches for 2.9.0-rc4
      
      # gpg: Signature made Tue 11 Apr 2017 14:40:07 BST
      # gpg:                using RSA key 0xF407DB0061D5CF40
      # gpg: Good signature from "Max Reitz <mreitz@redhat.com>"
      # Primary key fingerprint: 91BE B60A 30DB 3E88 57D1  1829 F407 DB00 61D5 CF40
      
      * remotes/maxreitz/tags/pull-block-2017-04-11:
        iscsi: Fix iscsi_create
        throttle: Remove block from group on hot-unplug
        block: pass the right options for BlockDriver.bdrv_open()
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      f5ac5cfe
    • iscsi: Fix iscsi_create · 2ec9a782
      Fam Zheng authored
      Since d5895fcb (iscsi: Split URL into individual options), creating
      qcow2 image on an iscsi LUN fails:
      
          qemu-img create -f qcow2 iscsi://$SERVER/$IQN/0 1G
          qemu-img: iscsi://$SERVER/$IQN/0: Could not create image: Invalid
              argument
      
      The problem is that iscsi_open now expects transport_name, portal,
      and target to have already been parsed into structured options by
      iscsi_parse_filename, which is not called from iscsi_create.
      Signed-off-by: Fam Zheng <famz@redhat.com>
      Message-id: 20170410075451.21329-1-famz@redhat.com
      Reviewed-by: Eric Blake <eblake@redhat.com>
      [mreitz: Dropped now superfluous
               qdict_put(bs_options, "filename", ...)]
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      2ec9a782
    • throttle: Remove block from group on hot-unplug · 1606e4cf
      Eric Blake authored
      When a block device that is part of a throttle group is hot-unplugged,
      we forgot to remove it from the throttle group. This leaves stale
      memory around, and causes an easily reproducible crash:
      
      $ ./x86_64-softmmu/qemu-system-x86_64 -nodefaults -nographic -qmp stdio \
      -device virtio-scsi-pci,bus=pci.0 -drive \
      id=drive_image2,if=none,format=raw,file=file2,bps=512000,iops=100,group=foo \
      -device scsi-hd,id=image2,drive=drive_image2 -drive \
      id=drive_image3,if=none,format=raw,file=file3,bps=512000,iops=100,group=foo \
      -device scsi-hd,id=image3,drive=drive_image3
      {'execute':'qmp_capabilities'}
      {'execute':'device_del','arguments':{'id':'image3'}}
      {'execute':'system_reset'}
      
      Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1428810
      Suggested-by: Alberto Garcia <berto@igalia.com>
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-id: 20170406190847.29347-1-eblake@redhat.com
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      1606e4cf
    • block: pass the right options for BlockDriver.bdrv_open() · 7a9e5119
      Dong Jia Shi authored
      raw_open() expects the caller to always pass in the right actual
      @options parameter. But when trying to apply a snapshot on an RBD
      image, bdrv_snapshot_goto() calls raw_open() (via the bdrv_open
      callback on the BlockDriver) with a NULL @options, and that results
      in a segmentation fault.

      For the other non-raw format drivers, it also makes sense to pass
      in the actual options, although they don't trigger the problem so
      far.
      
      Let's prepare a @options by adding the "file" key-value pair to a
      copy of the actual options that were given for the node (i.e.
      bs->options), and pass it to the callback.
      
      BlockDriver.bdrv_open() expects bs->file to be NULL and just
      overwrites it with the result from bdrv_open_child(). That means we
      should actually make sure it's NULL because otherwise the child BDS
      will have a reference count that is 1 too high. So we unconditionally
      invoke bdrv_unref_child() before calling BlockDriver.bdrv_open(), and
      we wrap everything in bdrv_ref()/bdrv_unref() so the BDS isn't
      deleted in the meantime.
      Suggested-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
      Message-id: 20170405091909.36357-2-bjsdjshi@linux.vnet.ibm.com
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      7a9e5119
    • Merge remote-tracking branch 'remotes/famz/tags/block-pull-request' into staging · aa388ddc
      Peter Maydell authored
      # gpg: Signature made Tue 11 Apr 2017 13:10:55 BST
      # gpg:                using RSA key 0xCA35624C6A9171C6
      # gpg: Good signature from "Fam Zheng <famz@redhat.com>"
      # gpg: WARNING: This key is not certified with sufficiently trusted signatures!
      # gpg:          It is not certain that the signature belongs to the owner.
      # Primary key fingerprint: 5003 7CB7 9706 0F76 F021  AD56 CA35 624C 6A91 71C6
      
      * remotes/famz/tags/block-pull-request:
        sheepdog: Use bdrv_coroutine_enter before BDRV_POLL_WHILE
        block: Fix bdrv_co_flush early return
        block: Use bdrv_coroutine_enter to start I/O coroutines
        qemu-io-cmds: Use bdrv_coroutine_enter
        blockjob: Use bdrv_coroutine_enter to start coroutine
        block: Introduce bdrv_coroutine_enter
        async: Introduce aio_co_enter
        coroutine: Extract qemu_aio_coroutine_enter
        tests/block-job-txn: Don't start block job before adding to txn
        block: Quiesce old aio context during bdrv_set_aio_context
        block: Make bdrv_parent_drained_begin/end public
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      aa388ddc
    • sheepdog: Use bdrv_coroutine_enter before BDRV_POLL_WHILE · 76296dff
      Fam Zheng authored
      When called from the main thread, the coroutine should run in the
      context of bs. Use bdrv_coroutine_enter to ensure that.
      Signed-off-by: Fam Zheng <famz@redhat.com>
      76296dff
    • block: Fix bdrv_co_flush early return · 49ca6259
      Fam Zheng authored
      bdrv_inc_in_flight and bdrv_dec_in_flight are mandatory for
      BDRV_POLL_WHILE to work, even for the shortcut case where flush is
      unnecessary. Move the if block to below bdrv_dec_in_flight, and
      while at it, fix the variable declaration position.
      Signed-off-by: Fam Zheng <famz@redhat.com>
      Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      49ca6259
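      The invariant this fix restores can be sketched generically. in_flight, inc_in_flight, dec_in_flight, and do_flush below are illustrative names, not the actual QEMU functions:

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Sketch (hypothetical names): every path through a request function,
       * including the "nothing to do" shortcut, must pass through the
       * inc/dec pair, because pollers use in_flight == 0 as their
       * termination condition. */
      static int in_flight;

      static void inc_in_flight(void)
      {
          in_flight++;
      }

      static void dec_in_flight(void)
      {
          in_flight--;
      }

      static int do_flush(bool flush_needed)
      {
          int ret = 0;

          inc_in_flight();
          if (!flush_needed) {
              goto early_exit;   /* shortcut still exits through dec */
          }
          /* ... the actual flush work would go here ... */
      early_exit:
          dec_in_flight();
          return ret;
      }
      ```

      The buggy shape returned before the increment, so a concurrent poller never observed the request at all; routing the shortcut through the counter pair keeps the bracketing intact on every path.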
    • block: Use bdrv_coroutine_enter to start I/O coroutines · e92f0e19
      Fam Zheng authored
      BDRV_POLL_WHILE waits for the started I/O by releasing bs's ctx and
      then polling the main context, which relies on the yielded coroutine
      continuing on bs->ctx before notifying qemu_aio_context with
      bdrv_wakeup().

      Thus, using qemu_coroutine_enter to start I/O is wrong, because if
      the coroutine is entered from the main loop, co->ctx will be
      qemu_aio_context. As a result of the "release, poll, acquire" loop
      of BDRV_POLL_WHILE, race conditions happen when both the main thread
      and the iothread access the same BDS:
      
        main loop                                iothread
      -----------------------------------------------------------------------
        blockdev_snapshot
          aio_context_acquire(bs->ctx)
                                                 virtio_scsi_data_plane_handle_cmd
          bdrv_drained_begin(bs->ctx)
          bdrv_flush(bs)
            bdrv_co_flush(bs)                      aio_context_acquire(bs->ctx).enter
              ...
              qemu_coroutine_yield(co)
            BDRV_POLL_WHILE()
              aio_context_release(bs->ctx)
                                                   aio_context_acquire(bs->ctx).return
                                                     ...
                                                       aio_co_wake(co)
              aio_poll(qemu_aio_context)               ...
                co_schedule_bh_cb()                    ...
                  qemu_coroutine_enter(co)             ...
      
                    /* (A) bdrv_co_flush(bs)           /* (B) I/O on bs */
                            continues... */
                                                   aio_context_release(bs->ctx)
              aio_context_acquire(bs->ctx)
      
      Note that in the above case, bdrv_drained_begin() doesn't do the
      "release, poll, acquire" in BDRV_POLL_WHILE, because bs->in_flight == 0.

      Fix this by using bdrv_coroutine_enter to enter the coroutine in the
      right context.

      iotests 109 output is updated because the coroutine reenter flow
      during mirror job completion is different (now through
      co_queue_wakeup instead of the unconditional qemu_coroutine_switch
      before), making the final job len different.
      Signed-off-by: Fam Zheng <famz@redhat.com>
      Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      e92f0e19