1. 10 Jun 2019 (4 commits)
  2. 08 Jun 2019 (1 commit)
  3. 07 Jun 2019 (2 commits)
  4. 06 Jun 2019 (12 commits)
  5. 04 Jun 2019 (5 commits)
    • scsi-disk: Use qdev_prop_drive_iothread · 4f71fb43
      Kevin Wolf committed
      This makes use of qdev_prop_drive_iothread for scsi-disk so that the
      disk can be attached to a node that is already in the target
      AioContext. We need to check that the HBA actually supports iothreads;
      otherwise, scsi-disk must make sure that the node is already in the
      main AioContext.
      
      This changes the error message for conflicting iothread settings.
      Previously, virtio-scsi produced the error message; now it comes from
      blk_set_aio_context(). Update a test case accordingly.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      4f71fb43
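      A minimal sketch of the scsi-disk side of this change, assuming the
      DEFINE_PROP_DRIVE_IOTHREAD macro from the companion commit 307a5f60
      below; the property list is abridged and field names are illustrative:

        static Property scsi_hd_properties[] = {
            /* Was DEFINE_PROP_DRIVE(): the backend is now left in whatever
             * AioContext it already lives in, and scsi-disk verifies at
             * realize time that the HBA can work with that context. */
            DEFINE_PROP_DRIVE_IOTHREAD("drive", SCSIDiskState, qdev.conf.blk),
            /* ... remaining properties ... */
            DEFINE_PROP_END_OF_LIST(),
        };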
    • block: Add qdev_prop_drive_iothread property type · 307a5f60
      Kevin Wolf committed
      Some qdev block devices have support for iothreads and take care of the
      AioContext they are running in, but most devices don't know about any of
      this. For the latter category, the qdev drive property must make sure
      that their BlockBackend is in the main AioContext.
      
      Unfortunately, the current code just does the same thing for devices
      that do support iothreads. This is not correct, and the problem would
      show up as soon as we actually try to keep a consistent AioContext
      assignment across all nodes and users of a block graph subtree: if a
      node is already in a non-default AioContext because of one of its
      users, attaching a new device should still be possible if that device
      can work in the same AioContext. Switching the node back to the main
      context first and only then into the device AioContext would fail,
      because the existing user wouldn't allow the switch to the main
      context.
      
      So devices that support iothreads need a different kind of drive
      property that leaves the node in its current AioContext, but by using
      this type, the device promises to check later that it can work with this
      context.
      
      This patch adds the qdev infrastructure that allows devices to signal
      that they handle iothreads and that qdev should leave the AioContext
      alone.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      307a5f60
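      A hedged sketch of the infrastructure this describes, modeled on the
      existing qdev_prop_drive; the setter name and the description string
      are assumptions, not verified code:

        const PropertyInfo qdev_prop_drive_iothread = {
            .name = "str",
            .description = "Node name or ID of a block device to use as a backend",
            .get = get_drive,
            .set = set_drive_iothread, /* like set_drive, but leaves the
                                        * node in its current AioContext */
            .release = release_drive,
        };

        #define DEFINE_PROP_DRIVE_IOTHREAD(_n, _s, _f) \
            DEFINE_PROP(_n, _s, _f, qdev_prop_drive_iothread, BlockBackend *)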
    • block: Add BlockBackend.ctx · d861ab3a
      Kevin Wolf committed
      This adds a new parameter to blk_new() which requires its callers to
      declare from which AioContext this BlockBackend is going to be used (or
      the locks of which AioContext need to be taken anyway).
      
      For now, the given context is only stored and kept up to date when
      AioContexts change. Actually applying the stored AioContext to the
      root node is left to another commit.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      d861ab3a
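      A sketch of the resulting interface and a typical caller without
      iothread support, which declares the main context explicitly; the
      call site is illustrative:

        BlockBackend *blk_new(AioContext *ctx, uint64_t perm,
                              uint64_t shared_perm);

        /* a device that only runs in the main loop: */
        blk = blk_new(qemu_get_aio_context(), BLK_PERM_ALL, BLK_PERM_ALL);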
    • block: Add Error to blk_set_aio_context() · 97896a48
      Kevin Wolf committed
      Add an Error parameter to blk_set_aio_context() and use
      bdrv_child_try_set_aio_context() internally to check whether all
      involved nodes can actually support the AioContext switch.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      97896a48
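      A sketch of the updated signature and an error-checked call; the int
      return value shown here is an assumption for brevity:

        int blk_set_aio_context(BlockBackend *blk, AioContext *new_context,
                                Error **errp);

        /* callers can now fail cleanly if some node in the subtree
         * cannot move to the new context: */
        if (blk_set_aio_context(blk, new_context, errp) < 0) {
            return;
        }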
  6. 03 Jun 2019 (8 commits)
    • q35: Revert to kernel irqchip · c87759ce
      Alex Williamson committed
      Commit b2fc91db ("q35: set split kernel irqchip as default") changed
      the default for the pc-q35-4.0 machine type to use split irqchip, which
      turned out to have disastrous effects on vfio-pci INTx support.  KVM
      resampling irqfds are registered for handling these interrupts, but
      these are non-functional in split irqchip mode.  We can't simply test
      for split irqchip in QEMU as userspace handling of this interrupt is a
      significant performance regression versus KVM handling (GeForce GPUs
      assigned to Windows VMs are non-functional without forcing MSI mode or
      re-enabling kernel irqchip).
      
      The resolution is to revert the change in default irqchip mode in the
      pc-q35-4.1 machine and create a pc-q35-4.0.1 machine for the 4.0-stable
      branch.  The pc-q35-4.0 machine type should not be used in vfio-pci
      configurations for devices requiring legacy INTx support without
      explicitly modifying the VM configuration to use kernel irqchip.
      
      Link: https://bugs.launchpad.net/qemu/+bug/1826422
      Fixes: b2fc91db ("q35: set split kernel irqchip as default")
      Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Message-Id: <155786484688.13873.6037015630912983760.stgit@gimli.home>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      c87759ce
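      Affected guests on the 4.0 machine type can be switched back manually
      with "-machine q35,kernel-irqchip=on". A hedged sketch of the
      machine-type side of the revert, assuming the
      default_kernel_irqchip_split MachineClass field introduced by
      b2fc91db:

        static void pc_q35_4_1_machine_options(MachineClass *m)
        {
            pc_q35_machine_options(m);
            /* back to the pre-4.0 default: full KVM irqchip, so KVM
             * resampling irqfds keep working for vfio-pci INTx */
            m->default_kernel_irqchip_split = false;
        }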
    • imx25-pdk: create ds1338 for qtest inside the test · c4f00daa
      Paolo Bonzini committed
      There is no need for the board to create a device that exists only
      for testing. Instead, create it in the qtest itself, so that the test
      can also run on other boards.
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      c4f00daa
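      A hedged sketch of the idea: the test instantiates the RTC itself
      instead of relying on board wiring. The I2C bus name and device
      options here are assumptions, not the literal test code:

        QTestState *qts = qtest_init("-machine imx25-pdk "
                                     "-device ds1338,bus=i2c-bus.0,address=0x68");
        /* ... exercise the RTC registers over I2C ... */
        qtest_quit(qts);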
    • edu: uses uint64_t in dma operation · 7fca21c8
      Li Qiang committed
      The DMA-related variables dma.dst/src/cnt are of type dma_addr_t,
      which is uint64_t on x64 platforms. Change their usage from uint32_t
      to uint64_t to avoid truncation in edu_dma_timer.
      Signed-off-by: Li Qiang <liq3ea@163.com>
      Message-Id: <20190510164349.81507-4-liq3ea@163.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      7fca21c8
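      A sketch of the kind of truncation fixed in edu_dma_timer; the locals
      and surrounding logic are abridged:

        /* dma.src, dma.dst and dma.cnt are dma_addr_t, i.e. 64-bit on
         * x64; holding them in 32-bit locals silently dropped the high
         * bits before they reached pci_dma_read()/pci_dma_write(): */
        uint64_t dst = edu->dma.dst;   /* was: uint32_t dst = edu->dma.dst; */
        uint64_t src = edu->dma.src;   /* was: uint32_t src = edu->dma.src; */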
    • edu: mmio: allow 64-bit access in read dispatch · c45eb53a
      Li Qiang committed
      The edu spec says that when the address is >= 0x80, the MMIO area can
      be accessed with 64-bit operations.
      Signed-off-by: Li Qiang <liq3ea@163.com>
      Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
      Message-Id: <20190510164349.81507-3-liq3ea@163.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      c45eb53a
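      A sketch of the size check this adds to the read dispatch; the
      structure follows the description above, not the verbatim patch:

        static uint64_t edu_mmio_read(void *opaque, hwaddr addr, unsigned size)
        {
            EduState *edu = opaque;
            uint64_t val = ~0ULL;

            /* below 0x80 only 32-bit accesses are defined; from 0x80
             * upward the spec also permits 64-bit accesses */
            if (addr < 0x80 && size != 4) {
                return val;
            }
            if (addr >= 0x80 && size != 4 && size != 8) {
                return val;
            }
            /* ... dispatch on addr as before ... */
            return val;
        }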
    • edu: mmio: allow 64-bit access · 20fb3105
      Li Qiang committed
      The edu spec says the MMIO area can be accessed with 64-bit
      operations. However, 'max_access_size' is currently not set
      accordingly, so the MMIO access dispatch can only handle 32 bits at a
      time. This patch fixes that to respect the spec.
      Signed-off-by: Li Qiang <liq3ea@163.com>
      Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
      Message-Id: <20190510164349.81507-2-liq3ea@163.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      20fb3105
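      A sketch of the corresponding MemoryRegionOps change; the read/write
      handler names are the ones used in the sketch above:

        static const MemoryRegionOps edu_mmio_ops = {
            .read = edu_mmio_read,
            .write = edu_mmio_write,
            .endianness = DEVICE_NATIVE_ENDIAN,
            .valid = {
                .min_access_size = 4,
                .max_access_size = 8, /* was unset; the memory core then
                                       * caps accesses at 32 bits */
            },
        };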
    • vhost-scsi: Allow user to enable migration · b3e89c94
      Liran Alon committed
      In order to perform a valid migration of a vhost-scsi device,
      the following requirements must be met:
      (1) The virtio-scsi device state needs to be saved & loaded.
      (2) The vhost backend must be stopped before virtio-scsi device state
      is saved:
        (2.1) Sync vhost backend state to virtio-scsi device state.
        (2.2) No further I/O requests are made by vhost backend to target
              SCSI device.
        (2.3) No further guest memory access takes place after VM is stopped.
      (3) Requests in-flight to target SCSI device are completed before
          migration handover.
      (4) Target SCSI device state needs to be saved & loaded into the
          destination host target SCSI device.
      
      The previous commit ("vhost-scsi: Add VMState descriptor") added
      support for saving & loading the device state using VMState.
      This meets requirement (1).
      
      When the VM is stopped by the migration thread (on pre-copy
      completion), the following code path is executed:
      migration_completion() -> vm_stop_force_state() -> vm_stop() ->
      do_vm_stop().
      
      do_vm_stop() first calls pause_all_vcpus(), which pauses all guest
      vCPUs, and then calls vm_state_notify().
      In the case of a vhost-scsi device, this leads to the following code
      path being executed:
      vm_state_notify() -> virtio_vmstate_change() ->
      virtio_set_status() -> vhost_scsi_set_status() -> vhost_scsi_stop().
      vhost_scsi_stop() then calls vhost_scsi_clear_endpoint() and
      vhost_scsi_common_stop().
      
      vhost_scsi_clear_endpoint() sends the VHOST_SCSI_CLEAR_ENDPOINT ioctl
      to the vhost backend, which reaches the kernel's
      vhost_scsi_clear_endpoint(); this processes all pending I/O requests
      and waits for them to complete (vhost_scsi_flush()). This meets
      requirement (3).
      
      vhost_scsi_common_stop() then stops the vhost backend.
      As part of this stop, the dirty bitmap is synced and the vhost backend
      state is synced with the virtio-scsi device state. Since guest vCPUs
      are already paused at this point, this meets requirement (2).
      
      We are then left with requirement (4), which is specific to the
      target SCSI device and therefore cannot be handled by QEMU; this is
      the main reason vhost-scsi adds a migration blocker.
      However, since it can be handled either by an external orchestrator or
      by using shared storage (e.g. iSCSI), there is no reason to prevent
      the orchestrator from explicitly declaring that it wishes to enable
      migration even when the VM has a vhost-scsi device.
      
      Considering all of the above, this commit allows the orchestrator to
      explicitly specify that it is responsible for taking care of
      requirement (4), in which case vhost-scsi does not add a migration
      blocker.
      Reviewed-by: Nir Weiner <nir.weiner@oracle.com>
      Reviewed-by: Bijan Mottahedeh <bijan.mottahedeh@oracle.com>
      Signed-off-by: Liran Alon <liran.alon@oracle.com>
      Message-Id: <20190416125912.44001-4-liran.alon@oracle.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      b3e89c94
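      A hedged sketch of how the opt-in can look: a "migratable" property
      that gates the blocker at realize time. Field and property names here
      follow the message's intent and are not verified against the tree:

        DEFINE_PROP_BOOL("migratable", VHostSCSICommon, migratable, false),

        /* in realize: only block migration when the orchestrator has not
         * taken responsibility for requirement (4) */
        if (!vsc->migratable) {
            error_setg(&vsc->migration_blocker,
                       "vhost-scsi does not support migration of the "
                       "target SCSI device state");
            migrate_add_blocker(vsc->migration_blocker, errp);
        }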
    • vhost-scsi: Add VMState descriptor · 4ea57425
      Nir Weiner committed
      In preparation for enabling migration of the vhost-scsi device,
      define its VMState. Note that we keep the convention of verifying in
      the pre_save() method that the vhost backend is stopped before
      attempting to save the device state, similar to how it is done for
      vhost-vsock.
      Reviewed-by: Bijan Mottahedeh <bijan.mottahedeh@oracle.com>
      Reviewed-by: Liran Alon <liran.alon@oracle.com>
      Signed-off-by: Nir Weiner <nir.weiner@oracle.com>
      Message-Id: <20190416125912.44001-3-liran.alon@oracle.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      4ea57425
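      A hedged sketch of such a descriptor, following the vhost-vsock
      pattern the message cites; the names are assumptions:

        static const VMStateDescription vmstate_virtio_vhost_scsi = {
            .name = "virtio-scsi",
            .version_id = 1,
            .minimum_version_id = 1,
            .pre_save = vhost_scsi_pre_save, /* asserts the vhost backend
                                              * is already stopped */
            .fields = (VMStateField[]) {
                VMSTATE_VIRTIO_DEVICE,
                VMSTATE_END_OF_LIST()
            },
        };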
    • vhost-scsi: The vhost backend should be stopped when the VM is not running · c6d369fd
      Nir Weiner committed
      vhost-scsi doesn't take into account whether the VM is running when
      deciding whether to start or stop the vhost backend. This can leave
      the vhost backend active after the VM's RunState suddenly changes to
      stopped.
      
      An example of when this issue is encountered is when the
      live-migration pre-copy phase completes. In that case, the VM state
      changes to stopped (while the vhost backend is still active), which
      results in
      virtio_vmstate_change() -> virtio_set_status() -> vhost_scsi_set_status()
      being executed, but vhost_scsi_set_status() just returns without
      stopping the vhost backend.
      
      To handle this, change the code so that vhost processing is stopped
      when the VM is not running, similar to how it is done in the
      vhost-vsock device in vhost_vsock_set_status().
      
      Fixes: 5e9be92d ("vhost-scsi: new device supporting the tcm_vhost Linux kernel module")
      Reviewed-by: Bijan Mottahedeh <bijan.mottahedeh@oracle.com>
      Reviewed-by: Liran Alon <liran.alon@oracle.com>
      Signed-off-by: Nir Weiner <nir.weiner@oracle.com>
      Message-Id: <20190416125912.44001-2-liran.alon@oracle.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      c6d369fd
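      A sketch of the start/stop decision after the change, mirroring
      vhost_vsock_set_status(); the field paths and helper names are
      assumptions:

        static void vhost_scsi_set_status(VirtIODevice *vdev, uint8_t status)
        {
            VHostSCSICommon *vsc = VHOST_SCSI_COMMON(vdev);
            /* only keep the backend running while the guest driver is
             * ready AND the VM itself is running */
            bool start = (status & VIRTIO_CONFIG_S_DRIVER_OK) &&
                         vdev->vm_running;

            if (vsc->dev.started == start) {
                return;
            }
            if (start) {
                vhost_scsi_start(VHOST_SCSI(vdev));
            } else {
                vhost_scsi_stop(VHOST_SCSI(vdev));
            }
        }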
  7. 30 May 2019 (8 commits)