1. 20 December 2018, 1 commit
  2. 12 December 2018, 2 commits
  3. 27 November 2018, 1 commit
  4. 19 October 2018, 2 commits
    • cpus hw target: Use warn_report() & friends to report warnings · 0765691e
      Markus Armbruster authored
      Calling error_report() in a function that takes an Error ** argument
      is suspicious.  Convert a few that are actually warnings to
      warn_report().
      
      While there, split a warning consisting of multiple sentences to
      conform to conventions spelled out in warn_report()'s contract.
      
      Cc: Alex Bennée <alex.bennee@linaro.org>
      Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Cc: Alex Williamson <alex.williamson@redhat.com>
      Cc: Fam Zheng <famz@redhat.com>
      Cc: Wei Huang <wei@redhat.com>
      Cc: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Acked-by: David Gibson <david@gibson.dropbear.id.au>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Message-Id: <20181017082702.5581-5-armbru@redhat.com>
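      As context for the conversion, here is a minimal sketch of the pattern
      being fixed (a hypothetical foo device with a hypothetical
      foo_fast_path_available() helper, not code from the patch itself):

          #include "qemu/osdep.h"
          #include "qemu/error-report.h"   /* error_report(), warn_report() */
          #include "hw/qdev-core.h"        /* DeviceState */

          static bool foo_fast_path_available(DeviceState *dev);  /* hypothetical */

          /* Before the patch this called error_report(); in a function
           * taking Error **errp that is suspicious, since real errors
           * should go through errp.  A condition that is merely a warning
           * uses warn_report(), one sentence per call per its contract. */
          static void foo_realize(DeviceState *dev, Error **errp)
          {
              if (!foo_fast_path_available(dev)) {
                  warn_report("foo: falling back to the slow path");
              }
          }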
    • clean up callback when del virtqueue · 7da2d99f
      liujunjie authored
      Previously, we did not clear callbacks such as handle_output when
      deleting a virtqueue, which could result in a segmentation fault.
      The scenario is as follows:
      1. Start a VM with a multiqueue vhost-net device.
      2. Write VIRTIO_PCI_GUEST_FEATURES in the PCI configuration space to
      trigger disabling of multiqueue in the VM, which deletes the virtqueue.
      In this step, the tx_bh is deleted, but the callback
      virtio_net_handle_tx_bh still exists.
      3. Finally, write VIRTIO_PCI_QUEUE_NOTIFY in the PCI configuration
      space to notify the deleted virtqueue. virtio_net_handle_tx_bh is then
      called and QEMU crashes.
      
      Although the sequence described above is uncommon, it is worth
      guarding against.
      
      CC: qemu-stable@nongnu.org
      Signed-off-by: liujunjie <liujunjie23@huawei.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
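      The shape of the fix, sketched here rather than quoted from the patch:
      deleting a virtqueue must also drop its callbacks, so that a later
      VIRTIO_PCI_QUEUE_NOTIFY on the dead queue finds nothing to call.

          /* hw/virtio/virtio.c (sketch) */
          void virtio_del_queue(VirtIODevice *vdev, int n)
          {
              if (n < 0 || n >= VIRTIO_QUEUE_MAX) {
                  abort();
              }

              vdev->vq[n].vring.num = 0;
              vdev->vq[n].vring.num_default = 0;
              /* New: clear the handlers, e.g. virtio_net_handle_tx_bh,
               * so notifying the deleted queue becomes a no-op. */
              vdev->vq[n].handle_output = NULL;
              vdev->vq[n].handle_aio_output = NULL;
          }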
  5. 17 October 2018, 1 commit
  6. 12 October 2018, 2 commits
  7. 03 October 2018, 1 commit
    • virtio: Return true from virtio_queue_empty if broken · 2d1df859
      Fam Zheng authored
      Both virtio-blk and virtio-scsi use virtio_queue_empty() as the
      loop condition in VQ handlers (virtio_blk_handle_vq,
      virtio_scsi_handle_cmd_vq). When a device is marked broken in
      virtqueue_pop, for example if a vIOMMU address translation failed, we
      want to break out of the loop.
      
      This fixes a hanging problem when booting a CentOS 3.10.0-862.el7.x86_64
      kernel with ATS enabled:
      
        $ qemu-system-x86_64 \
          ... \
          -device intel-iommu,intremap=on,caching-mode=on,eim=on,device-iotlb=on \
          -device virtio-scsi-pci,iommu_platform=on,ats=on,id=scsi0,bus=pci.4,addr=0x0
      
      The infinite loop happens immediately when the kernel boots and
      initializes the device; virtio_scsi_data_plane_handle_cmd never
      returns:
      
          > ...
          > #13 0x00005586602b7793 in virtio_scsi_handle_cmd_vq
          > #14 0x00005586602b8d66 in virtio_scsi_data_plane_handle_cmd
          > #15 0x00005586602ddab7 in virtio_queue_notify_aio_vq
          > #16 0x00005586602dfc9f in virtio_queue_host_notifier_aio_poll
          > #17 0x00005586607885da in run_poll_handlers_once
          > #18 0x000055866078880e in try_poll_mode
          > #19 0x00005586607888eb in aio_poll
          > #20 0x0000558660784561 in aio_wait_bh_oneshot
          > #21 0x00005586602b9582 in virtio_scsi_dataplane_stop
          > #22 0x00005586605a7110 in virtio_bus_stop_ioeventfd
          > #23 0x00005586605a9426 in virtio_pci_stop_ioeventfd
          > #24 0x00005586605ab808 in virtio_pci_common_write
          > #25 0x0000558660242396 in memory_region_write_accessor
          > #26 0x00005586602425ab in access_with_adjusted_size
          > #27 0x0000558660245281 in memory_region_dispatch_write
          > #28 0x00005586601e008e in flatview_write_continue
          > #29 0x00005586601e01d8 in flatview_write
          > #30 0x00005586601e04de in address_space_write
          > #31 0x00005586601e052f in address_space_rw
          > #32 0x00005586602607f2 in kvm_cpu_exec
          > #33 0x0000558660227148 in qemu_kvm_cpu_thread_fn
          > #34 0x000055866078bde7 in qemu_thread_start
          > #35 0x00007f5784906594 in start_thread
          > #36 0x00007f5784639e6f in clone
      
      With this patch, virtio_queue_empty will now return 1 as soon as the
      vdev is marked as broken, after a "virtio: zero sized buffers are not
      allowed" error.
      
      To be consistent, update virtio_queue_empty_rcu as well.
      Signed-off-by: Fam Zheng <famz@redhat.com>
      Message-Id: <20180910145616.8598-2-famz@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
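      The essence of the change is a short-circuit at the top of
      virtio_queue_empty() (and likewise virtio_queue_empty_rcu()); a
      sketch, with the normal availability check reduced to its core:

          /* hw/virtio/virtio.c (sketch) */
          int virtio_queue_empty(VirtQueue *vq)
          {
              /* A broken device must look permanently empty so that
               * handler loops such as virtio_blk_handle_vq() and
               * virtio_scsi_handle_cmd_vq() terminate. */
              if (unlikely(vq->vdev->broken)) {
                  return 1;
              }
              if (unlikely(!vq->vring.avail)) {
                  return 1;
              }
              return vring_avail_idx(vq) == vq->last_avail_idx;
          }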
  8. 08 September 2018, 2 commits
  9. 29 August 2018, 1 commit
  10. 17 August 2018, 1 commit
  11. 03 August 2018, 1 commit
  12. 28 June 2018, 1 commit
  13. 15 June 2018, 1 commit
    • iommu: Add IOMMU index argument to notifier APIs · cb1efcf4
      Peter Maydell authored
      Add support for multiple IOMMU indexes to the IOMMU notifier APIs.
      When initializing a notifier with iommu_notifier_init(), the caller
      must pass the IOMMU index that it is interested in. When a change
      happens, the IOMMU implementation must pass
      memory_region_notify_iommu() the IOMMU index that has changed and
      that notifiers must be called for.
      
      IOMMUs which support only a single index don't need to change.
      Callers which only really support working with IOMMUs with a single
      index can use the result of passing MEMTXATTRS_UNSPECIFIED to
      memory_region_iommu_attrs_to_index().
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Message-id: 20180604152941.20374-3-peter.maydell@linaro.org
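      A sketch of a single-index caller under the new API (the
      my_unmap_notify() and register_for_one_index() names are hypothetical;
      the signatures follow the description above):

          static void my_unmap_notify(IOMMUNotifier *n, IOMMUTLBEntry *entry)
          {
              /* React to a mapping change on the index registered below. */
          }

          static void register_for_one_index(IOMMUMemoryRegion *iommu_mr,
                                             IOMMUNotifier *n)
          {
              /* A caller handling a single index asks which index
               * corresponds to "no particular transaction attributes". */
              int idx = memory_region_iommu_attrs_to_index(iommu_mr,
                                                           MEMTXATTRS_UNSPECIFIED);

              iommu_notifier_init(n, my_unmap_notify, IOMMU_NOTIFIER_UNMAP,
                                  0, HWADDR_MAX, idx);
              memory_region_register_iommu_notifier(MEMORY_REGION(iommu_mr), n);
          }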
  14. 01 June 2018, 3 commits
  15. 31 May 2018, 1 commit
  16. 25 May 2018, 4 commits
  17. 23 May 2018, 6 commits
  18. 17 April 2018, 1 commit
  19. 09 April 2018, 4 commits
  20. 20 March 2018, 4 commits