  1. 02 Aug 2016 (3 commits)
    • vhost: new device IOTLB API · 6b1e6cc7
      Committed by Jason Wang
      This patch tries to implement a device IOTLB for vhost. This could be
      used with a userspace (QEMU) implementation of DMA remapping to
      emulate an IOMMU for the guest.
      
      The idea is simple: cache the translations in a software device IOTLB
      (implemented as an interval tree) in vhost and use the vhost_net file
      descriptor for reporting IOTLB misses and for IOTLB
      updates/invalidations. When vhost hits an IOTLB miss, the fault
      address, size and access type can be read from the file. After
      userspace finishes the translation, it writes the translated address
      to the vhost_net file to update the device IOTLB.
      
      When the device IOTLB is enabled by setting VIRTIO_F_IOMMU_PLATFORM,
      all vq addresses set by ioctl are treated as IOVAs instead of virtual
      addresses, and accesses can only go through the IOTLB instead of
      direct userspace memory access. Before each round of vq processing,
      all vq metadata is prefetched into the device IOTLB to make sure no
      translation fault happens during vq processing.
      
      In most cases, virtqueues are contiguous even in virtual address
      space, so the IOTLB translation for the virtqueue itself may make
      things a little slower. A fast-path cache might be added on top of
      this patch.
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      [mst: use virtio feature bit: VHOST_F_DEVICE_IOTLB -> VIRTIO_F_IOMMU_PLATFORM]
      [mst: fix build warnings]
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      [weiyj.lk: missing unlock on error]
      Signed-off-by: Wei Yongjun <weiyj.lk@gmail.com>
      6b1e6cc7
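      A hedged sketch of the userspace side of this miss/update exchange is
      below. It assumes the struct vhost_msg / struct vhost_iotlb_msg layout
      and the VHOST_IOTLB_* constants this patch adds to the uapi headers;
      the translate_iova() helper is hypothetical and stands in for the
      vIOMMU lookup that QEMU would perform.

      /* Minimal sketch, not the in-tree code. */
      #include <linux/vhost.h>
      #include <stdint.h>
      #include <unistd.h>

      /* Hypothetical helper: resolve a guest IOVA to a host virtual address. */
      extern uint64_t translate_iova(uint64_t iova, uint64_t size, uint8_t perm);

      static int handle_one_iotlb_miss(int vhost_net_fd)
      {
              struct vhost_msg msg;

              /* An IOTLB miss is reported as a message read from the fd. */
              if (read(vhost_net_fd, &msg, sizeof(msg)) != sizeof(msg))
                      return -1;
              if (msg.type != VHOST_IOTLB_MSG || msg.iotlb.type != VHOST_IOTLB_MISS)
                      return -1;

              /* Translate the faulting range ... */
              msg.iotlb.uaddr = translate_iova(msg.iotlb.iova, msg.iotlb.size,
                                               msg.iotlb.perm);

              /* ... and write the mapping back to update the device IOTLB. */
              msg.iotlb.type = VHOST_IOTLB_UPDATE;
              if (write(vhost_net_fd, &msg, sizeof(msg)) != sizeof(msg))
                      return -1;
              return 0;
      }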
    • vhost: convert pre sorted vhost memory array to interval tree · a9709d68
      Committed by Jason Wang
      The current pre-sorted memory region array has some limitations for
      the future device IOTLB conversion:
      
      1) It needs extra work for adding and removing a single region, and
         this is expected to be slow because of sorting or memory
         re-allocation.
      2) It needs extra work for removing a large range which may intersect
         several regions of different sizes.
      3) It needs tricks to implement a replacement policy such as LRU.
      
      To overcome the above shortcomings, this patch converts the array to
      an interval tree, which can easily address the above issues with
      almost no extra work.
      
      The patch could be used to:
      
      - Extend the current API and let userspace send only diffs of the
        memory table.
      - Simplify the device IOTLB implementation.
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      a9709d68
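      Roughly, the conversion builds on the generic interval-tree template
      in <linux/interval_tree_generic.h>. The sketch below approximates the
      node and lookup the patch introduces (field names are from memory and
      may differ slightly; the tree root was a plain struct rb_root in
      kernels of this era, struct rb_root_cached in later ones):

      #include <linux/interval_tree_generic.h>
      #include <linux/rbtree.h>
      #include <linux/types.h>

      struct vhost_umem_node {
              struct rb_node rb;
              __u64 start;            /* first guest address covered            */
              __u64 last;             /* last guest address covered (inclusive) */
              __u64 size;
              __u64 userspace_addr;   /* corresponding host virtual address     */
              __u64 __subtree_last;   /* maintained by the interval tree        */
      };

      #define START(node) ((node)->start)
      #define LAST(node)  ((node)->last)

      INTERVAL_TREE_DEFINE(struct vhost_umem_node, rb, __u64, __subtree_last,
                           START, LAST, static inline, vhost_umem_interval_tree);

      /* Translate one guest range: an O(log n) stabbing query replaces the
       * binary search over a pre-sorted, re-allocated array. */
      static struct vhost_umem_node *
      vhost_umem_lookup(struct rb_root *root, __u64 addr, __u64 len)
      {
              return vhost_umem_interval_tree_iter_first(root, addr,
                                                         addr + len - 1);
      }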
    • vhost: lockless enqueuing · 04b96e55
      Committed by Jason Wang
      We currently use a spinlock to synchronize the work list, which may
      cause unnecessary contention. This patch switches to an llist to
      remove that contention. Pktgen tests show about a 5% improvement:
      
      Before:
      ~1300000 pps
      After:
      ~1370000 pps
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      04b96e55
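      A hedged sketch of the lockless pattern this switches to, built on the
      kernel's <linux/llist.h> primitives; the structures are simplified
      stand-ins for the vhost work list, not the exact vhost code:

      #include <linux/llist.h>
      #include <linux/sched.h>

      struct work_item {
              struct llist_node node;
              void (*fn)(struct work_item *work);
      };

      /* Producer: enqueue with a single atomic operation, no spinlock. */
      static void queue_work_lockless(struct llist_head *list,
                                      struct work_item *work,
                                      struct task_struct *worker)
      {
              llist_add(&work->node, list);
              wake_up_process(worker);
      }

      /* Consumer: detach the whole list at once, then restore FIFO order,
       * since llist_del_all() hands back the entries newest-first. */
      static void run_pending_work(struct llist_head *list)
      {
              struct llist_node *head = llist_del_all(list);
              struct work_item *work, *next;

              head = llist_reverse_order(head);
              llist_for_each_entry_safe(work, next, head, node)
                      work->fn(work);
      }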
  2. 11 Mar 2016 (3 commits)
  3. 02 Mar 2016 (1 commit)
    • vhost: rename vhost_init_used() · 80f7d030
      Committed by Greg Kurz
      Looking at how callers use this, maybe we should just rename init_used
      to vhost_vq_init_access. The _used suffix was a hint that we
      access the vq used ring. But maybe what callers care about is
      that it must be called after access_ok.
      
      Also, this function manipulates the vq->is_le field, which isn't
      related to the vq used ring.
      
      This patch simply renames vhost_init_used() to vhost_vq_init_access() as
      suggested by Michael.
      
      No behaviour change.
      Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      80f7d030
  4. 28 Oct 2015 (1 commit)
  5. 16 Sep 2015 (1 commit)
  6. 01 Jun 2015 (3 commits)
  7. 09 Dec 2014 (3 commits)
  8. 09 Jun 2014 (2 commits)
    • vhost: move memory pointer to VQs · 47283bef
      Committed by Michael S. Tsirkin
      Commit 2ae76693b8bcabf370b981cd00c36cd41d33fabc
          ("vhost: replace rcu with mutex")
      replaced the RCU sync for memory accesses with VQ mutex lock/unlock.
      This is correct, since all accesses are under the VQ mutex, but
      incomplete: we still do useless RCU lock/unlock operations, and
      someone might copy this code into some other context where this won't
      be right. This use of RCU is also non-standard and hard to understand.
      
      Let's copy the pointer to each VQ structure; this way the access rules
      become straightforward, and there's no need for RCU anymore.
      Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      47283bef
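      A minimal sketch of the resulting access rules, assuming trimmed-down
      structures (the real vhost_virtqueue has many more fields): the memory
      pointer lives in each VQ and is read and written only under that VQ's
      mutex, so no RCU read-side section is needed.

      #include <linux/mutex.h>
      #include <linux/lockdep.h>

      struct vhost_memory;

      struct vq_sketch {
              struct mutex mutex;
              struct vhost_memory *memory;    /* set only under mutex */
      };

      /* Writer: publish the new memory table per VQ, one mutex at a time. */
      static void vq_set_memory(struct vq_sketch *vq, struct vhost_memory *newmem)
      {
              mutex_lock(&vq->mutex);
              vq->memory = newmem;
              mutex_unlock(&vq->mutex);
      }

      /* Reader: a path that already holds vq->mutex dereferences the pointer
       * directly; no rcu_read_lock()/rcu_dereference() pair is required. */
      static struct vhost_memory *vq_get_memory(struct vq_sketch *vq)
      {
              lockdep_assert_held(&vq->mutex);
              return vq->memory;
      }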
    • vhost: move acked_features to VQs · ea16c514
      Committed by Michael S. Tsirkin
      Refactor the code to make sure features are only accessed under the VQ
      mutex. This makes everything simpler; there is no need for RCU here
      anymore.
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      ea16c514
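      With acked_features stored per VQ, a feature check under the VQ mutex
      becomes a plain bit test. A sketch close to the resulting helper (the
      in-tree version is vhost_has_feature() in drivers/vhost/vhost.h; names
      here are simplified):

      #include <linux/types.h>

      struct vq_features_sketch {
              u64 acked_features;     /* updated only under the VQ mutex */
      };

      static inline bool vq_has_feature(struct vq_features_sketch *vq, int bit)
      {
              return vq->acked_features & (1ULL << bit);
      }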
  9. 07 Dec 2013 (1 commit)
  10. 11 Jul 2013 (1 commit)
  11. 07 Jul 2013 (1 commit)
  12. 11 Jun 2013 (1 commit)
  13. 06 May 2013 (4 commits)
  14. 01 May 2013 (4 commits)
  15. 30 Jan 2013 (1 commit)
    • vhost_net: handle polling errors when setting backend · 2b8b328b
      Committed by Jason Wang
      Currently, polling errors are ignored, which can lead to the following
      issues:
      
      - vhost removes itself unconditionally from the waitqueue when
        stopping the poll; this may crash the kernel, since a previous
        attempt at starting the poll may have failed to add it to the
        waitqueue.
      - userspace may think the backend was set successfully even when the
        polling failed.
      
      Solve this by:
      
      - checking poll->wqh before trying to remove from the waitqueue
      - reporting polling errors from vhost_poll_start() and
        tx_poll_start(); the return value is checked and returned when
        userspace wants to set the backend
      
      After this fix there could still be a polling failure after the
      backend is set; it will be addressed by the next patch.
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2b8b328b
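      A hedged, simplified sketch of the two checks described above (not the
      exact vhost_poll_start()/vhost_poll_stop() code; wait_queue_t is the
      era-appropriate type name, and poll->wqh is assumed to be filled in by
      the queue-proc callback registered with init_poll_funcptr(), which is
      not shown):

      #include <linux/poll.h>
      #include <linux/wait.h>
      #include <linux/fs.h>

      struct poll_sketch {
              poll_table table;
              wait_queue_head_t *wqh;         /* NULL until start succeeds */
              wait_queue_t wait;
      };

      /* Report POLLERR to the caller instead of silently ignoring it, so
       * userspace sees the failure when it tries to set the backend. */
      static int poll_sketch_start(struct poll_sketch *poll, struct file *file)
      {
              unsigned int mask = file->f_op->poll(file, &poll->table);

              if (mask & POLLERR) {
                      if (poll->wqh)
                              remove_wait_queue(poll->wqh, &poll->wait);
                      poll->wqh = NULL;
                      return -EINVAL;
              }
              return 0;
      }

      /* Only detach from the waitqueue if a previous start actually
       * attached us (poll->wqh is non-NULL). */
      static void poll_sketch_stop(struct poll_sketch *poll)
      {
              if (poll->wqh) {
                      remove_wait_queue(poll->wqh, &poll->wait);
                      poll->wqh = NULL;
              }
      }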
  16. 06 Dec 2012 (1 commit)
    • vhost: avoid backend flush on vring ops · 935cdee7
      Committed by Michael S. Tsirkin
      vring changes already do a flush internally where appropriate, so we do
      not need a second flush.
      
      It's currently not very expensive, but a follow-up patch makes flush
      more heavyweight, so remove the extra flush here to avoid regressing
      performance if call or kick fds are changed on the data path.
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      935cdee7
  17. 03 Nov 2012 (4 commits)
  18. 22 Jul 2012 (2 commits)
  19. 14 Apr 2012 (1 commit)
  20. 28 Feb 2012 (1 commit)
  21. 27 Jul 2011 (1 commit)