1. 02 Aug 2016, 1 commit
  2. 11 Mar 2016, 3 commits
  3. 02 Mar 2016, 3 commits
    • vhost: rename vhost_init_used() · 80f7d030
      Greg Kurz committed
      Looking at how callers use this, maybe we should just rename init_used
      to vhost_vq_init_access. The _used suffix was a hint that we
      access the vq used ring. But maybe what callers care about is
      that it must be called after access_ok.
      
      Also, this function manipulates the vq->is_le field which isn't related
      to the vq used ring.
      
      This patch simply renames vhost_init_used() to vhost_vq_init_access() as
      suggested by Michael.
      
      No behaviour change.
      Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
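
      A minimal sketch of what the rename amounts to. The call site is
      illustrative only: vhost_vq_setup() and vq_access_ok() are made-up
      names standing in for the access_ok-style validation mentioned above.

        /* Before: the name suggests it is only about the used ring. */
        int vhost_init_used(struct vhost_virtqueue *vq);

        /* After: the name says "call this once vq access has been validated". */
        int vhost_vq_init_access(struct vhost_virtqueue *vq);

        /* Illustrative caller: validate first, then initialize. */
        static int vhost_vq_setup(struct vhost_virtqueue *vq)
        {
                if (!vq_access_ok(vq))          /* hypothetical check */
                        return -EFAULT;
                return vhost_vq_init_access(vq);
        }
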
    • vhost: rename cross-endian helpers · c5072037
      Greg Kurz committed
      The default use case for vhost is when the host and the vring have the
      same endianness (default native endianness). But there are cases where
      they differ and vhost should byteswap when accessing the vring.
      
      The first case is when the host is big endian and the vring belongs to
      a virtio 1.0 device, which is always little endian.
      
      This is covered by the vq->is_le field. This field is initialized when
      userspace calls the VHOST_SET_FEATURES ioctl. It is reset when the device
      stops.
      
      We already have a vhost_init_is_le() helper, but the reset operation is
      open-coded as follows:
      
      	vq->is_le = virtio_legacy_is_little_endian();
      
      It isn't clear that we are resetting vq->is_le here.
      
      This patch moves the code to a helper with a more explicit name.
      
      The other case where we may have to byteswap is when the architecture can
      switch endianness at runtime (bi-endian). If endianness differs in the host
      and in the guest, then legacy devices need to be used in cross-endian mode.
      
      This mode is available with CONFIG_VHOST_CROSS_ENDIAN_LEGACY=y, which
      introduces a vq->user_be field. Userspace may enable cross-endian mode
      by calling the SET_VRING_ENDIAN ioctl before the device is started. The
      cross-endian mode is disabled when the device is stopped.
      
      The current names of the helpers that manipulate vq->user_be are unclear.
      
      This patch renames those helpers to clearly show that this is cross-endian
      stuff and with explicit enable/disable semantics.
      
      No behaviour change.
      Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
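
      The open-coded reset quoted above would move into a small helper, along
      these lines; the helper names below are illustrative and may not match
      the ones the patch actually picked.

        /* Explicitly named reset of the cached endianness. */
        static void vhost_reset_is_le(struct vhost_virtqueue *vq)
        {
                vq->is_le = virtio_legacy_is_little_endian();
        }

        /* Cross-endian helpers with explicit enable/disable semantics
         * for the vq->user_be override (legacy devices only). */
        static void vhost_enable_cross_endian_big(struct vhost_virtqueue *vq)
        {
                vq->user_be = true;
        }

        static void vhost_enable_cross_endian_little(struct vhost_virtqueue *vq)
        {
                vq->user_be = false;
        }
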
    • vhost: fix error path in vhost_init_used() · e1f33be9
      Greg Kurz committed
      We don't want side effects: if something fails, we roll back vq->is_le
      to its previous value.
      Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
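
      The shape of the fix, as a sketch; vhost_update_used_flags() is assumed
      to be the step that can fail, and the surrounding details are
      simplified.

        static int vhost_init_used(struct vhost_virtqueue *vq)
        {
                bool is_le = vq->is_le;         /* remember the old value */
                int r;

                if (!vq->private_data)
                        return 0;

                vhost_init_is_le(vq);           /* may flip vq->is_le */

                r = vhost_update_used_flags(vq);
                if (r)
                        vq->is_le = is_le;      /* roll back: no side effects */
                return r;
        }
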
  4. 07 Dec 2015, 2 commits
  5. 27 Jul 2015, 2 commits
  6. 14 Jul 2015, 2 commits
    • vhost: add max_mem_regions module parameter · c9ce42f7
      Igor Mammedov committed
      It became possible to use a larger number of memory slots, which memory
      hotplug uses for registering hotplugged memory. However, QEMU crashes
      when it is used with more than ~60 pc-dimm devices and vhost-net
      enabled, since the host kernel's vhost-net module refuses to accept
      more than 64 memory regions.

      Allow the limit to be tweaked via a max_mem_regions module parameter,
      with the default value set to 64 slots.
      Signed-off-by: Igor Mammedov <imammedo@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
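
      Exposing such a limit takes only a few lines; a sketch, in which the
      parameter type, permissions and the -E2BIG check are assumptions rather
      than details taken from the patch.

        #include <linux/module.h>
        #include <linux/moduleparam.h>

        static ushort max_mem_regions = 64;
        module_param(max_mem_regions, ushort, 0444);    /* read-only in sysfs */
        MODULE_PARM_DESC(max_mem_regions,
                "Maximum number of memory regions in memory map (default: 64)");

        /* The set-memory ioctl path then compares against the parameter
         * instead of a compile-time constant, e.g.: */
        if (mem.nregions > max_mem_regions)
                return -E2BIG;

      Loading the module with, for example, max_mem_regions=128 would then
      raise the ceiling for hotplug-heavy guests.
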
    • vhost: extend memory regions allocation to vmalloc · 4de7255f
      Igor Mammedov committed
      With a large number of memory regions we could end up with high-order
      allocations, and kmalloc could fail if the host is under memory
      pressure. Since the memory regions array is used on the hot path, try
      hard to allocate it with kmalloc first, and resort to vmalloc only if
      that fails. This is still better than failing vhost_set_memory() and
      crashing the guest when new memory is hotplugged into it.

      I'll still look at a QEMU-side solution that reduces the number of
      memory regions it feeds to vhost, to make things even better, but it
      doesn't hurt for the kernel to behave smarter and not crash older
      QEMUs that may use a large number of memory regions.
      Signed-off-by: Igor Mammedov <imammedo@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
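
      The kmalloc-then-vmalloc fallback is a common kernel pattern; a minimal
      sketch, with an illustrative helper name. kvfree() releases either kind
      of allocation, and current kernels provide kvzalloc()/kvfree() directly,
      so a private helper like this is no longer needed.

        #include <linux/slab.h>
        #include <linux/vmalloc.h>

        static void *vhost_kvzalloc(unsigned long size)
        {
                /* Fast path: physically contiguous allocation; stay quiet
                 * on failure because there is a fallback. */
                void *n = kzalloc(size, GFP_KERNEL | __GFP_NOWARN);

                if (!n)
                        n = vzalloc(size);      /* virtually contiguous fallback */
                return n;
        }
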
  7. 01 Jul 2015, 1 commit
  8. 01 Jun 2015, 1 commit
    • vhost: cross-endian support for legacy devices · 2751c988
      Greg Kurz committed
      This patch brings cross-endian support to vhost when used to implement
      legacy virtio devices. Since it is a relatively rare situation, the
      feature availability is controlled by a kernel config option (not set
      by default).
      
      The vq->is_le boolean field is added to cache the endianness to be
      used for ring accesses. It defaults to native endian, as expected
      by legacy virtio devices. When the ring gets active, we force little
      endian if the device is modern. When the ring is deactivated, we
      revert to the native endian default.
      
      If cross-endian was compiled in, a vq->user_be boolean field is added
      so that userspace may request a specific endianness. This field is
      used to override the default when activating the ring of a legacy
      device. It has no effect on modern devices.
      Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
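
      How the two fields might combine when a ring is activated and
      deactivated, as a sketch; only vq->is_le, vq->user_be and the config
      option come from the message, the function names are made up.

        /* Activation: modern (VIRTIO 1.0) devices are always little endian;
         * legacy devices honour the userspace override when cross-endian
         * support is built in, and native endianness otherwise. */
        static void vhost_vq_activate_endian(struct vhost_virtqueue *vq)
        {
                if (vhost_has_feature(vq, VIRTIO_F_VERSION_1))
                        vq->is_le = true;
        #ifdef CONFIG_VHOST_CROSS_ENDIAN_LEGACY
                else
                        vq->is_le = !vq->user_be;       /* user_be: ring is big endian */
        #endif
        }

        /* Deactivation: back to the native-endian default. */
        static void vhost_vq_deactivate_endian(struct vhost_virtqueue *vq)
        {
                vq->is_le = virtio_legacy_is_little_endian();
        }
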
  9. 04 Feb 2015, 1 commit
  10. 29 Dec 2014, 1 commit
    • vhost: relax used address alignment · 5d9a07b0
      Michael S. Tsirkin committed
      virtio 1.0 only requires the used address to be 4-byte aligned, while
      vhost required 8 bytes (the size of vring_used_elem). Fix up vhost to
      match the spec.

      Additionally, while vhost correctly requires 8-byte alignment for the
      log, that is unconnected to the used ring: it is a consequence of the
      log having u64 entries. Tweak the code to make that clearer.
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
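
      A sketch of what the relaxed check could look like in the
      SET_VRING_ADDR path; the field names follow the vhost uapi, but the
      exact expressions in the patch may differ.

        /* Used ring: virtio 1.0 only guarantees 4-byte alignment. */
        if (a.used_user_addr & 0x3)
                return -EINVAL;

        /* Log base: entries are u64, so keep 8-byte alignment, expressed in
         * terms of the log entry size rather than the used ring. */
        if (a.log_guest_addr & (sizeof(u64) - 1))
                return -EINVAL;
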
  11. 09 Dec 2014, 2 commits
  12. 09 Jun 2014, 3 commits
  13. 07 Dec 2013, 1 commit
  14. 17 Sep 2013, 1 commit
    • vhost: wake up worker outside spin_lock · ac9fde24
      Qin Chuanyu committed
      The wake_up_process() call is enclosed by the spin_lock/unlock pair in
      vhost_work_queue(), but it can be done outside the spin_lock. I tested
      this with kernel 3.0.27 and a suse11-sp2 guest using iperf; the numbers
      are below.

                       original              |        modified
      thread_num  tp(Gbps)   vhost(%)        |  tp(Gbps)   vhost(%)
      1           9.59       28.82           |  9.59       27.49
      8           9.61       32.92           |  9.62       26.77
      64          9.58       46.48           |  9.55       38.99
      256         9.6        63.7            |  9.6        52.59
      Signed-off-by: Chuanyu Qin <qinchuanyu@huawei.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
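
      A sketch of the restructuring: the list manipulation stays under the
      lock, the wakeup moves after the unlock. Field names such as work_lock,
      work_list and worker follow the usual vhost layout but are assumptions
      here.

        void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
        {
                unsigned long flags;
                bool queued = false;

                spin_lock_irqsave(&dev->work_lock, flags);
                if (list_empty(&work->node)) {
                        list_add_tail(&work->node, &dev->work_list);
                        queued = true;
                }
                spin_unlock_irqrestore(&dev->work_lock, flags);

                /* The wakeup itself no longer needs the lock held. */
                if (queued)
                        wake_up_process(dev->worker);
        }
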
  15. 04 Sep 2013, 1 commit
  16. 21 Aug 2013, 1 commit
  17. 07 Jul 2013, 2 commits
  18. 11 Jun 2013, 1 commit
  19. 06 May 2013, 2 commits
  20. 01 May 2013, 4 commits
  21. 12 Apr 2013, 1 commit
    • vhost_net: remove tx polling state · 70181d51
      Jason Wang committed
      After commit 2b8b328b ("vhost_net: handle polling errors when setting
      backend"), we in fact track the polling state through poll->wqh, so
      there is no need to duplicate that work with an extra
      vhost_net_polling_state. This patch removes it and makes the code
      simpler.

      The patch also removes all the tx starting/stopping code in the tx
      path, as suggested by Michael.

      Netperf shows almost the same results in the stream test, but gets
      improvements on TCP_RR tests (both zerocopy and copy), especially
      under low load.

      Tested between a multiqueue KVM guest and an external host with two
      directly connected 82599s.
      
      zerocopy disabled:

      sessions | transaction rate (before/after/+improvement) | normalized (before/after/+improvement)
      1        | 9510.24/11727.29/+23.3%                      | 693.54/887.68/+28.0%
      25       | 192931.50/241729.87/+25.3%                   | 2376.80/2771.70/+16.6%
      50       | 277634.64/291905.76/+5%                      | 3118.36/3230.11/+3.6%

      zerocopy enabled:

      sessions | transaction rate (before/after/+improvement) | normalized (before/after/+improvement)
      1        | 7318.33/11929.76/+63.0%                      | 521.86/843.30/+61.6%
      25       | 167264.88/242422.15/+44.9%                   | 2181.60/2788.16/+27.8%
      50       | 272181.02/294347.04/+8.1%                    | 3071.56/3257.85/+6.1%
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
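
      A short illustration of the idea that poll->wqh already encodes whether
      polling is armed, so a separate tx polling state is redundant;
      vhost_poll_started() is a made-up name used only for this sketch.

        /* poll->wqh is non-NULL only while the poll is registered on a
         * waitqueue, so it can double as the "polling started" flag. */
        static bool vhost_poll_started(const struct vhost_poll *poll)
        {
                return poll->wqh != NULL;
        }

        /* ...which lets the tx path drop its separate start/stop
         * bookkeeping and simply restart polling when needed: */
        if (!vhost_poll_started(poll))
                r = vhost_poll_start(poll, sock->file);
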
  22. 30 Jan 2013, 1 commit
    • vhost_net: handle polling errors when setting backend · 2b8b328b
      Jason Wang committed
      Currently, polling errors are ignored, which can lead to the following
      issues:

      - vhost removes itself from the waitqueue unconditionally when stopping
        the poll; this may crash the kernel, since a previously failed start
        attempt may never have added it to the waitqueue
      - userspace may think the backend was successfully set even though
        polling failed.

      Solve this by:

      - checking poll->wqh before trying to remove from the waitqueue
      - reporting polling errors from vhost_poll_start() and tx_poll_start();
        the return value is checked and propagated when userspace sets the
        backend

      After this fix there can still be a polling failure after the backend
      is set; that will be addressed by the next patch.
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
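
      A sketch of the two fixes described above, using the old f_op->poll()
      interface of that era; signatures and the exact error plumbing are
      simplified.

        /* Only detach from the waitqueue if we were actually attached. */
        void vhost_poll_stop(struct vhost_poll *poll)
        {
                if (poll->wqh) {
                        remove_wait_queue(poll->wqh, &poll->wait);
                        poll->wqh = NULL;
                }
        }

        /* Report failures so the SET_BACKEND path can return them. */
        int vhost_poll_start(struct vhost_poll *poll, struct file *file)
        {
                unsigned long mask;

                mask = file->f_op->poll(file, &poll->table);
                if (mask)
                        vhost_poll_wakeup(&poll->wait, 0, 0, (void *)mask);
                if (mask & POLLERR) {
                        vhost_poll_stop(poll);
                        return -EINVAL;
                }
                return 0;
        }
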
  23. 06 Dec 2012, 1 commit
    • vhost: avoid backend flush on vring ops · 935cdee7
      Michael S. Tsirkin committed
      vring changes already do a flush internally where appropriate, so we do
      not need a second flush.
      
      It's currently not very expensive, but a follow-up patch makes the
      flush more heavyweight, so remove the extra flush here to avoid
      regressing performance if call or kick fds are changed on the data
      path.
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  24. 29 Nov 2012, 1 commit
  25. 03 Nov 2012, 1 commit