1. 07 Dec 2013, 1 commit
  2. 17 Sep 2013, 1 commit
    • vhost: wake up worker outside spin_lock · ac9fde24
      Authored by Qin Chuanyu
      The wake_up_process() call is made inside the spin_lock/unlock section of
      vhost_work_queue, but it can be done outside the spin_lock (a sketch of the
      resulting code follows this entry).
      I tested this with kernel 3.0.27 and a suse11-sp2 guest using iperf;
      the numbers are below.
                        original                 modified
      thread_num  tp(Gbps)   vhost(%)  |  tp(Gbps)     vhost(%)
      1           9.59        28.82    |   9.59        27.49
      8           9.61        32.92    |   9.62        26.77
      64          9.58        46.48    |   9.55        38.99
      256         9.6         63.7     |   9.6         52.59
      Signed-off-by: Chuanyu Qin <qinchuanyu@huawei.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
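      For reference, a minimal sketch of the pattern the commit describes, assuming
      the vhost_work_queue() layout of that era (dev->work_lock, dev->work_list,
      dev->worker); the flush/seqcount bookkeeping is elided and field names may
      differ slightly from the exact upstream code:

          void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
          {
                  unsigned long flags;

                  spin_lock_irqsave(&dev->work_lock, flags);
                  if (list_empty(&work->node)) {
                          list_add_tail(&work->node, &dev->work_list);
                          /* Drop the lock before waking the worker: the wakeup does
                           * not need work_lock, so the critical section stays short. */
                          spin_unlock_irqrestore(&dev->work_lock, flags);
                          wake_up_process(dev->worker);
                  } else {
                          spin_unlock_irqrestore(&dev->work_lock, flags);
                  }
          }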
  3. 04 Sep 2013, 1 commit
  4. 21 Aug 2013, 1 commit
  5. 07 Jul 2013, 2 commits
  6. 11 Jun 2013, 1 commit
  7. 06 May 2013, 2 commits
  8. 01 May 2013, 4 commits
  9. 12 Apr 2013, 1 commit
    • vhost_net: remove tx polling state · 70181d51
      Authored by Jason Wang
      After commit 2b8b328b (vhost_net: handle polling errors when setting
      backend), we in fact track the polling state through poll->wqh, so there is
      no need to duplicate that work with an extra vhost_net_polling_state. This
      patch removes it and makes the code simpler (see the sketch after this entry).
      
      This patch also removes all the tx polling start/stop code in the tx path,
      per Michael's suggestion.
      
      Netperf shows almost the same results in the stream test, but gets improvements
      in the TCP_RR tests (both zerocopy and copy), especially in low-load cases.
      
      Tested between a multiqueue kvm guest and an external host with two directly
      connected 82599s.
      
      zerocopy disabled:
      
      sessions | transaction rate (before/after/+improvement) | normalized (before/after/+improvement)
      1        | 9510.24/11727.29/+23.3%                      | 693.54/887.68/+28.0%
      25       | 192931.50/241729.87/+25.3%                   | 2376.80/2771.70/+16.6%
      50       | 277634.64/291905.76/+5%                      | 3118.36/3230.11/+3.6%
      
      zerocopy enabled:
      
      sessions | transaction rate (before/after/+improvement) | normalized (before/after/+improvement)
      1        | 7318.33/11929.76/+63.0%                      | 521.86/843.30/+61.6%
      25       | 167264.88/242422.15/+44.9%                   | 2181.60/2788.16/+27.8%
      50       | 272181.02/294347.04/+8.1%                    | 3071.56/3257.85/+6.1%
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
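      The core observation, sketched below: once vhost_poll_start()/vhost_poll_stop()
      maintain poll->wqh, "is polling active?" is simply a NULL check, so no separate
      vhost_net_polling_state is needed. The helper name vhost_poll_active is made up
      here purely for illustration:

          /* Illustrative only: poll->wqh is set when the poll is added to a
           * waitqueue and cleared when it is stopped, so it already encodes
           * the state the removed enum used to track. */
          static bool vhost_poll_active(const struct vhost_poll *poll)
          {
                  return poll->wqh != NULL;
          }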
  10. 30 Jan 2013, 1 commit
    • vhost_net: handle polling errors when setting backend · 2b8b328b
      Authored by Jason Wang
      Currently, polling errors are ignored, which can lead to the following issues:
      
      - vhost removes itself from the waitqueue unconditionally when stopping the
        poll; this may crash the kernel, since a previous attempt to start polling
        may have failed to add it to the waitqueue
      - userspace may think the backend was set successfully even though polling
        failed.
      
      Solve this by (see the sketch after this entry):
      
      - checking poll->wqh before trying to remove from the waitqueue
      - reporting polling errors from vhost_poll_start() and tx_poll_start(); the
        return value is checked and propagated when userspace sets the backend
      
      After this fix a polling failure can still occur after the backend is set;
      that will be addressed by the next patch.
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
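      A sketch of the two checks described above, based on the commit message rather
      than the exact diff; treating POLLERR as the error indication and returning
      -EINVAL are assumptions made for illustration:

          /* Stop: only detach if a previous start actually attached us. */
          void vhost_poll_stop(struct vhost_poll *poll)
          {
                  if (poll->wqh) {
                          remove_wait_queue(poll->wqh, &poll->wait);
                          poll->wqh = NULL;
                  }
          }

          /* Start: report a polling error instead of silently ignoring it, so the
           * backend-setting ioctl can propagate the failure to userspace. */
          int vhost_poll_start(struct vhost_poll *poll, struct file *file)
          {
                  unsigned long mask;

                  mask = file->f_op->poll(file, &poll->table);
                  if (mask & POLLERR) {
                          vhost_poll_stop(poll);
                          return -EINVAL;
                  }
                  return 0;
          }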
  11. 06 Dec 2012, 1 commit
    • vhost: avoid backend flush on vring ops · 935cdee7
      Authored by Michael S. Tsirkin
      vring changes already do a flush internally where appropriate, so we do
      not need a second flush.
      
      It's currently not very expensive, but a follow-up patch makes flush more
      heavyweight, so remove the extra flush here to avoid regressing
      performance if call or kick fds are changed on the data path.
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
  12. 29 Nov 2012, 1 commit
  13. 03 Nov 2012, 4 commits
  14. 27 Sep 2012, 1 commit
  15. 22 Jul 2012, 1 commit
    • vhost: make vhost work queue visible · 163049ae
      Authored by Stefan Hajnoczi
      The vhost work queue allows processing to be done in vhost worker thread
      context, which uses the owner process mm.  Access to the vring and guest
      memory is typically only possible from vhost worker context so it is
      useful to allow work to be queued directly by users.
      
      Currently vhost_net only uses the poll wrappers, which do not expose the
      work queue functions.  However, for tcm_vhost (vhost_scsi) it will be
      necessary to queue custom work (see the sketch after this entry).
      Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
      Cc: Zhi Yong Wu <wuzhy@cn.ibm.com>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
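      A rough sketch of how a user such as tcm_vhost could queue its own work once
      vhost_work_init()/vhost_work_queue() are visible; the struct, handler, and
      helper names below are hypothetical:

          struct my_cmd_completion {
                  struct vhost_work work;   /* embedded; recovered via container_of() */
                  /* ... driver-specific state ... */
          };

          static void my_cmd_completion_fn(struct vhost_work *work)
          {
                  struct my_cmd_completion *c =
                          container_of(work, struct my_cmd_completion, work);
                  /* Runs in vhost worker thread context, with the owner's mm;
                   * vring and guest memory can safely be touched here. */
                  (void)c;
          }

          static void my_queue_completion(struct vhost_dev *dev,
                                          struct my_cmd_completion *c)
          {
                  vhost_work_init(&c->work, my_cmd_completion_fn);  /* once per work item */
                  vhost_work_queue(dev, &c->work);                  /* from any context */
          }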
  16. 27 Jun 2012, 1 commit
  17. 02 May 2012, 1 commit
  18. 14 Apr 2012, 1 commit
  19. 20 Mar 2012, 1 commit
  20. 28 Feb 2012, 2 commits
    • vhost: fix release path lockdep checks · ea5d4046
      Authored by Michael S. Tsirkin
      We shouldn't hold any locks on the release path. Pass a flag to
      vhost_dev_cleanup() so the lockdep information is used correctly (a sketch
      of the resulting interface follows this entry).
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Tested-by: Sasha Levin <levinsasha928@gmail.com>
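      In interface terms this amounts to the following; a sketch of the idea only,
      with the flag name and call sites recalled from that era's code and best
      treated as illustrative:

          /* Callers tell cleanup whether they already hold the device mutex,
           * so the lockdep annotations inside match the actual locking state. */
          void vhost_dev_cleanup(struct vhost_dev *dev, bool locked);

          /* release path: no locks held */
          vhost_dev_cleanup(&n->dev, false);

          /* reset-owner path: called with dev->mutex held */
          vhost_dev_cleanup(dev, true);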
    • vhost: don't forget to schedule() · d550dda1
      Authored by Nadav Har'El
      This is a tiny, but important, patch to vhost.
      
      Vhost's worker thread only called schedule() when it had no work to do, and
      it wanted to go to sleep. But if there's always work to do, e.g., the guest
      is running a network-intensive program like netperf with small message sizes,
      schedule() was *never* called. This had several negative implications (on
      non-preemptive kernels):
      
       1. CPU time was not properly accounted to the "vhost" process (ps and
          top would wrongly show it using zero CPU time).
      
       2. Sometimes error messages about RCU timeouts would be printed, if the
          core running the vhost thread didn't schedule() for a very long time.
      
       3. Worst of all, a vhost thread would "hog" the core. If several vhost
          threads need to share the same core, typically one would get most of the
          CPU time (and its associated guest most of the performance), while the
          others hardly get any work done.
      
      The trivial solution is to add
      
      	if (need_resched())
      		schedule();
      
      after doing every piece of work. This avoids calling the heavier schedule()
      every time, doing so only when the timer interrupt has decided a reschedule is
      warranted (so need_resched() returns true). A sketch of where this lands in
      the worker loop follows this entry.
      
      Thanks to Abel Gordon for this patch.
      Signed-off-by: Nadav Har'El <nyh@il.ibm.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
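      For context, roughly where the check lands; a simplified sketch of the vhost
      worker loop of that period, with work-list locking, exit handling, and mm
      switching elided, not the verbatim source:

          static int vhost_worker(void *data)
          {
                  struct vhost_dev *dev = data;
                  struct vhost_work *work = NULL;

                  for (;;) {
                          /* ... take the next item off dev->work_list under
                           * work_lock; work stays NULL if the list is empty ... */
                          if (work) {
                                  work->fn(work);
                                  if (need_resched())
                                          schedule();  /* yield if the tick asked for it */
                          } else {
                                  schedule();          /* no work: sleep until woken */
                          }
                  }
                  return 0;
          }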
  21. 19 Jul 2011, 5 commits
  22. 30 May 2011, 1 commit
  23. 07 May 2011, 1 commit
  24. 09 Mar 2011, 2 commits
  25. 10 Jan 2011, 1 commit
  26. 09 Dec 2010, 1 commit