1. 07 Jul, 2013 (1 commit)
  2. 11 Jun, 2013 (3 commits)
    • vhost: fix ubuf_info cleanup · 288cfe78
      Committed by Michael S. Tsirkin
      vhost_net_clear_ubuf_info didn't clear ubuf_info after kfree, which
      could trigger a double free. Fix this and simplify the code to make
      it more robust: make sure ubuf_info is always freed through
      vhost_net_clear_ubuf_info.
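
      A minimal sketch of the pattern the fix describes; the struct layout
      and loop bound are assumptions, only the function name comes from the
      message:

        static void vhost_net_clear_ubuf_info(struct vhost_net *n)
        {
            int i;

            for (i = 0; i < VHOST_NET_VQ_MAX; i++) {
                kfree(n->vqs[i].ubuf_info);
                /* clear the pointer so a later call is a no-op
                 * instead of a double free */
                n->vqs[i].ubuf_info = NULL;
            }
        }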
      Reported-by: Tommi Rantala <tt.rantala@gmail.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • vhost: check owner before we overwrite ubuf_info · 05c05351
      Committed by Michael S. Tsirkin
      If the device has an owner, we shouldn't touch ubuf_info
      since it might be in use.
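
      A sketch of the guard described above; vhost_dev_has_owner() and the
      error path are illustrative assumptions:

        /* refuse to touch ubuf_info while the device has an owner,
         * since the owner may be using it */
        if (vhost_dev_has_owner(&n->dev)) {
            r = -EBUSY;
            goto out;
        }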
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • vhost_net: clear msg.control for non-zerocopy case during tx · 4364d5f9
      Committed by Jason Wang
      When we decide not to use zero-copy, msg.control should be set to
      NULL; otherwise macvtap/tap may set zerocopy callbacks which can
      wrongly decrease the kref of the ubufs.

      The bug was introduced by commit cedb9bdc
      (vhost-net: skip head management if no outstanding).
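
      A sketch of the fix in the tx path; zcopy_used and ubuf are assumed
      local names, msg is the struct msghdr handed to the backend:

        if (zcopy_used) {
            /* zero-copy: hand the ubuf callback to tap/macvtap */
            msg.msg_control = ubuf;
            msg.msg_controllen = sizeof(ubuf);
        } else {
            /* non-zerocopy: never let the backend see a stale ubuf,
             * or it may take an extra reference on it */
            msg.msg_control = NULL;
            msg.msg_controllen = 0;
        }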
      
      This solves the following warnings:
      
      WARNING: at include/linux/kref.h:47 handle_tx+0x477/0x4b0 [vhost_net]()
      Modules linked in: vhost_net macvtap macvlan tun nfsd exportfs bridge stp llc openvswitch kvm_amd kvm bnx2 megaraid_sas [last unloaded: tun]
      CPU: 5 PID: 8670 Comm: vhost-8668 Not tainted 3.10.0-rc2+ #1566
      Hardware name: Dell Inc. PowerEdge R715/00XHKG, BIOS 1.5.2 04/19/2011
      ffffffffa0198323 ffff88007c9ebd08 ffffffff81796b73 ffff88007c9ebd48
      ffffffff8103d66b 000000007b773e20 ffff8800779f0000 ffff8800779f43f0
      ffff8800779f8418 000000000000015c 0000000000000062 ffff88007c9ebd58
      Call Trace:
      [<ffffffff81796b73>] dump_stack+0x19/0x1e
      [<ffffffff8103d66b>] warn_slowpath_common+0x6b/0xa0
      [<ffffffff8103d6b5>] warn_slowpath_null+0x15/0x20
      [<ffffffffa0197627>] handle_tx+0x477/0x4b0 [vhost_net]
      [<ffffffffa0197690>] handle_tx_kick+0x10/0x20 [vhost_net]
      [<ffffffffa019541e>] vhost_worker+0xfe/0x1a0 [vhost_net]
      [<ffffffffa0195320>] ? vhost_attach_cgroups_work+0x30/0x30 [vhost_net]
      [<ffffffffa0195320>] ? vhost_attach_cgroups_work+0x30/0x30 [vhost_net]
      [<ffffffff81061f46>] kthread+0xc6/0xd0
      [<ffffffff81061e80>] ? kthread_freezable_should_stop+0x70/0x70
      [<ffffffff817a1aec>] ret_from_fork+0x7c/0xb0
      [<ffffffff81061e80>] ? kthread_freezable_should_stop+0x70/0x70
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 08 May, 2013 (1 commit)
  4. 07 May, 2013 (1 commit)
  5. 06 May, 2013 (7 commits)
  6. 02 May, 2013 (3 commits)
  7. 01 May, 2013 (7 commits)
  8. 25 Apr, 2013 (4 commits)
  9. 12 Apr, 2013 (1 commit)
    • vhost_net: remove tx polling state · 70181d51
      Committed by Jason Wang
      After commit 2b8b328b (vhost_net: handle polling errors when setting
      backend), we in fact track the polling state through poll->wqh, so
      there's no need to duplicate the work with an extra
      vhost_net_polling_state. This patch removes it and makes the code
      simpler.

      This patch also removes all the tx starting/stopping code in the tx
      path, as Michael suggested.
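
      A sketch of the idea; vhost_poll field names other than wqh are
      assumptions:

        /* the poll is "started" exactly when it sits on a waitqueue,
         * so poll->wqh already encodes the state that the removed
         * vhost_net_polling_state used to track */
        static bool vhost_poll_started(struct vhost_poll *poll)
        {
            return poll->wqh != NULL;
        }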
      
      Netperf shows almost the same results in the stream test, but
      improvements in the TCP_RR tests (both zerocopy and copy), especially
      under low load.
      
      Tested between a multiqueue kvm guest and an external host with two
      directly connected 82599s.
      
      zerocopy disabled (each cell: before/after/+improvement):

      sessions | transaction rate            | normalized             |
      1        | 9510.24/11727.29/+23.3%     | 693.54/887.68/+28.0%   |
      25       | 192931.50/241729.87/+25.3%  | 2376.80/2771.70/+16.6% |
      50       | 277634.64/291905.76/+5%     | 3118.36/3230.11/+3.6%  |

      zerocopy enabled (each cell: before/after/+improvement):

      sessions | transaction rate            | normalized             |
      1        | 7318.33/11929.76/+63.0%     | 521.86/843.30/+61.6%   |
      25       | 167264.88/242422.15/+44.9%  | 2181.60/2788.16/+27.8% |
      50       | 272181.02/294347.04/+8.1%   | 3071.56/3257.85/+6.1%  |
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 11 Apr, 2013 (4 commits)
  11. 09 Apr, 2013 (2 commits)
    • tcm_vhost: Initialize vq->last_used_idx when set endpoint · dfd5d569
      Committed by Asias He
      This patch fixes a guest hang when booting seabios and then the guest.
      
        [    0.576238] scsi0 : Virtio SCSI HBA
        [    0.616754] virtio_scsi virtio1: request:id 0 is not a head!
      
      vq->last_used_idx is initialized only when /dev/vhost-scsi is
      opened or closed:

         vhost_scsi_open() -> vhost_dev_init() -> vhost_vq_reset()
         vhost_scsi_release() -> vhost_dev_cleanup() -> vhost_vq_reset()

      So, when the guest talks to tcm_vhost after seabios does,
      vq->last_used_idx still contains the old value from seabios. This
      confuses the guest.

      Fix this by calling vhost_init_used() to init vq->last_used_idx when
      we set the endpoint.
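
      A sketch of where the call lands in vhost_scsi_set_endpoint(); the
      locking and the private_data assignment are simplified assumptions,
      only vhost_init_used() comes from the message:

        mutex_lock(&vq->mutex);
        vq->private_data = vs;
        /* re-read the used index from the ring so last_used_idx matches
         * what the next driver (the guest kernel after seabios) expects */
        vhost_init_used(vq);
        mutex_unlock(&vq->mutex);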
      Signed-off-by: Asias He <asias@redhat.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
    • tcm_vhost: Use vq->private_data to indicate if the endpoint is setup · 4f7f46d3
      Committed by Asias He
      Currently, vs->vs_endpoint is used to indicate whether the endpoint
      is set up or not. It is set or cleared in vhost_scsi_set_endpoint()
      or vhost_scsi_clear_endpoint() under the vs->dev.mutex lock. However,
      when we check it in vhost_scsi_handle_vq(), we ignore the lock.

      Instead of using vs->vs_endpoint and the vs->dev.mutex lock to
      indicate the status of the endpoint, we use the per-virtqueue
      vq->private_data to indicate it. In this way, we only need to take
      the per-queue vq->mutex lock, so concurrent multiqueue processing
      has less lock contention. Further, on the read side of
      vq->private_data we do not even need to take the lock when it is
      accessed from the vhost worker thread, because it is protected by
      "vhost rcu".
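
      A sketch of the resulting check in the handler; the helper name and
      the rcu_dereference_check() condition are illustrative assumptions:

        /* the endpoint is configured iff set_endpoint stored a pointer
         * in vq->private_data */
        static bool tcm_vhost_endpoint_ready(struct vhost_virtqueue *vq)
        {
            /* called from the vhost worker: covered by "vhost rcu" */
            return rcu_dereference_check(vq->private_data, 1) != NULL;
        }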
      
      (nab: Do s/VHOST_FEATURES/~VHOST_SCSI_FEATURES)
      Signed-off-by: Asias He <asias@redhat.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
  12. 03 Apr, 2013 (1 commit)
  13. 29 Mar, 2013 (1 commit)
  14. 20 Mar, 2013 (2 commits)
  15. 19 Mar, 2013 (2 commits)