- 11 June 2013, 2 commits
-
-
By Michael S. Tsirkin
If the device has an owner, we shouldn't touch ubuf_info since it might be in use.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jason Wang
When we decide not to use zero-copy, msg.control should be set to NULL; otherwise macvtap/tap may set zero-copy callbacks, which may wrongly decrease the kref of ubufs. The bug was introduced by commit cedb9bdc (vhost-net: skip head management if no outstanding). This resolves the following warnings:

WARNING: at include/linux/kref.h:47 handle_tx+0x477/0x4b0 [vhost_net]()
Modules linked in: vhost_net macvtap macvlan tun nfsd exportfs bridge stp llc openvswitch kvm_amd kvm bnx2 megaraid_sas [last unloaded: tun]
CPU: 5 PID: 8670 Comm: vhost-8668 Not tainted 3.10.0-rc2+ #1566
Hardware name: Dell Inc. PowerEdge R715/00XHKG, BIOS 1.5.2 04/19/2011
 ffffffffa0198323 ffff88007c9ebd08 ffffffff81796b73 ffff88007c9ebd48
 ffffffff8103d66b 000000007b773e20 ffff8800779f0000 ffff8800779f43f0
 ffff8800779f8418 000000000000015c 0000000000000062 ffff88007c9ebd58
Call Trace:
 [<ffffffff81796b73>] dump_stack+0x19/0x1e
 [<ffffffff8103d66b>] warn_slowpath_common+0x6b/0xa0
 [<ffffffff8103d6b5>] warn_slowpath_null+0x15/0x20
 [<ffffffffa0197627>] handle_tx+0x477/0x4b0 [vhost_net]
 [<ffffffffa0197690>] handle_tx_kick+0x10/0x20 [vhost_net]
 [<ffffffffa019541e>] vhost_worker+0xfe/0x1a0 [vhost_net]
 [<ffffffffa0195320>] ? vhost_attach_cgroups_work+0x30/0x30 [vhost_net]
 [<ffffffffa0195320>] ? vhost_attach_cgroups_work+0x30/0x30 [vhost_net]
 [<ffffffff81061f46>] kthread+0xc6/0xd0
 [<ffffffff81061e80>] ? kthread_freezable_should_stop+0x70/0x70
 [<ffffffff817a1aec>] ret_from_fork+0x7c/0xb0
 [<ffffffff81061e80>] ? kthread_freezable_should_stop+0x70/0x70

Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
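A minimal sketch of the fixed control path in handle_tx(), assuming the local names (zcopy_used, ubuf) used by the vhost-net code of that era; treat it as illustrative rather than the exact patch:

```c
/* Sketch only: zcopy_used and ubuf are stand-ins for the locals in
 * drivers/vhost/net.c, not verbatim code. */
if (zcopy_used) {
	/* Zero-copy: hand the ubuf_info to the lower device so it can
	 * report DMA completion back to vhost. */
	msg.msg_control = ubuf;
	msg.msg_controllen = sizeof(*ubuf);
} else {
	/* Plain copy path: leave nothing in msg.control, otherwise
	 * macvtap/tap would install zero-copy callbacks and drop the
	 * ubufs kref for a buffer vhost never handed out. */
	msg.msg_control = NULL;
	msg.msg_controllen = 0;
}
```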
-
- 06 May 2013, 3 commits
-
-
By Asias He
- Rename vhost_ubuf to vhost_net_ubuf
- Rename vhost_zcopy_mask to vhost_net_zcopy_mask
- Make funcs static

Signed-off-by: Asias He <asias@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
By Asias He
vhost.h should not depend on device-specific macros like VHOST_NET_F_VIRTIO_NET_HDR and VIRTIO_NET_F_MRG_RXBUF.

Signed-off-by: Asias He <asias@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
By Asias He
Signed-off-by: Asias He <asias@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
- 01 May 2013, 4 commits
-
-
By Michael S. Tsirkin
The RESET_OWNER ioctl would leave the fd in a bad state if memory allocation failed: the device is stopped but the owner is not reset. Make state changes after allocating memory, so that a failed ioctl has no effect.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
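The fix follows the usual "allocate first, mutate state last" ordering, so a failed allocation leaves the device exactly as it was. A hypothetical sketch of that ordering (the struct and helper names here are made up for illustration, not the real vhost symbols):

```c
/* Hypothetical sketch: my_dev, my_mem, stop_device() and reset_owner()
 * are illustrative names, not the real vhost code. */
static long do_reset_owner(struct my_dev *dev)
{
	struct my_mem *mem;

	mem = kmalloc(sizeof(*mem), GFP_KERNEL);   /* the step that can fail */
	if (!mem)
		return -ENOMEM;      /* nothing has been stopped or reset yet */

	stop_device(dev);            /* state changes only after allocation */
	reset_owner(dev, mem);       /* cannot fail from here on */
	return 0;
}
```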
-
By Michael S. Tsirkin
This will remove the need for vhost scsi to pull in virtio-net.h.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
By Asias He
On top of 'vhost: Allow device specific fields per vq', we can move device-specific fields from the vhost virtqueue to the device virtqueue.

Signed-off-by: Asias He <asias@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
By Asias He
This is useful for any device that wants device-specific fields per vq. For example, tcm_vhost wants a per-vq field to track requests which are in flight on the vq. On top of this we can also add patches to move things like ubufs from vhost.h out to net.c.

Signed-off-by: Asias He <asias@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
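The kernel idiom this enables is embedding the generic virtqueue in a device-specific wrapper and recovering it with container_of(); a sketch under that assumption (the wrapper's fields are illustrative, not the exact ones later added by net.c or tcm_vhost):

```c
#include <linux/kernel.h>	/* container_of() */

/* Illustrative device-specific wrapper around the generic vq. */
struct vhost_net_virtqueue {
	struct vhost_virtqueue vq;	/* generic vhost part */
	/* net-only state can now live here instead of in vhost.h, e.g.: */
	int upend_idx;			/* zero-copy bookkeeping */
	int done_idx;
};

static inline struct vhost_net_virtqueue *to_net_vq(struct vhost_virtqueue *vq)
{
	return container_of(vq, struct vhost_net_virtqueue, vq);
}
```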
-
- 12 April 2013, 1 commit
-
-
By Jason Wang
After commit 2b8b328b (vhost_net: handle polling errors when setting backend), we in fact track the polling state through poll->wqh, so there's no need to duplicate that work with an extra vhost_net_polling_state. This patch removes it and makes the code simpler. It also removes all the tx starting/stopping code in the tx path, following Michael's suggestion.

Netperf shows almost the same results in stream tests, but improvements in TCP_RR tests (both zerocopy and copy), especially under low load. Tested between a multiqueue kvm guest and an external host with two directly connected 82599s.

zerocopy disabled (before/after/+improvement):
sessions | transaction rates          | normalized
       1 | 9510.24/11727.29/+23.3%    | 693.54/887.68/+28.0%
      25 | 192931.50/241729.87/+25.3% | 2376.80/2771.70/+16.6%
      50 | 277634.64/291905.76/+5%    | 3118.36/3230.11/+3.6%

zerocopy enabled (before/after/+improvement):
sessions | transaction rates          | normalized
       1 | 7318.33/11929.76/+63.0%    | 521.86/843.30/+61.6%
      25 | 167264.88/242422.15/+44.9% | 2181.60/2788.16/+27.8%
      50 | 272181.02/294347.04/+8.1%  | 3071.56/3257.85/+6.1%

Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 18 March 2013, 1 commit
-
-
By Michael S. Tsirkin
The ubuf info allocator uses the guest-controlled head as an index, so a malicious guest could put the same head entry in the ring twice, and we would get two callbacks on the same value. To fix this, use upend_idx, which is guaranteed to be unique.

Reported-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Cc: stable@kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
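A sketch of the fixed indexing, with field names following the vhost-net zero-copy code of that period but not guaranteed verbatim: the slot is chosen by the host-side upend_idx, which the guest cannot alias.

```c
/* Sketch: pick the ubuf_info slot by host-controlled upend_idx, not by the
 * guest-supplied head, so two in-flight packets can never share a slot. */
struct ubuf_info *ubuf = vq->ubuf_info + vq->upend_idx;

vq->heads[vq->upend_idx].id = head;	/* remember which head to complete */
ubuf->ctx = vq->ubufs;
ubuf->desc = vq->upend_idx;		/* unique while the DMA is outstanding */
vq->upend_idx = (vq->upend_idx + 1) % UIO_MAXIOV;
```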
-
- 30 January 2013, 2 commits
-
-
By Jason Wang
Currently, polling errors are ignored, which can lead to the following issues:

- vhost removes itself unconditionally from the waitqueue when stopping the poll; this may crash the kernel, since the previous attempt to start may have failed to add it to the waitqueue.
- userspace may think the backend was set successfully even when polling failed.

Solve this by:

- checking poll->wqh before trying to remove from the waitqueue;
- reporting polling errors in vhost_poll_start() and tx_poll_start(); the return value is checked and returned when userspace wants to set the backend.

After this fix there could still be a polling failure after the backend is set; it will be addressed by the next patch.

Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
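The shape of the fix, sketched from the description above; the return value and the poll->wqh check are the point, while the bodies are simplified, so consult drivers/vhost/vhost.c for the real versions.

```c
/* Simplified sketch, not verbatim kernel code. */
int vhost_poll_start(struct vhost_poll *poll, struct file *file)
{
	unsigned long mask;

	mask = file->f_op->poll(file, &poll->table);
	if (mask & POLLERR) {
		vhost_poll_stop(poll);
		return -EINVAL;	/* propagated to the SET_BACKEND caller */
	}
	return 0;
}

void vhost_poll_stop(struct vhost_poll *poll)
{
	if (poll->wqh) {	/* only remove what was actually added */
		remove_wait_queue(poll->wqh, &poll->wait);
		poll->wqh = NULL;
	}
}
```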
-
By Jason Wang
Currently, when vhost_init_used() fails, the sock refcount and ubufs are leaked. Correct this by calling vhost_init_used() before assigning ubufs, and restore the old sock when it fails.

Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
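A sketch of the reordered error path in vhost_net_set_backend(), with hedged names (vhost_ubuf_put_and_wait and the exact label layout are assumptions based on the code of that era, not a verbatim copy):

```c
/* Sketch of the ordering only, not verbatim kernel code. */
r = vhost_init_used(vq);		/* the step that can fail comes first */
if (r)
	goto err_used;

oldubufs = vq->ubufs;			/* commit the new ubufs and sock only now */
vq->ubufs = ubufs;
vq->private_data = sock;
return 0;

err_used:
	vq->private_data = oldsock;	/* put the old socket back */
	if (ubufs)
		vhost_ubuf_put_and_wait(ubufs);	/* free the never-used ubufs */
	sockfd_put(sock);			/* drop the new sock reference */
	return r;
```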
-
- 06 December 2012, 4 commits
-
-
By Michael S. Tsirkin
Zero copy TX has been around for a while now. We seem to be down to eliminating theoretical bugs and performance tuning at this point: it's probably time to enable it by default so that most users get the benefit. Keep the flag around meanwhile so users can experiment with disabling this if they experience regressions. I expect that we will remove it in the future.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
By Michael S. Tsirkin
For short packets, zerocopy mode adds the overhead of managing heads, which isn't necessary: we could simply update the used ring directly, the same as with zerocopy disabled. Things seem to run a bit faster if we detect and bypass head management when zcopy isn't used.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
By Michael S. Tsirkin
When memory map changes, we need to flush outstanding DMAs as they might in theory reference old memory addresses. To do this simply stop initiating new DMAs and wait for ubufs ref count to drop to 0. Afterwards reset the count back to 1.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
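The "quiesce and re-arm" pattern described here can be sketched with a kref and a waitqueue; the names below are assumptions for illustration, not necessarily the helpers the patch adds.

```c
/* Hypothetical sketch of the flush pattern. */
static void flush_outstanding_dmas(struct vhost_ubuf_ref *ubufs)
{
	/* Stop counting ourselves as a user; completions drop the rest. */
	kref_put(&ubufs->kref, vhost_zerocopy_done_signal);

	/* Wait until every outstanding DMA has called back. */
	wait_event(ubufs->wait, !atomic_read(&ubufs->kref.refcount));

	/* Re-arm for the next round of zero-copy transmits. */
	kref_init(&ubufs->kref);
}
```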
-
By Michael S. Tsirkin
vring changes already do a flush internally where appropriate, so we do not need a second flush. It's currently not very expensive but a follow-up patch makes flush more heavy-weight, so remove the extra flush here to avoid regressing performance if call or kick fds are changed on data path.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
- 04 December 2012, 1 commit
-
-
By Michael S. Tsirkin
These packet counters are used to drive the zerocopy selection heuristic, so nothing too bad happens if they are off a bit - and they are also reset once in a while. But it's cleaner to clear them when the backend is set so that we start in a known state.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 03 November 2012, 4 commits
-
-
By Michael S. Tsirkin
It seems that to avoid deadlocks it is enough to poll vq before we are going to use the last buffer. This is faster than c70aa540.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Michael S. Tsirkin
Even when vhost-net is in zero-copy transmit mode, net core might still decide to copy the skb later which is somewhat slower than a copy in user context: data copy overhead is added to the cost of page pin/unpin. The result is that enabling tx zero copy option leads to higher CPU utilization for guest to guest and guest to host traffic. To fix this, suppress zero copy tx after a given number of packets triggered late data copy. Re-enable periodically to detect workload changes.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
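The heuristic boils down to comparing two counters and re-checking periodically; a sketch with the ratio as an assumption (the counter names follow the commit text, the 1/64 threshold is illustrative):

```c
/* Sketch of the adaptive check; threshold is illustrative only. */
static bool tx_select_zcopy(struct vhost_net *net)
{
	/* Keep zero-copy only while late data copies stay rare; the
	 * counters are reset periodically so workload changes are seen. */
	return net->tx_packets / 64 >= net->tx_zcopy_err;
}
```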
-
By Michael S. Tsirkin
Zerocopy handling code is vhost-net specific. Move it from vhost.c/vhost.h out to net.c.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Michael S. Tsirkin
Better document macros for DMA tracking. Add an explicit one for DMA in progress instead of relying on user supplying len != 1.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 25 October 2012, 1 commit
-
-
By Michael S. Tsirkin
We copy the head count to a 16-bit field; this works by chance on LE, but on BE the guest gets 0. Fix it up.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Alexander Graf <agraf@suse.de>
Cc: stable@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
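Why the truncated store "works by chance" on little-endian only: copying the first bytes of a 32-bit value picks up the low half on LE and the zero high half on BE. A standalone illustration (not the kernel code itself):

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
	uint32_t headcount = 5;	/* what vhost computed */
	uint16_t wire;		/* the 16-bit field actually written */

	/* Byte-wise copy of the first two bytes: low half on LE, high
	 * (zero) half on BE - the bug described above. */
	memcpy(&wire, &headcount, sizeof(wire));
	printf("guest might see %u\n", wire);	/* 5 on LE, 0 on BE */

	/* Explicit truncation gives the same answer on both. */
	wire = (uint16_t)headcount;
	printf("fixed value %u\n", wire);
	return 0;
}
```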
-
- 22 July 2012, 1 commit
-
-
By Stefan Hajnoczi
In order for other vhost devices to use the VHOST_FEATURES bits the vhost-net specific bits need to be moved to their own VHOST_NET_FEATURES constant. (Asias: Update drivers/vhost/test.c to use VHOST_NET_FEATURES)

Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Cc: Zhi Yong Wu <wuzhy@cn.ibm.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Asias He <asias@redhat.com>
Signed-off-by: Nicholas A. Bellinger <nab@risingtidesystems.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
- 12 May 2012, 1 commit
-
-
By Basil Gor
Take the vlan header length into account when the vlan id is stored as vlan_tci; otherwise tagged packets coming from macvtap will be truncated.

Signed-off-by: Basil Gor <basil.gor@gmail.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 02 May 2012, 3 commits
-
-
By Jason Wang
When a packet is fully copied in zerocopy, we don't wait for DMA completion to mark the done flag, so after the packet is passed to the lower device we need to add it to the used ring and signal the guest immediately.

Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
By Jason Wang
Currently, we restart tx polling unconditionally when sendmsg() fails. This causes unnecessary wakeups of vhost workers and wasted CPU when malicious userspace (a guest driver) is able to hit EFAULT or EINVAL. Polling is only needed when the socket send buffer is exceeded or there is not enough memory. So fix this by restarting polling only when sendmsg() returns EAGAIN/ENOBUFS.

Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
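A sketch of the resulting check after sendmsg() in handle_tx(); the surrounding calls follow the vhost-net code of that era but are not guaranteed verbatim:

```c
/* Sketch: only re-arm tx polling for transient flow-control errors. */
err = sock->ops->sendmsg(NULL, sock, &msg, len);
if (unlikely(err < 0)) {
	vhost_discard_vq_desc(vq, 1);		/* give the descriptor back */
	if (err == -EAGAIN || err == -ENOBUFS)
		tx_poll_start(net, sock);	/* wait for the socket to drain */
	/* EFAULT, EINVAL, ...: don't keep waking the worker for bad input */
	break;
}
```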
-
By Jason Wang
When we want to disable the vhost_net backend while there is tx work, a NULL pointer dereference may happen when we try to dereference vq->bufs after vhost_net_set_backend() assigns NULL to it. As suggested by Michael, fix this by checking vq->bufs instead of vhost_sock_zcopy().

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
- 14 April 2012, 1 commit
-
-
By Michael S. Tsirkin
The skb struct ubuf_info callback gets passed struct ubuf_info itself, not the arg value as the field name and the function signature seem to imply. Rename the arg field to ctx to match usage, add documentation and change the callback argument type to make usage clear and to have compiler check correctness.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
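After the change, the structure reads roughly as below; this is a sketch of the shape described in the commit, so check include/linux/skbuff.h of that era for the authoritative definition.

```c
/* Sketch of struct ubuf_info after the rename (shape only). */
struct ubuf_info {
	/* The callback receives the ubuf_info itself... */
	void (*callback)(struct ubuf_info *ubuf);
	void *ctx;		/* ...and finds its private cookie here,
				 * e.g. vhost's ubufs tracking structure */
	unsigned long desc;	/* driver-private descriptor index */
};
```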
-
- 28 February 2012, 1 commit
-
-
By Michael S. Tsirkin
We shouldn't hold any locks on release path. Pass a flag to vhost_dev_cleanup to use the lockdep info correctly.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Sasha Levin <levinsasha928@gmail.com>
-
- 14 January 2012, 1 commit
-
-
By stephen hemminger
By adding some module aliases, programs (or users) won't have to explicitly call modprobe. Vhost-net will always be available if built into the kernel. It does require assigning a permanent minor number for depmod to work.

Also:
- use C99 style initialization.
- add missing entry in documentation for loop-control

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Acked-By: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
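The mechanism is the standard miscdevice one: a fixed minor plus devname/minor aliases let modprobe (via udev) load the module when /dev/vhost-net is first opened. A sketch under that assumption, using the C99 designated initializers the commit mentions; field values are illustrative.

```c
/* Sketch of the registration pattern; values are illustrative. */
static struct miscdevice vhost_net_misc = {
	.minor = VHOST_NET_MINOR,	/* permanent minor so depmod can map it */
	.name  = "vhost-net",
	.fops  = &vhost_net_fops,
};

MODULE_ALIAS_MISCDEV(VHOST_NET_MINOR);
MODULE_ALIAS("devname:vhost-net");	/* autoload on open of /dev/vhost-net */
```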
-
- 21 July 2011, 2 commits
-
-
By Shirley Ma
The method for calculating the number of outstanding buffers gives incorrect results when vq->upend_idx wraps around zero. Fix that.

Signed-off-by: Shirley Ma <xma@us.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
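The count has to be taken modulo the ring size so it stays correct when upend_idx wraps past zero. A standalone illustration of the corrected arithmetic (RING_SIZE stands in for UIO_MAXIOV, as used by vhost):

```c
#include <stdio.h>

#define RING_SIZE 1024			/* stands in for UIO_MAXIOV */

/* Outstanding zero-copy buffers between done_idx and upend_idx,
 * correct even after upend_idx wraps around zero. */
static int outstanding(int upend_idx, int done_idx)
{
	return (upend_idx + RING_SIZE - done_idx) % RING_SIZE;
}

int main(void)
{
	printf("%d\n", outstanding(10, 5));	/* 5: no wrap */
	printf("%d\n", outstanding(3, 1020));	/* 7: upend_idx wrapped */
	return 0;
}
```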
-
By Michael S. Tsirkin
On backend change, we flushed out outstanding skbs but forgot to update the used ring, so that done entries were left in the ubuf_info ring. As a result we lose heads or complete incorrect ones, crashing the guest or leaking memory. Fix by updating the used ring.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
- 19 July 2011, 2 commits
-
-
By Jason Wang
Move the used ring initialization after backend was set. This makes it possible to disable the backend and tweak the used ring, then restart. This will also make it possible to log the used ring write correctly.

Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
By Michael S. Tsirkin
From: Shirley Ma <mashirle@us.ibm.com>

This adds experimental zero copy support in vhost-net, disabled by default. To enable, set the experimental_zcopytx module option to 1.

This patch maintains the outstanding userspace buffers in the sequence in which they are delivered to vhost. The outstanding userspace buffers will be marked as done once the lower device's DMA has finished. This is monitored through the kfree_skb callback on the last reference. Two buffer indices are used for this purpose.

The vhost-net device passes the userspace buffer info to the lower device skb through message control. DMA done status check and guest notification are handled by handle_tx: in the worst case all buffers in the vq are in pending/done status, so we need to notify the guest to release DMA-done buffers first before we get any new buffers from the vq.

One known problem is that if the guest stops submitting buffers, buffers might never get used until some further action, e.g. device reset. This does not seem to affect linux guests.

Signed-off-by: Shirley Ma <xma@us.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 30 May 2011, 1 commit
-
-
By Michael S. Tsirkin
Support the new event index feature. When acked, utilize it to reduce the # of interrupts sent to the guest.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
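The event index lets the host skip an interrupt unless the guest asked to be notified at or before the entry just published. The comparison helper from the virtio ring ABI captures the idea; it is reproduced here from memory, so treat it as a sketch and see include/uapi/linux/virtio_ring.h for the authoritative version.

```c
/* Has the publisher passed event_idx since "old"?  All arithmetic is
 * modulo 2^16, which is why the casts matter. */
static inline int vring_need_event(__u16 event_idx, __u16 new_idx, __u16 old)
{
	return (__u16)(new_idx - event_idx - 1) < (__u16)(new_idx - old);
}
```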
-
- 14 March 2011, 2 commits
-
-
By Michael S. Tsirkin
Use of skb_queue_empty(&sock->sk->sk_receive_queue) without taking the sk_receive_queue.lock is unsafe or useless. Take it out.

Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
By Jason Wang
vhost takes a sock lock to try to prevent the skb from being pulled from the receive queue after skb_peek. However, this is not the right lock to use for that; sk_receive_queue.lock is. Fix that up.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
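The idea in sketch form: the peek and any use of the peeked skb's length must happen under the same lock the networking core takes when linking and unlinking skbs on that queue. Surrounding details are illustrative.

```c
/* Sketch of a safe peek, in the spirit of vhost's peek_head_len(). */
struct sk_buff *head;
unsigned long flags;
int len = 0;

spin_lock_irqsave(&sk->sk_receive_queue.lock, flags);
head = skb_peek(&sk->sk_receive_queue);
if (head)
	len = head->len;	/* safe: head cannot be unlinked while we hold the lock */
spin_unlock_irqrestore(&sk->sk_receive_queue.lock, flags);
```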
-
- 13 March 2011, 2 commits
-
-
By Jason Wang
Code duplication was found between the handling of mergeable and big buffers, so this patch unifies them. This is easily done by adding a quota to get_rx_bufs() that limits the number of buffers it returns (for mergeable buffers the quota is simply UIO_MAXIOV; for big buffers it is just 1), after which the previous handle_rx_mergeable() can also be reused for big buffers.

Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
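A sketch of the unified call described above; get_rx_bufs() and its quota argument follow the commit text, while the mergeable flag and the other arguments are written as plain illustrative names.

```c
/* Sketch: one receive path, parameterized by how many buffers may be used
 * for a single packet. */
int quota = mergeable ? UIO_MAXIOV : 1;

headcount = get_rx_bufs(vq, vq->heads, vhost_len, &in,
			vq_log, &log, quota);
/* With a quota of 1 this behaves exactly like the old big-buffer path;
 * with UIO_MAXIOV it can gather a chain of mergeable buffers. */
```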
-
By Jason Wang
No need to check mergeable buffer support inside the receive loop, as the whole handle_rx() is in the read critical region. So this patch moves the check ahead of the receive loop.

Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-