- 18 Oct 2013, 2 commits
-
-
Submitted by Jason Wang

We used to schedule the refill work unconditionally after changing the number of queues. This can lead to a problem when the device is not up: since we only try to cancel the work in ndo_stop(), the refill work may still run after the device has been removed. Fix this by scheduling the work only when the device is up. The bug was introduced by commit 9b9cd802 ("virtio-net: fix the race between channels setting and refill").

Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
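A minimal sketch of the pattern described above, not the actual diff; the struct layout and the names ending in _sketch are invented for illustration.

```c
#include <linux/netdevice.h>
#include <linux/workqueue.h>

/* Trimmed stand-in for the driver's private state (illustrative only). */
struct virtnet_info_sketch {
	struct net_device *dev;
	struct delayed_work refill;
	u16 curr_queue_pairs;
};

/* Only schedule the refill work when the device is actually up;
 * ndo_stop() is the only place that cancels it, so queueing it while
 * the device is down could leave it running after removal. */
static void virtnet_update_queues_sketch(struct virtnet_info_sketch *vi,
					 u16 queue_pairs)
{
	vi->curr_queue_pairs = queue_pairs;

	if (netif_running(vi->dev))
		schedule_delayed_work(&vi->refill, 0);
}
```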
-
Submitted by Jason Wang

We currently try to re-configure the affinity unconditionally in the CPU hotplug callback. This causes problems when resuming from S3/S4, since:

- the virtqueues have not been allocated at that point, and
- it is unnecessary anyway, because the thaw method will re-configure the affinity.

Fix this by checking config_enable and doing nothing if we are not ready. The bug was introduced by commit 8de4b2f3 ("virtio-net: reset virtqueue affinity when doing cpu hotplug").

Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 04 Sep 2013, 1 commit
-
-
Submitted by Thomas Huth

If the VIRTIO_NET_F_GUEST_CSUM virtio feature is available, the guest does not have to calculate the checksums on all received packets. This is essentially the same feature as RX checksum offloading on real network cards, so the virtio-net driver should report it by setting the NETIF_F_RXCSUM flag. When the user then runs "ethtool -k", he or she can see whether the virtio-net interface has to calculate RX checksums or not.

Signed-off-by: Thomas Huth <thuth@linux.vnet.ibm.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
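A minimal sketch of the probe-time feature mapping described above; the helper name is invented and the exact placement inside virtnet_probe() may differ.

```c
#include <linux/netdevice.h>
#include <linux/virtio_config.h>
#include <linux/virtio_net.h>

/* If the host checksums received packets for us, advertise RX checksum
 * offload so "ethtool -k" reports it. */
static void virtnet_report_rxcsum_sketch(struct virtio_device *vdev,
					 struct net_device *dev)
{
	if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_CSUM))
		dev->hw_features |= NETIF_F_RXCSUM;
}
```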
-
- 28 Jul 2013, 1 commit
-
-
Submitted by Michael S. Tsirkin

For small packets we can simplify xmit processing by linearizing buffers with the header: most packets seem to have enough headroom that we can use for this purpose. Since existing hypervisors require that the header is the first s/g element, we need a feature bit for this.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 Jul 2013, 1 commit
-
-
Submitted by Michael S. Tsirkin

virtio-net called virtqueue_enable_cb on the RX path after napi_complete, i.e. with NAPI_STATE_SCHED clear, outside the implicit napi lock. This violates the requirement to synchronize virtqueue_enable_cb with virtqueue_add_buf. In particular, the used event can move backwards, causing us to lose interrupts. In a debug build, this can trigger a panic within START_USE. Jason Wang reports that he can trigger the race artificially, by adding udelay() in virtqueue_enable_cb() after virtio_mb(). However, we must call napi_complete to clear NAPI_STATE_SCHED before polling the virtqueue for used buffers, otherwise napi_schedule_prep in a callback will fail, causing us to lose RX events. To fix, call virtqueue_enable_cb_prepare with NAPI_STATE_SCHED set (under the napi lock), and later call virtqueue_poll with NAPI_STATE_SCHED clear (outside the lock).

Reported-by: Jason Wang <jasowang@redhat.com>
Tested-by: Jason Wang <jasowang@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
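A sketch of the fixed ordering, not the driver's actual poll routine; "received" would come from the driver's receive loop and the surrounding structure is trimmed to the sequence that matters.

```c
#include <linux/netdevice.h>
#include <linux/virtio.h>

static int virtnet_poll_sketch(struct napi_struct *napi, struct virtqueue *rvq,
			       unsigned int received, int budget)
{
	if (received < budget) {
		/* 1. prepare the callback while NAPI_STATE_SCHED is still set */
		unsigned int opaque = virtqueue_enable_cb_prepare(rvq);

		/* 2. clear NAPI_STATE_SCHED */
		napi_complete(napi);

		/* 3. poll the saved token outside the napi "lock";
		 *    reschedule if a used buffer slipped in meanwhile */
		if (unlikely(virtqueue_poll(rvq, opaque)) &&
		    napi_schedule_prep(napi)) {
			virtqueue_disable_cb(rvq);
			__napi_schedule(napi);
		}
	}
	return received;
}
```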
-
- 04 Jul 2013, 1 commit
-
-
Submitted by Jason Wang

Commit 55257d72 ("virtio-net: fill only rx queues which are being used") tries to refill on demand when changing the number of channels by calling try_refill_recv() directly. This may race with:

- the refill work, which may be doing a refill at the same time, and
- try_refill_recv() called in bh context, since napi was not disabled.

This can make the guest complain while setting channels:

virtio_net virtio0: input.1:id 0 is not a head!

Solve this by scheduling the refill work instead, which guarantees serialization of the refill.

Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 23 May 2013, 1 commit
-
-
Submitted by Jason Wang

Commit 55257d72 ("virtio-net: fill only rx queues which are being used") enables napi during open only for curr_queue_pairs. This breaks multiqueue receiving, since the napi contexts of newly used queues are still disabled after changing the number of queues. This patch fixes it by enabling napi for all possible queues during open.

Cc: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
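A minimal sketch of the open path after the fix; struct layout and names are invented for illustration, and the rx refill of the in-use queues is elided.

```c
#include <linux/netdevice.h>

struct rq_sketch {
	struct napi_struct napi;
};

struct mq_state_sketch {
	struct rq_sketch *rq;
	unsigned int max_queue_pairs;
	unsigned int curr_queue_pairs;
};

/* Enable NAPI for every possible queue pair at open time, so queues
 * brought into use later by "ethtool -L" already have NAPI enabled;
 * only the first curr_queue_pairs rx queues would be refilled here. */
static void virtnet_open_sketch(struct mq_state_sketch *vi)
{
	unsigned int i;

	for (i = 0; i < vi->max_queue_pairs; i++)
		napi_enable(&vi->rq[i].napi);
}
```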
-
- 12 May 2013, 1 commit
-
-
Submitted by Amerigo Wang

Since commit 82dc3c63 ("net: introduce NAPI_POLL_WEIGHT") we warn drivers when they use a napi weight higher than NAPI_POLL_WEIGHT, but virtio_net still uses 128 by default. This patch changes its default value to NAPI_POLL_WEIGHT.

Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Cong Wang <amwang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
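A sketch of the registration call after the change, using the four-argument netif_napi_add() of this era (newer kernels dropped the weight parameter); the wrapper name is invented.

```c
#include <linux/netdevice.h>

/* Register the per-queue NAPI context with the generic default weight
 * (NAPI_POLL_WEIGHT, 64) instead of the driver-private value of 128. */
static void virtnet_register_napi_sketch(struct net_device *dev,
					 struct napi_struct *napi,
					 int (*poll)(struct napi_struct *, int))
{
	netif_napi_add(dev, napi, poll, NAPI_POLL_WEIGHT);
}
```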
-
- 29 Apr 2013, 1 commit
-
-
Submitted by Sasha Levin

Due to MQ support we may allocate a whole bunch of rx queues but never use them. With this patch we save the space used by the receive buffers until the queues are actually in use:

sh-4.2# free -h
              total       used       free     shared    buffers     cached
Mem:           490M        35M       455M         0B         0B       4.1M
-/+ buffers/cache:          31M       459M
Swap:            0B         0B         0B
sh-4.2# ethtool -L eth0 combined 8
sh-4.2# free -h
              total       used       free     shared    buffers     cached
Mem:           490M       162M       327M         0B         0B       4.1M
-/+ buffers/cache:         158M       331M
Swap:            0B         0B         0B

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 20 Apr 2013, 2 commits
-
-
Submitted by Patrick McHardy

Change the rx_{add,kill}_vid callbacks to take a protocol argument in preparation for 802.1ad support. The protocol argument used so far is always htons(ETH_P_8021Q).

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
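A sketch of the updated callback shape described above; the body is elided and the function name is invented, only the signature with the new __be16 protocol argument is the point.

```c
#include <linux/netdevice.h>
#include <linux/if_vlan.h>

/* Callers currently always pass htons(ETH_P_8021Q); 802.1ad (STAG) values
 * become possible once follow-up patches land. */
static int virtnet_vlan_rx_add_vid_sketch(struct net_device *dev,
					  __be16 proto, u16 vid)
{
	/* tell the device about the new VLAN id, e.g. via the control vq */
	return 0;
}
```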
-
Submitted by Patrick McHardy

Rename the hardware VLAN acceleration features to include "CTAG", to indicate that they only support CTAGs. Follow-up patches will introduce 802.1ad service provider tagging (STAGs) and require the distinction for hardware that does not support accelerating both.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 Apr 2013, 1 commit
-
-
Submitted by Jason Wang

There is nothing that prevents passing the device features of virtio_net to its vlan devices, so this patch simply passes them along so vlan devices benefit from the advanced features. Netperf shows better sending performance for a vlan device since TSO now works over vlan.

before:

netperf -H 192.168.5.2
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.5.2 () port 0 AF_INET : demo
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.00    4162.35

after:

netperf -H 192.168.5.2
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.5.2 () port 0 AF_INET : demo
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.00    9365.42

Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
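The core of the change is a one-line feature inheritance at probe time; a minimal sketch (helper name invented):

```c
#include <linux/netdevice.h>

/* Let VLAN devices stacked on top of virtio_net inherit the device's
 * features (TSO, checksum offload, ...), so e.g. TSO works over vlan. */
static void virtnet_set_vlan_features_sketch(struct net_device *dev)
{
	dev->vlan_features = dev->features;
}
```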
-
- 22 Mar 2013, 1 commit
-
-
Submitted by Rusty Russell

You can access it directly now, since 3.8: v3.7-rc1-13-g06ca287d ("virtio: move queue_index and num_free fields into core struct virtqueue").

Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 20 Mar 2013, 2 commits
-
-
Submitted by Rusty Russell

We never add buffers with both input and output parts, so use the new accessors.

Cc: "Michael S. Tsirkin" <mst@redhat.com>
Reviewed-by: Asias He <asias@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
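A sketch of the direction-specific accessors mentioned above: the transmit path adds an out-only scatterlist, the receive path an in-only one. The wrapper names and the void* cookies are illustrative.

```c
#include <linux/gfp.h>
#include <linux/scatterlist.h>
#include <linux/virtio.h>

static int xmit_add_sketch(struct virtqueue *vq, struct scatterlist *sg,
			   unsigned int num_sg, void *skb)
{
	/* device reads these buffers (tx) */
	return virtqueue_add_outbuf(vq, sg, num_sg, skb, GFP_ATOMIC);
}

static int recv_add_sketch(struct virtqueue *vq, struct scatterlist *sg,
			   unsigned int num_sg, void *buf)
{
	/* device writes these buffers (rx) */
	return virtqueue_add_inbuf(vq, sg, num_sg, buf, GFP_ATOMIC);
}
```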
-
Submitted by Rusty Russell

It's a bit cleaner to hand in multiple sgs rather than one big one.

Cc: "Michael S. Tsirkin" <mst@redhat.com>
Tested-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 14 Feb 2013, 1 commit
-
-
Submitted by Pravin B Shelar

Patch cef401de ("net: fix possible wrong checksum generation") fixed wrong checksum calculation, but it broke TSO by defining a new GSO type without a corresponding netdev feature; net_gso_ok() would not allow hardware checksum/segmentation offload of such packets without the feature. The following patch fixes both TSO and the wrong checksum. It uses the same logic Eric Dumazet used and introduces a new flag, SKBTX_SHARED_FRAG, set when at least one frag can be modified by the user, but keeps the flag in the skb shared info tx_flags rather than in gso_type. tx_flags is a better fit than gso_type since an skb can have shared frags without being a GSO packet. This does not tie SHARED_FRAG to GSO, so there is no need to define a netdev feature for it.

Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
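A sketch of how a producer would mark such an skb with the flag as it existed in this era (later kernels reworked these shared-info flags); the helper name is invented.

```c
#include <linux/skbuff.h>

/* Mark an skb whose page fragments may still be modified by user space
 * (e.g. vmsplice/sendfile/macvtap sources), using tx_flags rather than a
 * GSO type so non-GSO skbs can carry the hint too. */
static void mark_shared_frag_sketch(struct sk_buff *skb)
{
	skb_shinfo(skb)->tx_flags |= SKBTX_SHARED_FRAG;
}
```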
-
- 13 Feb 2013, 1 commit
-
-
Submitted by Rusty Russell

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 05 Feb 2013, 1 commit
-
-
Submitted by Joe Perches

alloc failures already get standardized OOM messages and a dump_stack().

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 28 Jan 2013, 1 commit
-
-
Submitted by Eric Dumazet

Pravin Shelar mentioned that GSO could potentially generate a wrong TX checksum if the skb has fragments that are overwritten by the user between the checksum computation and transmit. He suggested linearizing skbs, but this extra copy can be avoided for normal tcp skbs cooked by tcp_sendmsg(). This patch introduces a new SKB_GSO_SHARED_FRAG flag, set in skb_shinfo(skb)->gso_type if at least one frag can be modified by the user. Typical sources of such possible overwrites are {vm}splice(), sendfile(), and the macvtap/tun/virtio_net drivers.

Tested:

$ netperf -H 7.7.8.84
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 7.7.8.84 () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.00    3959.52

$ netperf -H 7.7.8.84 -t TCP_SENDFILE
TCP SENDFILE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 7.7.8.84 () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.00    3216.80

Performance of the SENDFILE case is impacted by the extra allocation and copy, and because we use order-0 pages, while the TCP_STREAM case uses bigger pages.

Reported-by: Pravin Shelar <pshelar@nicira.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 27 Jan 2013, 3 commits
-
-
Submitted by Wanlong Gao

Add a cpu notifier to virtio-net, so that we can reset the virtqueue affinity when cpu hotplug happens. This improves performance by re-enabling or disabling the virtqueue affinity after cpu hotplug.

Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Eric Dumazet <erdnetdev@gmail.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: virtualization@lists.linux-foundation.org
Cc: netdev@vger.kernel.org
Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
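A sketch of the hotplug hook using the notifier API of this era (pre-4.10 kernels; later replaced by the cpuhp state machine), registered at probe time with register_hotcpu_notifier(); the affinity re-plan itself is elided and the names are invented.

```c
#include <linux/cpu.h>
#include <linux/notifier.h>

static int virtnet_cpu_callback_sketch(struct notifier_block *nb,
				       unsigned long action, void *hcpu)
{
	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_ONLINE:
	case CPU_DOWN_FAILED:
	case CPU_DEAD:
		/* re-spread virtqueue affinity across the online CPUs here */
		break;
	default:
		break;
	}
	return NOTIFY_OK;
}
```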
-
Submitted by Wanlong Gao

Split the affinity cleanup out into a separate function, virtnet_clean_affinity().

Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Eric Dumazet <erdnetdev@gmail.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: virtualization@lists.linux-foundation.org
Cc: netdev@vger.kernel.org
Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Wanlong Gao

As Michael mentioned, setting affinity and selecting a queue do not work very well when CPU IDs are not consecutive, which can happen with hot unplug. Fix this bug by traversing the online CPUs and creating a per-cpu variable that maps each CPU to its preferred virtqueue.

Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Eric Dumazet <erdnetdev@gmail.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: virtualization@lists.linux-foundation.org
Cc: netdev@vger.kernel.org
Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 Jan 2013, 2 commits
-
-
Submitted by Amos Kong

Currently we write the MAC address to PCI config space byte by byte, which means there is an intermediate step where the MAC is wrong. This patch introduces a new control command to set the MAC address atomically. VIRTIO_NET_F_CTRL_MAC_ADDR is a new feature bit for compatibility.

Signed-off-by: Amos Kong <akong@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
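A sketch of the atomic update path: the whole address is handed to the device in a single control-virtqueue command instead of byte-by-byte config writes. The _sketch helpers stand in for the driver's real control-queue machinery and their signatures are assumptions.

```c
#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/scatterlist.h>
#include <linux/virtio_net.h>

/* Stub for the driver's control-vq helper: the real one queues the command
 * and busy-waits for the device's ack. */
static bool virtnet_send_command_sketch(struct net_device *dev, u8 class,
					u8 cmd, struct scatterlist *out)
{
	return true;
}

static int virtnet_set_mac_sketch(struct net_device *dev,
				  const struct sockaddr *addr)
{
	struct scatterlist sg;

	sg_init_one(&sg, addr->sa_data, ETH_ALEN);
	if (!virtnet_send_command_sketch(dev, VIRTIO_NET_CTRL_MAC,
					 VIRTIO_NET_CTRL_MAC_ADDR_SET, &sg))
		return -EINVAL;

	/* ... then commit the new address to dev->dev_addr ... */
	return 0;
}
```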
-
Submitted by Amos Kong

We want to send a vq command to set the MAC address from virtnet_set_mac_address(), so move the function accordingly. Also fix a small coding style issue.

Signed-off-by: Amos Kong <akong@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 18 Dec 2012, 4 commits
-
-
Submitted by Rusty Russell

We have simplified virtqueue_add_buf(); make that clear in the callers.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Submitted by Rusty Russell

Now that we can easily use vq->num_free to determine whether there are descriptors left in the queue, we're about to change virtqueue_add_buf() to return 0 on success. The virtio_net driver is the only one that actually uses the return value, so change it.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
-
Submitted by Michael S. Tsirkin

[Split from "correct capacity math on ring full" -- Rusty]

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
-
Submitted by Michael S. Tsirkin

The capacity math on ring full is wrong: we look at num_sg, but that might be optimistic because of indirect buffer use. The implementation also penalizes the fast path with extra memory accesses for the benefit of ring-full handling, which is the slow path. It's easy to query the ring capacity, so let's do just that. This change also makes it easier to move the vnet header for tx around, as a follow-up patch does.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
-
- 11 Dec 2012, 1 commit
-
-
Submitted by Amerigo Wang

Obviously it should check !vi->rq.

Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Cong Wang <amwang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 Dec 2012, 3 commits
-
-
Submitted by Jason Wang

This patch implements the ethtool {set|get}_channels methods for virtio-net, to allow the user to change the number of queues on demand while the device is running.

Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
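A sketch of wiring the ethtool channel hooks; the callback bodies are elided and the names are invented. From user space this is driven by e.g. `ethtool -L eth0 combined 4`.

```c
#include <linux/ethtool.h>
#include <linux/netdevice.h>

static int virtnet_set_channels_sketch(struct net_device *dev,
				       struct ethtool_channels *channels)
{
	/* validate channels->combined_count against the device maximum,
	 * then ask the device to switch (e.g. a control-vq MQ command) */
	return 0;
}

static void virtnet_get_channels_sketch(struct net_device *dev,
					struct ethtool_channels *channels)
{
	/* report the maximum and currently used queue pairs */
}

static const struct ethtool_ops virtnet_ethtool_ops_sketch = {
	.set_channels = virtnet_set_channels_sketch,
	.get_channels = virtnet_get_channels_sketch,
};
```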
-
Submitted by Jason Wang

This patch adds multiqueue (VIRTIO_NET_F_MQ) support to the virtio_net driver. A VIRTIO_NET_F_MQ capable device allows the driver to do packet transmission and reception through multiple queue pairs and to steer packets for better performance. By default one queue pair is used; the user can change the number of queue pairs via ethtool in the next patch. When multiple queue pairs are used and their number equals the number of vcpus, the driver does the following optimizations to implement per-cpu virtqueue pairs:

- select the txq based on the smp processor id;
- set an smp affinity hint to the cpu that owns the queue pair.

This can be used together with the flow steering support of the device to guarantee that the packets of a single flow are handled by the same cpu.

Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jason Wang

To support multiqueue transmitq/receiveq, the first step is to separate the queue-related structures from virtnet_info. This patch introduces send_queue and receive_queue structures and uses pointers to them as the parameters of the functions handling sending/receiving.

Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
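A trimmed sketch of the per-queue split described above; the real driver carries more fields (stats, page chains, queue names), and these definitions are illustrative only.

```c
#include <linux/netdevice.h>
#include <linux/scatterlist.h>
#include <linux/skbuff.h>
#include <linux/virtio.h>

/* Internal representation of a send virtqueue. */
struct send_queue_sketch {
	struct virtqueue *vq;                       /* tx virtqueue */
	struct scatterlist sg[MAX_SKB_FRAGS + 2];   /* header + frags */
};

/* Internal representation of a receive virtqueue. */
struct receive_queue_sketch {
	struct virtqueue *vq;                       /* rx virtqueue */
	struct napi_struct napi;                    /* per-queue NAPI */
	struct scatterlist sg[MAX_SKB_FRAGS + 2];
};
```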
-
- 04 Dec 2012, 1 commit
-
-
Submitted by Bill Pemberton

CONFIG_HOTPLUG is going away as an option. As a result the __dev* markings will be going away. Remove use of __devinit, __devexit_p, __devinitdata, __devinitconst, and __devexit.

Signed-off-by: Bill Pemberton <wfp5p@virginia.edu>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: virtualization@lists.linux-foundation.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 10 Nov 2012, 1 commit
-
-
Submitted by Amerigo Wang

These can be converted to net_*_ratelimited().

Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: David Miller <davem@davemloft.net>
Signed-off-by: Cong Wang <amwang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 21 Aug 2012, 1 commit
-
-
Submitted by Tejun Heo

system_nrt[_freezable]_wq are now spurious. Mark them deprecated and convert all users to system[_freezable]_wq. If you're cc'd and wondering what's going on: all workqueues are now non-reentrant, so there is no reason to use system_nrt[_freezable]_wq. Please use system[_freezable]_wq instead. This patch doesn't make any functional difference.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: David Airlie <airlied@linux.ie>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: David Howells <dhowells@redhat.com>
-
- 15 Aug 2012, 1 commit
-
-
Submitted by Amerigo Wang

I believe net/core/dev.c is a better place for netif_notify_peers(), because the other net event notify functions also live in that file. Also rename it to netdev_notify_peers().

Cc: David S. Miller <davem@davemloft.net>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Cong Wang <amwang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 23 Jul 2012, 1 commit
-
-
Submitted by Kevin Groeneveld

Fix a race condition in several network drivers when reading stats on 32-bit UP architectures. These drivers update their stats in a BH context and therefore should use u64_stats_fetch_begin_bh/u64_stats_fetch_retry_bh instead of u64_stats_fetch_begin/u64_stats_fetch_retry when reading the stats.

Signed-off-by: Kevin Groeneveld <kgroeneveld@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
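A sketch of the corrected read loop using the _bh fetch variants of this era (later kernels renamed these helpers); the stats struct is a trimmed illustration.

```c
#include <linux/u64_stats_sync.h>

struct queue_stats_sketch {
	struct u64_stats_sync syncp;
	u64 packets;
	u64 bytes;
};

/* Readers must use the _bh variants because the writer bumps the
 * sequence count from BH context on 32-bit UP. */
static void read_stats_sketch(const struct queue_stats_sketch *s,
			      u64 *packets, u64 *bytes)
{
	unsigned int start;

	do {
		start = u64_stats_fetch_begin_bh(&s->syncp);
		*packets = s->packets;
		*bytes   = s->bytes;
	} while (u64_stats_fetch_retry_bh(&s->syncp, start));
}
```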
-
- 30 Jun 2012, 1 commit
-
-
Submitted by Jiri Pirko

Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 28 Jun 2012, 1 commit
-
-
Submitted by Jiri Pirko

Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 11 Jun 2012, 1 commit
-
-
Submitted by Eric Dumazet

commit 3fa2a1df ("virtio-net: per cpu 64 bit stats (v2)") added a race on 32-bit arches. We must use separate syncps for the rx and tx paths, as they can run at the same time on different cpus; otherwise a sequence increment can be lost and readers spin forever.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Stephen Hemminger <shemminger@vyatta.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
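A sketch of the per-cpu stats layout after this fix; field names are illustrative.

```c
#include <linux/u64_stats_sync.h>

/* rx and tx each get their own u64_stats_sync so the two paths, which can
 * run concurrently, never share a sequence counter on 32-bit kernels. */
struct virtnet_stats_sketch {
	struct u64_stats_sync tx_syncp;
	struct u64_stats_sync rx_syncp;
	u64 tx_bytes;
	u64 tx_packets;
	u64 rx_bytes;
	u64 rx_packets;
};
```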
-