- 03 Sep, 2015 1 commit
-
-
By Paul Durrant

Xen's PV network protocol includes messages to add/remove ethernet multicast addresses to/from a filter list in the backend. This allows the frontend to request that the backend only forward multicast packets which are of interest, thus preventing unnecessary noise on the shared ring. The canonical netif header in git://xenbits.xen.org/xen.git specifies the message format (two more XEN_NETIF_EXTRA_TYPEs), so the minimal necessary changes have been pulled into include/xen/interface/io/netif.h. To prevent the frontend from extending the multicast filter list arbitrarily, a limit (XEN_NETBK_MCAST_MAX) has been set to 64 entries. This limit is not specified by the protocol and so may change in future. If the limit is reached then the next XEN_NETIF_EXTRA_TYPE_MCAST_ADD sent by the frontend will be failed with NETIF_RSP_ERROR.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
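The bounded filter-list behaviour described above can be illustrated with a small standalone C sketch. XEN_NETBK_MCAST_MAX and NETIF_RSP_ERROR are the names quoted in the commit; the struct and function below are hypothetical, not the driver's code.

    #include <stdbool.h>
    #include <string.h>

    #define XEN_NETBK_MCAST_MAX 64   /* limit from the commit text */
    #define ETH_ALEN 6

    struct mcast_filter {
        unsigned char addr[XEN_NETBK_MCAST_MAX][ETH_ALEN];
        unsigned int count;
    };

    /* Returns true on success; false once the frontend has filled all 64
     * entries, in which case the backend would answer the extra
     * XEN_NETIF_EXTRA_TYPE_MCAST_ADD request with NETIF_RSP_ERROR. */
    static bool mcast_filter_add(struct mcast_filter *f, const unsigned char *mac)
    {
        if (f->count >= XEN_NETBK_MCAST_MAX)
            return false;
        memcpy(f->addr[f->count++], mac, ETH_ALEN);
        return true;
    }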
-
- 07 Aug, 2015 1 commit
-
-
By Ross Lagerwall

Waking the dealloc thread before decrementing inflight_packets is racy because it means the thread may go to sleep before inflight_packets is decremented. If kthread_stop() has already been called, the dealloc thread may wait forever with nothing to wake it. Instead, wake the thread only after decrementing inflight_packets.

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
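In ordering terms, the fix amounts to the following shape (a minimal kernel-style sketch, not the driver's actual function; the struct and its fields are illustrative stand-ins):

    struct my_queue {                        /* illustrative stand-in */
        atomic_t inflight_packets;
        wait_queue_head_t dealloc_wq;
    };

    /* Completion path: account the packet first, then wake the dealloc
     * thread, so the thread can never go to sleep while the count it checks
     * is still stale. */
    static void skb_tx_completed(struct my_queue *queue)
    {
        atomic_dec(&queue->inflight_packets);   /* 1: drop the in-flight count */
        wake_up(&queue->dealloc_wq);            /* 2: only now wake the thread */
    }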
-
- 04 Aug, 2015 1 commit
-
-
By Ross Lagerwall

Determine if a fraglist is needed in the tx path, and allocate it if necessary before setting up the copy and map operations. Otherwise, undoing the copy and map operations is tricky. This fixes a use-after-free: if allocating the fraglist failed, the copy and map operations that had been set up were still executed, writing over the data area of a freed skb.

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
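The allocate-before-commit pattern the fix relies on looks roughly like this (a heavily hedged sketch; the struct and both queue_* helpers are hypothetical, and alloc_skb() stands in for whatever allocation can fail):

    struct my_pkt {                          /* illustrative stand-in */
        bool needs_frag_list;
        struct sk_buff *frag_list;
    };

    static int build_tx_ops(struct my_pkt *pkt)
    {
        /* Do the allocation that can fail first ... */
        if (pkt->needs_frag_list) {
            pkt->frag_list = alloc_skb(0, GFP_ATOMIC);   /* stand-in allocation */
            if (!pkt->frag_list)
                return -ENOMEM;              /* nothing queued yet, nothing to undo */
        }

        /* ... and only then queue the grant copy and map operations, which
         * are executed later as a batch and are hard to roll back. */
        queue_copy_ops(pkt);                 /* hypothetical helper */
        queue_map_ops(pkt);                  /* hypothetical helper */
        return 0;
    }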
-
- 15 Jul, 2015 1 commit
-
-
By Dan Carpenter

The > should be >=. I also added spaces around the '-' operations so the code is a little more consistent and matches the condition better.

Fixes: f53c3fe8 ('xen-netback: Introduce TX grant mapping')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 Jun, 2015 2 commits
-
-
By Julien Grall

Prefix every %x value with 0x to avoid confusion when the log also contains decimal values. Also replace some of the hexadecimal prints with decimal to make the format consistent with netfront.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: netdev@vger.kernel.org
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

-
By Julien Grall

The variables old_req_cons and ring_slots_used are assigned but never used since commit 1650d545 "xen-netback: always fully coalesce guest Rx packets".

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: netdev@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 17 Jun, 2015 1 commit
-
-
By Julien Grall

Using xen/page.h will be necessary later for using common xen page helpers. As xen/page.h already includes asm/xen/page.h, always include xen/page.h.

Signed-off-by: Julien Grall <julien.grall@citrix.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: netdev@vger.kernel.org
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
- 02 Jun, 2015 1 commit
-
-
By Ian Campbell

drivers/net/xen-netback/netback.c: In function ‘xenvif_tx_build_gops’:
drivers/net/xen-netback/netback.c:1253:8: warning: format ‘%lu’ expects argument of type ‘long unsigned int’, but argument 5 has type ‘int’ [-Wformat=]
   (txreq.offset&~PAGE_MASK) + txreq.size);
   ^

PAGE_MASK's type can vary by arch, so a cast is needed.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
----
v2: Cast to unsigned long, since PAGE_MASK can vary by arch.

Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
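To see why the cast matters, here is a small compilable example (not the driver's code; the txreq layout, the int-typed PAGE_MASK definition, and the values are assumptions chosen to mimic the situation on the affected architectures):

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_MASK (~(4096 - 1))    /* assumed int-typed mask, as on some arches */

    int main(void)
    {
        /* The ring request uses 16-bit fields, so the expression below
         * promotes to int unless it is cast. */
        struct { uint16_t offset; uint16_t size; } txreq = { 5000, 100 };

        /* Without the (unsigned long) cast, gcc -Wformat warns that the %lu
         * argument has type 'int' wherever PAGE_MASK is int; the cast makes
         * the argument unsigned long on every architecture. */
        printf("end of request: %lu\n",
               (unsigned long)(txreq.offset & ~PAGE_MASK) + txreq.size);
        return 0;
    }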
-
- 26 May, 2015 1 commit
-
-
By Shailendra Verma

The variable separate_tx_rx_irq is of type bool, so assign true instead of 1.

Signed-off-by: Shailendra Verma <shailendra.capricorn@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 Apr, 2015 1 commit
-
-
By Wei Liu

Originally Xen PV drivers only used a single-page ring to pass along information. This might limit the throughput between frontend and backend. The patch extends the Xenbus driver to support multi-page rings, which in general should improve throughput if the ring is the bottleneck. Changes to the various frontends / backends to adapt to the new interface are also included.

Affected Xen drivers:
* blkfront/back
* netfront/back
* pcifront/back
* scsifront/back
* vtpmfront

The interface is documented, as before, in xenbus_client.c.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: Bob Liu <bob.liu@oracle.com>
Cc: Konrad Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
- 21 Mar, 2015 1 commit
-
-
By Palik, Imre

With the current netback, the bandwidth limiter's parameters are only settable at vif setup time. This patch registers a watch on them, and thus makes them changeable at runtime. When the watch fires, the timer is reset. The timer's mutex is used for fencing the change.

Cc: Anthony Liguori <aliguori@amazon.com>
Signed-off-by: Imre Palik <imrep@amazon.de>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 Mar, 2015 1 commit
-
-
By David Vrabel

This fixes a performance regression introduced by 7fbb9d84 (xen-netback: release pending index before pushing Tx responses). Moving the notify outside of the spin locks means it can be delayed a long time (if the dealloc thread is descheduled or there is an interrupt or softirq).

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Zoltan Kiss <zoltan.kiss@linaro.org>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 06 Mar, 2015 2 commits
-
-
By David Vrabel

When handling a from-guest frag list, xenvif_handle_frag_list() replaces the frags before calling the destructor to clean up the original (foreign) frags. Whilst this is safe (the destructor doesn't actually use the frags), it looks odd. Reorder the function to be less confusing.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

-
By David Vrabel

Every time a VIF is destroyed, up to 256 pages may be leaked if packets with more than MAX_SKB_FRAGS frags were transmitted from the guest. Even worse, if another user of ballooned pages allocated one of these ballooned pages, it would not handle the unexpectedly >1 page count (e.g., gntdev would deadlock when unmapping a grant because the page count would never reach 1). When handling a from-guest skb with a frag list, unref the frags before releasing them so they are freed correctly when the VIF is destroyed.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 25 Feb, 2015 1 commit
-
-
By David Vrabel

If the pending indexes are released /after/ pushing the Tx response then a stale pending index may be used if a new Tx request is immediately pushed by the frontend. This may cause various WARNINGs or BUGs if the stale pending index is actually still in use. Fix this by releasing the pending index before pushing the Tx response. The full barrier for the pending ring update is not required since the Tx response push already has a suitable write barrier.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
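The required ordering can be sketched as follows (kernel-style pseudocode; the struct, both helpers, and the lock name are assumptions — only the release-then-push order and the barrier argument come from the commit):

    struct my_queue {                         /* illustrative stand-in */
        spinlock_t response_lock;
        /* pending-ring bookkeeping lives here in the real driver */
    };

    static void release_pending_index(struct my_queue *q, u16 idx); /* hypothetical */
    static void push_tx_response(struct my_queue *q, u16 idx, s8 st); /* hypothetical */

    static void tx_complete(struct my_queue *queue, u16 pending_idx, s8 status)
    {
        unsigned long flags;

        spin_lock_irqsave(&queue->response_lock, flags);

        /* 1. Recycle the pending index first ... */
        release_pending_index(queue, pending_idx);

        /* 2. ... then push the Tx response.  The push issues a write barrier,
         *    so the frontend cannot observe the response (and immediately
         *    reuse the slot) before the pending-ring update above is visible. */
        push_tx_response(queue, pending_idx, status);

        spin_unlock_irqrestore(&queue->response_lock, flags);
    }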
-
- 03 Feb, 2015 1 commit
-
-
By David Vrabel

After commit e9d8b2c2 (xen-netback: disable rogue vif in kthread context), a fatal (protocol) error would leave the guest Rx thread spinning, wasting CPU time. Commit ecf08d2d (xen-netback: reintroduce guest Rx stall detection) made this even worse by removing a cond_resched() from this path. Since a fatal error is non-recoverable, just allow the guest Rx thread to exit. This requires taking additional refs to the task so the thread exiting early is handled safely.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reported-by: Julien Grall <julien.grall@linaro.org>
Tested-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 28 Jan, 2015 2 commits
-
-
By Jennifer Herbert

Use the foreign page flag in netback to get the domid and grant ref needed for the grant copy. This significantly simplifies the netback code and makes netback work with foreign pages from other backends (e.g., blkback). This allows blkback to use iSCSI disks provided by domUs running on the same host.

Signed-off-by: Jennifer Herbert <jennifer.herbert@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>

-
By Jennifer Herbert

Ballooned pages are always used for grant maps, which means the original frame does not need to be saved in page->index nor restored after the grant unmap. This allows the workaround in netback for the conflicting use of the (unionized) page->index and page->pfmemalloc to be removed.

Signed-off-by: Jennifer Herbert <jennifer.herbert@citrix.com>
Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
- 24 Jan, 2015 1 commit
-
-
By David Vrabel

Always fully coalesce guest Rx packets into the minimum number of ring slots. Reducing the number of slots per packet has significant performance benefits when receiving off-host traffic.

Results from XenServer's performance benchmarks:

                             Baseline    Full coalesce
    Interhost VM receive     7.2 Gb/s    11 Gb/s
    Interhost aggregate      24 Gb/s     24 Gb/s
    Intrahost single stream  14 Gb/s     14 Gb/s
    Intrahost aggregate      34 Gb/s     34 Gb/s

However, this can increase the number of grant ops per packet, which decreases performance of backend (dom0) to VM traffic (by ~10%) /unless/ grant copy has been optimized for adjacent ops with the same source or destination (see "grant-table: defer releasing pages acquired in a grant copy"[1], expected in Xen 4.6).

[1] http://lists.xen.org/archives/html/xen-devel/2015-01/msg01118.html

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 19 Dec, 2014 1 commit
-
-
By David Vrabel

Commit bc96f648 (xen-netback: make feature-rx-notify mandatory) incorrectly assumed that there were no frontends in use that did not support this feature. But the frontend driver in MiniOS does not, and since this is used by (qemu) stubdoms, these stopped working.

Netback sort of works as-is in this mode, except:

- If there are no Rx requests and the internal Rx queue fills, only the drain timeout will wake the thread. The default drain timeout of 10 s would give unacceptable pauses.

- If an Rx stall was detected and the internal Rx queue is drained, then the Rx thread would never wake.

Handle these two cases (when feature-rx-notify is disabled) by:

- Reducing the drain timeout to 30 ms.

- Disabling Rx stall detection.

Reported-by: John <jw@nuclearfallout.net>
Tested-by: John <jw@nuclearfallout.net>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
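A minimal sketch of that fallback (the struct and its field names are assumptions; only the 10 s / 30 ms values and the on/off decisions come from the commit text):

    struct my_queue {                        /* illustrative stand-in */
        unsigned long rx_drain_timeout;      /* in jiffies */
        bool stall_detection;
    };

    static void apply_rx_notify_settings(struct my_queue *q, bool feature_rx_notify)
    {
        if (feature_rx_notify) {
            q->rx_drain_timeout = msecs_to_jiffies(10 * 1000);  /* 10 s  */
            q->stall_detection  = true;
        } else {
            /* Without rx-notify the drain timer is the only wake-up source,
             * so it must fire often, and stall detection is meaningless. */
            q->rx_drain_timeout = msecs_to_jiffies(30);         /* 30 ms */
            q->stall_detection  = false;
        }
    }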
-
- 07 Nov, 2014 1 commit
-
-
By Malcolm Crossley

Unconditionally pulling 128 bytes into the linear area is not required for:

- security: Every protocol demux starts with pskb_may_pull() to pull frag data into the linear area, if necessary, before looking at headers.

- performance: Netback has already grant copied up to 128 bytes from the first slot of a packet into the linear area. The first slot normally contains all the IPv4/IPv6 and TCP/UDP headers.

The unconditional pull would often copy frag data unnecessarily. This is a performance problem when running on a version of Xen where grant unmap avoids TLB flushes for pages which are not accessed. TLB flushes can now be avoided for > 99% of unmaps (it was 0% before). Grant unmap TLB flush avoidance will be available in a future version of Xen (probably 4.6).

Signed-off-by: Malcolm Crossley <malcolm.crossley@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 30 Oct, 2014 1 commit
-
-
By Zoltan Kiss

This flag is unnecessary; it came from some old code.

Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Zoltan Kiss <zoltan.kiss@linaro.org>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 26 Oct, 2014 2 commits
-
-
By David Vrabel

If a frontend is not receiving packets, it is useful to detect this and turn off the carrier so packets are dropped early instead of being queued and drained when they expire. A to-guest queue is stalled if it doesn't have enough free slots for an extended period of time (default 60 s). If at least one queue is stalled, the carrier is turned off (in the expectation that the other queues will soon stall as well). The carrier is only turned on once all queues are ready. When the frontend connects, all the queues start in the stalled state and only become ready once the frontend queues enough Rx requests.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

-
By David Vrabel

Netback needs to discard old to-guest skbs (guest Rx queue drain) and it needs to detect guest Rx stalls (to disable the carrier so packets are discarded earlier), but the current implementation is very broken:

1. The check in hard_start_xmit of the slot availability did not consider the number of packets that were already in the guest Rx queue. This could allow the queue to grow without bound: the guest stops consuming packets and the ring is allowed to fill, leaving S slots free; netback queues a packet requiring more than S slots (ensuring that the ring stays with S slots free); netback then queues packets indefinitely, provided they require S or fewer slots.

2. The Rx stall detection is not triggered in this case since the (host) Tx queue is not stopped.

3. If the Tx queue is stopped and a guest Rx interrupt occurs, netback will consider this an Rx purge event, which may result in it taking the carrier down unnecessarily. It also considers a queue with only 1 slot free as unstalled (even though the next packet might not fit in it).

The internal guest Rx queue is limited by a byte length (to 512 KiB, enough for half the ring). The (host) Tx queue is stopped and started based on this limit. This sets an upper bound on the amount of memory used by packets on the internal queue. This allows the estimation of the number of slots for an skb to be removed (it wasn't a very good estimate anyway). Instead, the guest Rx thread just waits for enough free slots for a maximum sized packet.

skbs queued on the internal queue have an 'expires' time (set to the current time plus the drain timeout). The guest Rx thread will detect when the skb at the head of the queue has expired and discard expired skbs. This sets a clear upper bound on the length of time an skb can be queued for. For a guest being destroyed, the maximum time needed to wait for all the packets it sent to be dropped is still the drain timeout (10 s) since it will not be sending new packets.

Rx stall detection is reintroduced in a later commit.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
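A sketch of the byte cap and 'expires' bookkeeping described above (kernel-style, locking omitted for brevity; none of these names are the driver's own, and the ~512 KiB cap and drain timeout are the values quoted in the commit):

    struct rx_cb {
        unsigned long expires;              /* jiffies deadline for this skb */
    };
    #define RX_CB(skb) ((struct rx_cb *)(skb)->cb)

    struct my_queue {                       /* illustrative stand-in */
        struct sk_buff_head rx_queue;
        unsigned long rx_queue_len;         /* bytes currently queued */
        unsigned long rx_queue_max;         /* byte cap, ~512 KiB     */
        unsigned long drain_timeout;        /* in jiffies             */
        struct netdev_queue *txq;
    };

    /* Enqueue: stamp a deadline and enforce the byte cap by stopping the
     * host-side Tx queue once too many bytes are waiting for the guest. */
    static void guest_rx_queue_tail(struct my_queue *q, struct sk_buff *skb)
    {
        RX_CB(skb)->expires = jiffies + q->drain_timeout;
        __skb_queue_tail(&q->rx_queue, skb);
        q->rx_queue_len += skb->len;
        if (q->rx_queue_len > q->rx_queue_max)
            netif_tx_stop_queue(q->txq);
    }

    /* Rx thread: drop whatever has sat at the head past its deadline. */
    static void guest_rx_queue_drop_expired(struct my_queue *q)
    {
        struct sk_buff *skb;

        while ((skb = skb_peek(&q->rx_queue)) &&
               time_after(jiffies, RX_CB(skb)->expires)) {
            __skb_unlink(skb, &q->rx_queue);
            q->rx_queue_len -= skb->len;
            kfree_skb(skb);
        }
    }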
-
- 14 Aug, 2014 1 commit
-
-
By Wei Liu

Reference count the number of packets in the host stack, so that we don't stop the deallocation thread too early. If we don't, we can end up with xenvif_free permanently waiting for the deallocation thread to unmap grefs.

Reported-by: Thomas Leonard <talex5@gmail.com>
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 08 Aug, 2014 1 commit
-
-
By Zoltan Kiss

In the patch called "xen-netback: Turn off the carrier if the guest is not able to receive", new branches were introduced to this if statement, risking that a queue with a non-zero id could re-enable the disabled interface.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 06 Aug, 2014 2 commits
-
-
By Zoltan Kiss

Currently, when the guest is not able to receive more packets, the qdisc layer starts a timer, and when it goes off, qdisc is started again to deliver a packet again. This is a very slow way to drain the queues, consumes unnecessary resources and slows down other guests' shutdown. This patch changes the behaviour by turning the carrier off when that timer fires, so all the packets which were stuck waiting for that vif are freed up. Instead of the rx_queue_purge bool it uses the VIF_STATUS_RX_PURGE_EVENT bit to signal the thread that either the timeout happened or an RX interrupt arrived, so the thread can check what it should do. It also disables NAPI, so the guest can't transmit, but leaves the interrupts on, so it can resurrect. Only the queues which brought down the interface can enable it again; the bit QUEUE_STATUS_RX_STALLED makes sure of that.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
Signed-off-by: David S. Miller <davem@davemloft.net>
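The bit-flag signalling can be sketched like this (the VIF_STATUS_* names and the "enum state_bit_shift" type come from this commit and the next one; the struct, the helper, and the handler shape are assumptions):

    enum state_bit_shift {                 /* bit positions in vif->status */
        VIF_STATUS_CONNECTED,
        VIF_STATUS_RX_PURGE_EVENT,
    };

    struct my_vif {                        /* illustrative stand-in */
        unsigned long status;
        wait_queue_head_t wq;
    };

    static void purge_stuck_packets(struct my_vif *vif);  /* hypothetical: frees queued skbs */

    /* Timer / interrupt side: record *why* the thread is being woken. */
    static void signal_rx_purge(struct my_vif *vif)
    {
        set_bit(VIF_STATUS_RX_PURGE_EVENT, &vif->status);
        wake_up(&vif->wq);
    }

    /* Thread side: a purge event means "free everything stuck for this vif";
     * any other wake-up is normal progress. */
    static void rx_thread_wakeup(struct my_vif *vif)
    {
        if (test_and_clear_bit(VIF_STATUS_RX_PURGE_EVENT, &vif->status))
            purge_stuck_packets(vif);
    }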
-
By Zoltan Kiss

This patch introduces a new state bit, VIF_STATUS_CONNECTED, to track whether the vif is in a connected state. Using the carrier will not work with the next patch in this series, which aims to turn the carrier temporarily off if the guest doesn't seem to be able to receive packets.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: xen-devel@lists.xenproject.org

v2:
- rename the bitshift type to "enum state_bit_shift" here, not in the next patch

Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 21 Jul, 2014 4 commits
-
-
By Zoltan Kiss

Because this pointer is incremented prematurely, the error log contains rubbish.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Reported-by: Armin Zentai <armin.zentai@ezit.hu>
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
Signed-off-by: David S. Miller <davem@davemloft.net>

-
By Zoltan Kiss

This patch makes this function aware that the first frag and the header might share the same ring slot. That can happen if the first slot is bigger than PKT_PROT_LEN. Because of this, the error path might release that slot twice or never, depending on the error scenario. xenvif_idx_release is also removed from xenvif_idx_unmap and called separately.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Reported-by: Armin Zentai <armin.zentai@ezit.hu>
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
Signed-off-by: David S. Miller <davem@davemloft.net>

-
By Zoltan Kiss

When the grant operations fail, the skb is eventually freed up, and it tries to release the frags, if there are any. For the main skb, nr_frags is set to 0 to avoid this, but on the frag_list it iterates through the frags array and tries to call put_page on the page pointer, which contains garbage at that time.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Reported-by: Armin Zentai <armin.zentai@ezit.hu>
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
Signed-off-by: David S. Miller <davem@davemloft.net>

-
By Zoltan Kiss

The error handling for skbs with a frag_list was completely wrong; it caused double unmap attempts to happen if the error was on the first skb. Move it to the right place in the loop.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Reported-by: Armin Zentai <armin.zentai@ezit.hu>
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 Jul, 2014 1 commit
-
-
By Zoltan Kiss

This patch adds debugfs capabilities to netback. There used to be a similar patch floating around for the classic kernel, but it used procfs. It is based on a very similar blkback patch. It creates xen-netback/[vifname]/io_ring_q[queueno] files; reading them outputs various ring variables, etc. Writing "kick" into one imitates an interrupt, which can be useful to check whether the ring is just stalled due to a missed interrupt.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 06 Jun, 2014 1 commit
-
-
By Zoltan Kiss

A recent commit (a02eb4 "xen-netback: worse-case estimate in xenvif_rx_action is underestimating") capped the slot estimation to MAX_SKB_FRAGS, but that triggers the next BUG_ON a few lines down, as the packet consumes more slots than estimated. This patch introduces full_coalesce on the skb callback buffer, which is used in start_new_rx_buffer() to decide whether netback needs to coalesce more aggressively. By doing that, no packet should need more than (XEN_NETIF_MAX_TX_SIZE + 1) / PAGE_SIZE data slots (excluding the optional GSO slot, which doesn't carry data and is therefore irrelevant in this case), as the provided buffers are fully utilized.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Cc: Paul Durrant <paul.durrant@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Paul Durrant <paul.durrant@gmail.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
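For concreteness, assuming 4 KiB pages and XEN_NETIF_MAX_TX_SIZE == 0xFFFF (both assumptions, not stated in the commit), that bound works out as follows:

    #include <stdio.h>

    int main(void)
    {
        unsigned int max_tx_size = 0xFFFF;   /* assumed XEN_NETIF_MAX_TX_SIZE */
        unsigned int page_size   = 4096;     /* assumed PAGE_SIZE             */

        /* (XEN_NETIF_MAX_TX_SIZE + 1) / PAGE_SIZE data slots per packet */
        printf("max data slots per packet: %u\n", (max_tx_size + 1) / page_size);
        return 0;   /* prints 16 under these assumptions */
    }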
-
- 05 Jun, 2014 2 commits
-
-
By Andrew J. Bennieston

Builds on the refactoring of the previous patch to implement multiple queues between xen-netfront and xen-netback. Writes the maximum supported number of queues into XenStore, and reads the values written by the frontend to determine how many queues to use. Ring references and event channels are read from XenStore on a per-queue basis and rings are connected accordingly. Also adds code to handle the cleanup of any already-initialised queues if the initialisation of a subsequent queue fails.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

-
By Wei Liu

In preparation for multi-queue support in xen-netback, move the queue-specific data from struct xenvif into struct xenvif_queue, and update the rest of the code to use this. Also adds loops over queues where appropriate, even though only one is configured at this point, and uses alloc_netdev_mq() and the corresponding multi-queue netif wake/start/stop functions in preparation for multiple active queues. Finally, implements a trivial queue selection function suitable for ndo_select_queue, which simply returns 0 for a single queue and uses skb_get_hash() to compute the queue index otherwise.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
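The trivial selection function described there amounts to something like the following (a sketch; the real ndo_select_queue prototype has gained and lost parameters across kernel versions, so the signature here is simplified and the function name is hypothetical):

    static u16 my_select_queue(struct net_device *dev, struct sk_buff *skb)
    {
        unsigned int num_queues = dev->real_num_tx_queues;

        if (num_queues == 1)
            return 0;

        /* Hash the flow so all packets of one connection map to one queue. */
        return skb_get_hash(skb) % num_queues;
    }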
-
- 17 May, 2014 1 commit
-
-
By David Vrabel

When the NAPI budget was not all used, xenvif_poll() would call napi_complete() /after/ enabling the interrupt. This resulted in a race between the napi_complete() and the napi_schedule() in the interrupt handler. The use of local_irq_save/restore() avoided the race iff the handler was running on the same CPU, but not if it was running on a different CPU. Fix this properly by calling napi_complete() before re-enabling interrupts (in the xenvif_napi_schedule_or_enable_irq() call).

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 16 May, 2014 1 commit
-
-
By Zoltan Kiss

The original series for reintroducing grant mapping for netback had a patch [1] to handle receiving of packets from another VIF. Grant copy on the receiving side needs the grant ref of the page to set up the op. The original patch assumed (wrongly) that the frags array hadn't changed. In the case reported by Sander, the sending guest sent a packet where the linear buffer and the first frag were under PKT_PROT_LEN (=128) bytes. xenvif_tx_submit() then pulled up the linear area to 128 bytes and ditched the first frag. The receiving side had an off-by-one problem when gathering the grant refs.

This patch fixes that by checking whether the actual frag's page pointer is the same as the page in the original frag list. It can handle any kind of change to the original frags array, like:

- removing granted frags from the array at any point
- adding local pages to the frags list anywhere
- reordering the frags

It's optimized for the most common case, when there is a 1:1 relation between the frags and the list, and works optimally when frags are removed from the end or the beginning.

[1]: 3e2234: xen-netback: Handle foreign mapped pages on the guest RX path

Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 04 Apr, 2014 2 commits
-
-
By Zoltan Kiss

There is a "%" after pending_idx instead of ":".

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

-
By Zoltan Kiss

An old inefficiency of the TX path is that we grant map the first slot, and then copy the header part to the linear area. Instead, doing a grant copy for that header straight away is more reasonable, especially because there are ongoing efforts to make Xen avoid the TLB flush after unmap when the page was not touched in Dom0. With the original approach the memcpy ruined that.

The key changes:

- the vif has a tx_copy_ops array again
- xenvif_tx_build_gops sets up the grant copy operations
- we don't have to figure out whether the header and first frag are on the same grant mapped page or not

Note, we only grant copy PKT_PROT_LEN bytes from the first slot; the rest (if any) will be on the first frag, which is grant mapped. If the first slot is smaller than PKT_PROT_LEN, then we grant copy that, and later __pskb_pull_tail will pull more from the frags (if any).

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
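The size split it describes is simply the following (a tiny sketch; the 128-byte value for PKT_PROT_LEN is quoted from the 16 May 2014 entry above, and the helper name is hypothetical):

    #define PKT_PROT_LEN 128   /* bytes grant copied into the linear area */

    /* How much of the first slot goes into the linear area by grant copy;
     * anything beyond this stays in the grant mapped frags and can be pulled
     * later with __pskb_pull_tail() if a protocol needs deeper headers. */
    static unsigned int header_copy_len(unsigned int first_slot_size)
    {
        return first_slot_size < PKT_PROT_LEN ? first_slot_size : PKT_PROT_LEN;
    }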
-