- 13 February 2021 (2 commits)

By Maciej Fijalkowski
i40e_is_non_eop had a leftover comment and an unused skb argument, which had been used for placing the skb onto rx_buf when the current buffer was a non-EOP one. This is no longer relevant, as commit e72e5659 ("i40e/i40evf: Moves skb from i40e_rx_buffer to i40e_ring") pulled the handling of incomplete skbs out of rx_bufs and up to rx_ring. Therefore, adjust the arguments that i40e_is_non_eop takes. Furthermore, since there is already a function responsible for bumping the ntc, make use of it and drop that logic from i40e_is_non_eop, so that the scope of this function is limited to what its name actually states.

Reviewed-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
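
As a rough sketch (the macro and stats names follow the driver's existing conventions and may differ from the actual patch), the narrowed helper reduces to an EOF-bit test:

    /* Sketch: with ntc bumping handled by its own helper, this function
     * only answers the question its name asks.
     */
    static bool i40e_is_non_eop(struct i40e_ring *rx_ring,
                                union i40e_rx_desc *rx_desc)
    {
            /* if we are the last buffer then there is nothing else to do */
    #define I40E_RXD_EOF BIT(I40E_RX_DESC_STATUS_EOF_SHIFT)
            if (likely(i40e_test_staterr(rx_desc, I40E_RXD_EOF)))
                    return false;

            rx_ring->rx_stats.non_eop_descs++;

            return true;
    }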

By Maciej Fijalkowski
i40e_cleanup_headers carries a comment about a check against the skb being linear or not, which is no longer relevant, so let's remove it. The same goes for i40e_can_reuse_rx_page, which references things that are no longer present there.

Reviewed-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

- 11 February 2021 (2 commits)

By Przemyslaw Patynowski
Allow the user to specify the VLAN field and add it to the flow director. Show the VLAN field in the "ethtool -n ethX" output, and handle the VLAN type and tag fields provided by the ethtool command. Filter addition was refactored by replacing static arrays with runtime dummy-packet creation, which allows the VLAN field to be specified; previously, the VLAN field was omitted.

Signed-off-by: Przemyslaw Patynowski <przemyslawx.patynowski@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

By Przemyslaw Patynowski
Flow director for IPv6 was not supported. This patch:
1) Implements support for IPv6 flow director.
2) Adds handlers for the addition of TCP6, UDP6, SCTP6 and IPv6 filters.
3) Refactors legacy code to make it more generic.
4) Adds packet templates for TCP6, UDP6, SCTP6 and IPv6.
5) Adds handling of IPv6 source and destination addresses for flow director.
6) Improves argument passing for the source and destination port in TCP6, UDP6 and SCTP6.
7) Adds handling of ethtool -n for IPv6, TCP6, UDP6 and SCTP6.
8) Uses the correct bit flag for the FLEXOFF field of the flow director data descriptor.

Without this patch, there would be no flow director support for IPv6, TCP6, UDP6 or SCTP6.

Tested, based on the X710 datasheet, by using:
ethtool -N enp133s0f0 flow-type tcp4 src-port 13 dst-port 37 user-def 0x44142 action 1
ethtool -N enp133s0f0 flow-type tcp6 src-port 13 dst-port 40 user-def 0x44142 action 2
ethtool -N enp133s0f0 flow-type udp4 src-port 20 dst-port 40 user-def 0x44142 action 3
ethtool -N enp133s0f0 flow-type udp6 src-port 25 dst-port 40 user-def 0x44142 action 4
ethtool -N enp133s0f0 flow-type sctp4 src-port 55 dst-port 65 user-def 0x44142 action 5
ethtool -N enp133s0f0 flow-type sctp6 src-port 60 dst-port 40 user-def 0x44142 action 6
ethtool -N enp133s0f0 flow-type ip4 src-ip 1.1.1.1 dst-ip 1.1.1.4 user-def 0x44142 action 7
ethtool -N enp133s0f0 flow-type ip6 src-ip fe80::3efd:feff:fe6f:bbbb dst-ip fe80::3efd:feff:fe6f:aaaa user-def 0x44142 action 8
Then send traffic from a client that matches the criteria provided to ethtool, and observe that packets are redirected to the configured queues with ethtool -S <interface>.

Signed-off-by: Przemyslaw Patynowski <przemyslawx.patynowski@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

- 5 February 2021 (1 commit)

By Alexander Lobakin
Now we can remove a bunch of identical functions from the drivers and make them use the common dev_page_is_reusable(). All {,un}likely() checks are omitted at the call sites, since the hint is already present in this helper. Also update some comments near the call sites.

Suggested-by: David Rientjes <rientjes@google.com>
Suggested-by: Jakub Kicinski <kuba@kernel.org>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
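
For reference, a sketch of what the shared check amounts to; the per-driver copies it replaces performed the same NUMA-locality and pfmemalloc test:

    static inline bool dev_page_is_reusable(const struct page *page)
    {
            /* reusable only if local to this node and not taken from
             * the emergency pfmemalloc reserves
             */
            return likely(page_to_nid(page) == numa_mem_id() &&
                          !page_is_pfmemalloc(page));
    }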

- 9 January 2021 (2 commits)

By Lorenzo Bianconi
Introduce the xdp_prepare_buff utility routine to initialize the per-descriptor xdp_buff fields (e.g. the xdp_buff data pointers). Rely on xdp_prepare_buff() in all XDP-capable drivers.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Shay Agroskin <shayagr@amazon.com>
Acked-by: Martin Habets <habetsm.xilinx@gmail.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Acked-by: Marcin Wojtas <mw@semihalf.com>
Link: https://lore.kernel.org/bpf/45f46f12295972a97da8ca01990b3e71501e9d89.1608670965.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
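
A sketch of the routine, per the description above (field names from struct xdp_buff):

    static __always_inline void
    xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start,
                     int headroom, int data_len, const bool meta_valid)
    {
            unsigned char *data = hard_start + headroom;

            xdp->data_hard_start = hard_start;
            xdp->data = data;
            xdp->data_end = data + data_len;
            xdp->data_meta = meta_valid ? data : data + 1;
    }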

By Lorenzo Bianconi
Introduce the xdp_init_buff utility routine to initialize the xdp_buff fields that are constant over NAPI iterations (e.g. frame_sz or the rxq pointer). Rely on xdp_init_buff in all XDP-capable drivers.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Shay Agroskin <shayagr@amazon.com>
Acked-by: Martin Habets <habetsm.xilinx@gmail.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Acked-by: Marcin Wojtas <mw@semihalf.com>
Link: https://lore.kernel.org/bpf/7f8329b6da1434dc2b05a77f2e800b29628a8913.1608670965.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
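
The counterpart for the constant fields, sketched:

    static __always_inline void
    xdp_init_buff(struct xdp_buff *xdp, u32 frame_sz, struct xdp_rxq_info *rxq)
    {
            /* set once per NAPI run, not per descriptor */
            xdp->frame_sz = frame_sz;
            xdp->rxq = rxq;
    }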

- 10 December 2020 (1 commit)

By Björn Töpel
The page recycle code incorrectly relied on the assumption that a page fragment could not be freed inside xdp_do_redirect(). This assumption leads to page fragments that are used by the stack/XDP redirect being reused and overwritten. To avoid this, store the page count prior to invoking xdp_do_redirect().

Longer explanation:

Intel NICs have a recycle mechanism. The main idea is that a page is split into two parts. One part is owned by the driver; one part might be owned by someone else, such as the stack.

t0: Page is allocated, and put on the Rx ring
              +---------------
used by NIC ->| upper buffer (rx_buffer)
              +---------------
              | lower buffer
              +---------------
page count == USHRT_MAX
rx_buffer->pagecnt_bias == USHRT_MAX

t1: Buffer is received, and passed to the stack (e.g.)
              +---------------
              | upper buff (skb)
              +---------------
used by NIC ->| lower buffer (rx_buffer)
              +---------------
page count == USHRT_MAX
rx_buffer->pagecnt_bias == USHRT_MAX - 1

t2: Buffer is received, and redirected
              +---------------
              | upper buff (skb)
              +---------------
used by NIC ->| lower buffer (rx_buffer)
              +---------------

Now, prior to calling xdp_do_redirect():
page count == USHRT_MAX
rx_buffer->pagecnt_bias == USHRT_MAX - 2

This means that the buffer *cannot* be flipped/reused, because the skb is still using it.

The problem arises when xdp_do_redirect() actually frees the segment. Then we get:
page count == USHRT_MAX - 1
rx_buffer->pagecnt_bias == USHRT_MAX - 2

From a recycle perspective, the buffer can now be flipped and reused, which means that the skb data area is passed to the Rx HW ring!

To work around this, the page count is stored prior to calling xdp_do_redirect(). Note that this is not optimal, since the NIC could actually reuse the "lower buffer" again. However, then we would need to track whether XDP_REDIRECT consumed the buffer or not.

Fixes: d9314c47 ("i40e: add support for XDP_REDIRECT")
Reported-and-analyzed-by: Li RongQing <lirongqing@baidu.com>
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
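
In Rx-path terms, the workaround boils down to something like the following sketch (local names are illustrative, not the exact patch):

    /* snapshot the refcount before the XDP program can free a fragment */
    int rx_buffer_pgcnt = page_count(rx_buffer->page);

    skb = i40e_run_xdp(rx_ring, &xdp);      /* may call xdp_do_redirect() */

    /* base the reuse decision on the snapshot, not on a fresh read */
    if (i40e_can_reuse_rx_page(rx_buffer, rx_buffer_pgcnt))
            i40e_reuse_rx_page(rx_ring, rx_buffer);
    else
            __page_frag_cache_drain(rx_buffer->page,
                                    rx_buffer->pagecnt_bias);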

- 1 December 2020 (1 commit)

By Björn Töpel
Add napi_id to the xdp_rxq_info structure, and make sure the XDP socket picks up the napi_id in the Rx path. The napi_id is used to find the corresponding NAPI structure for socket busy polling.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://lore.kernel.org/bpf/20201130185205.196029-7-bjorn.topel@gmail.com
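
On the driver side, this amounts to passing the NAPI id when registering the Rx queue info; a sketch using i40e-style ring and field names:

    err = xdp_rxq_info_reg(&rx_ring->xdp_rxq, rx_ring->netdev,
                           rx_ring->queue_index,
                           rx_ring->q_vector->napi.napi_id);
    if (err < 0)
            return err;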

- 18 November 2020 (1 commit)

By Magnus Karlsson
Use the new batched xsk interfaces for the Tx path in the i40e driver to improve performance. On my machine, this yields a throughput increase of 4% for the l2fwd sample app in xdpsock. If we instead just look at the Tx part, this patch set increases throughput by more than 20% for Tx.

Note that I had to explicitly unroll the inner loop to get to this performance level, by using a pragma. It is honored by both clang and gcc, and should be ignored by versions that do not support it. Using the -funroll-loops compiler command line switch on the source file resulted in loop unrolling at a higher level, which led to a performance decrease instead of an increase.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/1605525167-14450-6-git-send-email-magnus.karlsson@gmail.com
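
The pragma trick mentioned above looks roughly like this (the unroll factor and the per-packet helper are illustrative):

    /* honored by gcc and clang, ignored by compilers without support */
    #define loop_unrolled_for _Pragma("GCC unroll 4") for

    loop_unrolled_for (i = 0; i < nb_pkts; i++) {
            /* hypothetical per-packet Tx descriptor fill */
            i40e_xmit_pkt(xdp_ring, &descs[i]);
    }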

- 26 September 2020 (1 commit)

By Jesse Brandeburg
This takes care of all of the trivial W=1 fixes in the Intel Ethernet drivers, which allows developers and maintainers to build more of the networking tree with more complete warning checks.

There are three classes of kdoc warnings fixed:
- cannot understand function prototype: 'x'
- Excess function parameter 'x' description in 'y'
- Function parameter or member 'x' not described in 'y'

All of the changes were trivial comment updates on function headers. Inspired by Lee Jones' series of wireless work to do the same.

Compile tested only, and passes a simple test of:
$ git ls-files *.[ch] | egrep drivers/net/ethernet/intel | \
    xargs scripts/kernel-doc -none

Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 15 September 2020 (3 commits)

By Björn Töpel
The i40e NIC supports two flavors of HW descriptors, 16 and 32 byte. The latter obviously has room for more offloading information. However, the only fields of the 32B HW descriptor that are being used by the driver are also available in the 16B descriptor. In other words, reading and writing 32 bytes instead of 16 is a waste of bus bandwidth.

This commit starts using 16 byte descriptors instead of 32 byte descriptors. For AF_XDP, the rx_drop benchmark improved by 2%.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

By Björn Töpel
The software prefetching of HW descriptors has a negative impact on performance, so it is now removed. Performance for the rx_drop benchmark increased by 2%.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>

By Li RongQing
Originally, the refcount of the rx_buffer page was incremented here, so prefetchw was needed. After commit 1793668c ("i40e/i40evf: Update code to better handle incrementing page count"), the refcount is no longer incremented every time, so change prefetchw to prefetch. The prefetch now mainly serves page_address(), which accesses struct page only when WANT_PAGE_VIRTUAL or HASHED_PAGE_VIRTUAL is defined; otherwise it returns an address computed from the offset, so we prefetch conditionally. Jakub suggested defining prefetch_page_address in a common header.

Reported-by: kernel test robot <lkp@intel.com>
Suggested-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Li RongQing <lirongqing@baidu.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
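
The conditional helper described above can be sketched as:

    static inline void prefetch_page_address(struct page *page)
    {
            /* page_address() only dereferences struct page in these configs */
    #if defined(WANT_PAGE_VIRTUAL) || defined(HASHED_PAGE_VIRTUAL)
            prefetch(page);
    #endif
    }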

- 1 September 2020 (1 commit)

By Magnus Karlsson
Replace the explicit umem reference passed to the driver in AF_XDP zero-copy mode with the buffer pool instead. This is in preparation for extending the functionality of the zero-copy mode, so that umems can be shared between queues on the same netdev and also between netdevs. In this commit, only an umem reference has been added to the buffer pool struct, but later commits will add other entities to it: entities that differ between queue ids and netdevs even though the umem is shared between them.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/1598603189-32145-2-git-send-email-magnus.karlsson@intel.com

- 27 August 2020 (1 commit)

By Tariq Toukan
Many device drivers use the same prefetch code structure to deal with small L1 cacheline sizes. Pull this code into a helper function and call it from the drivers.

Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
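
A sketch of the resulting helper (named net_prefetch in the series):

    static inline void net_prefetch(void *p)
    {
            prefetch(p);
    #if L1_CACHE_BYTES < 128
            /* descriptor plus headers typically span 128B: pull in a
             * second cacheline on architectures with smaller lines
             */
            prefetch((u8 *)p + L1_CACHE_BYTES);
    #endif
    }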

- 2 July 2020 (3 commits)

By Magnus Karlsson
Eliminate a division in the napi_poll data path. This division is executed even though it is only needed in the rare case when there are not enough interrupt lines, so that they have to be shared between queue pairs. Instead, just test for this case and only execute the division if needed. The code has been lifted from the ice driver.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
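
The lifted pattern looks roughly like this (i40e q_vector naming assumed):

    if (unlikely(q_vector->num_ringpairs > 1))
            /* split the budget fairly, but never below 1, or we
             * would exit polling early
             */
            budget_per_ring = max(budget / q_vector->num_ringpairs, 1);
    else
            /* only one Rx ring here: give it the whole budget */
            budget_per_ring = budget;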

By Magnus Karlsson
Improve the performance of the AF_XDP zero-copy Tx completion path. When there are no XDP buffers being sent using XDP_TX or XDP_REDIRECT, we do not have to go through the SW ring to clean up any entries, since the AF_XDP path does not use it. In these cases, just fast-forward the next-to-use counter and skip going through the SW ring. The limit on the maximum number of entries to complete is also removed, since the algorithm is now O(1). To simplify the code path, the maximum number of entries to complete for the XDP path is therefore also increased from 256 to 512 (the default number of Tx HW descriptors). This should be fine, since the completion in the XDP path is faster than in the SKB path, which has 256 as its maximum.

This patch provides around 4% throughput improvement for the l2fwd application in xdpsock on my machine.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Reviewed-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
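
A sketch of the fast path (field and label names illustrative):

    /* how far the HW has progressed since we last cleaned */
    completed_frames = head_idx - tx_ring->next_to_clean;

    if (likely(!tx_ring->xdp_tx_active)) {
            /* no XDP_TX/REDIRECT frames outstanding: everything
             * completed belongs to AF_XDP, so skip the SW ring
             */
            xsk_frames = completed_frames;
            goto skip;
    }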

By Jeff Kirsher
Convert all of the remaining "fall through" code comments to the newer fallthrough; keyword.

Suggested-by: Joe Perches <joe@perches.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
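
The shape of the conversion, on a hypothetical switch:

    switch (state) {
    case STATE_FIRST:
            do_first();
            fallthrough;    /* replaces a "fall through" comment and is
                             * checked by -Wimplicit-fallthrough
                             */
    case STATE_SECOND:
            do_second();
            break;
    default:
            break;
    }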

- 2 June 2020 (1 commit)

By Lorenzo Bianconi
In order to use the standard 'xdp' prefix, rename the convert_to_xdp_frame utility routine to xdp_convert_buff_to_frame and replace all occurrences.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/bpf/6344f739be0d1a08ab2b9607584c4d5478c8c083.1590698295.git.lorenzo@kernel.org
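
At a call site, the rename looks like this (the error handling shown is illustrative):

    /* before: xdpf = convert_to_xdp_frame(xdp); */
    struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);

    if (unlikely(!xdpf))
            return I40E_XDP_CONSUMED;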

- 22 May 2020 (2 commits)

By Björn Töpel
Continuing the path to support MEM_TYPE_XSK_BUFF_POOL, the AF_XDP zero-copy and sk_buff rx_bi rings are now separate. Functions to properly allocate the different rings are added as well.

v3->v4: Made i40e_fd_handle_status() static. (kbuild test robot)
v4->v5: Fixed kdoc for i40e_clean_programming_status(). (Jakub)

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Cc: intel-wired-lan@lists.osuosl.org
Link: https://lore.kernel.org/bpf/20200520192103.355233-8-bjorn.topel@gmail.com

By Björn Töpel
As a first step in migrating i40e to the new MEM_TYPE_XSK_BUFF_POOL APIs, code that accesses the rx_bi (SW/shadow ring) is refactored to use an accessor function.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Cc: intel-wired-lan@lists.osuosl.org
Link: https://lore.kernel.org/bpf/20200520192103.355233-7-bjorn.topel@gmail.com
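
The accessor is small; a sketch:

    static struct i40e_rx_buffer *i40e_rx_bi(struct i40e_ring *rx_ring, u32 idx)
    {
            /* one accessor instead of open-coded rx_bi indexing, so the
             * backing storage can change underneath later
             */
            return &rx_ring->rx_bi[idx];
    }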

- 15 May 2020 (1 commit)

By Jesper Dangaard Brouer
This driver uses different memory models depending on PAGE_SIZE at compile time. For PAGE_SIZE 4K it uses page splitting, meaning that for a normal MTU the frame size is 2048 bytes (with 192 bytes of headroom). For larger MTUs the driver still uses page splitting, by allocating order-1 pages (8192 bytes) for RX frames. For PAGE_SIZE larger than 4K, the driver instead advances its rx_buffer->page_offset by the frame size "truesize".

For XDP frame size calculations, this means that in the PAGE_SIZE-larger-than-4K mode the frame_sz changes on a per-packet basis. For the page-split 4K PAGE_SIZE mode, xdp.frame_sz is more constant and can be updated once outside the main NAPI loop.

The default setting in the driver uses build_skb(), which provides the necessary headroom and tailroom for XDP-redirect in the RX frame (in both modes).

There is one complication, which is legacy-rx mode (configurable via ethtool priv-flags). This mode has zero headroom, while headroom is a requirement for XDP-redirect to work. The conversion to xdp_frame (convert_to_xdp_frame) will detect this insufficient space, and the xdp_do_redirect() call will fail. This is deemed acceptable, as it allows other XDP actions to still work in legacy mode. In legacy mode with a larger PAGE_SIZE, due to the lacking tailroom, we also accept that xdp_adjust_tail shrink doesn't work.

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Cc: intel-wired-lan@lists.osuosl.org
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Alexander Duyck <alexander.duyck@gmail.com>
Link: https://lore.kernel.org/bpf/158945346494.97035.12809400414566061815.stgit@firesoul

- 30 October 2019 (1 commit)

By Josh Hunt
Based on a series from Alexander Duyck, this change adds UDP segmentation offload support to the i40e driver.

CC: Alexander Duyck <alexander.h.duyck@intel.com>
CC: Willem de Bruijn <willemb@google.com>
Signed-off-by: Josh Hunt <johunt@akamai.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
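
In practice, this kind of change advertises the offload via the feature bits; a hedged sketch (the flag name comes from the generic USO work, not from this log):

    /* assumed from the generic USO series: advertise UDP GSO support */
    netdev->features |= NETIF_F_GSO_UDP_L4;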

- 31 July 2019 (1 commit)

By Jonathan Lemon
Use accessor functions for the skb fragment's page_offset instead of direct references, in preparation for the bvec conversion.

Signed-off-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
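
For example (accessor names from this series):

    /* before: frag->page_offset and frag->page_offset += delta */
    unsigned int off = skb_frag_off(frag);

    skb_frag_off_add(frag, delta);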

- 23 July 2019 (1 commit)

By Matthew Wilcox (Oracle)
In preparation for unifying the skb_frag and bio_vec, use the fine-grained accessors that already exist, and use skb_frag_t instead of struct skb_frag_struct.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 15 June 2019 (1 commit)

By Mitch Williams
The counter variable in i40e_clean_tx_irq starts out negative and climbs to 0, so it should not be defined as a u16. This was working by accident, due to the fact that a u16 overflows and underflows predictably. Replace the u16 with an int, which is signed and can handle the negativity.

Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
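
The loop shape in question, sketched (not the driver's exact loop):

    int i = tx_ring->next_to_clean;     /* was: u16 i */

    i -= tx_ring->count;                /* deliberately biased negative */
    while (i < 0) {
            /* clean the descriptor at tx_ring->tx_bi[i + tx_ring->count] */
            i++;
    }
    tx_ring->next_to_clean = i + tx_ring->count;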

- 24 April 2019 (1 commit)

By Stanislav Fomichev
Update all users of eth_get_headlen to pass the network device, fetch the network namespace from it, and pass it down to the flow dissector. This commit is a noop until an administrator inserts a BPF flow dissector program.

Cc: Maxim Krasnyansky <maxk@qti.qualcomm.com>
Cc: Saeed Mahameed <saeedm@mellanox.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: intel-wired-lan@lists.osuosl.org
Cc: Yisen Zhuang <yisen.zhuang@huawei.com>
Cc: Salil Mehta <salil.mehta@huawei.com>
Cc: Michael Chan <michael.chan@broadcom.com>
Cc: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: Stanislav Fomichev <sdf@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
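
At the i40e call site, the change is roughly:

    /* before: eth_get_headlen(xdp->data, I40E_RX_HDR_SIZE) */
    headlen = eth_get_headlen(skb->dev, xdp->data, I40E_RX_HDR_SIZE);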

- 8 April 2019 (1 commit)

By Will Deacon
mmiowb() is now implied by spin_unlock() on architectures that require it, so there is no reason to call it from driver code. This patch was generated using coccinelle:

@mmiowb@
@@
- mmiowb();

and invoked as:

$ for d in drivers include/linux/qed sound; do \
    spatch --include-headers --sp-file mmiowb.cocci --dir $d --in-place; done

NOTE: mmiowb() has only ever guaranteed ordering in conjunction with spin_unlock(). However, pairing each mmiowb() removal in this patch with the corresponding call to spin_unlock() is not at all trivial, so there is a small chance that this change may regress any drivers incorrectly relying on mmiowb() to order MMIO writes between CPUs using lock-free synchronisation. If you've ended up bisecting to this commit, you can reintroduce the mmiowb() calls using wmb() instead, which should restore the old behaviour on all architectures other than some esoteric ia64 systems.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

- 2 April 2019 (1 commit)

By Florian Westphal
There are two reasons for this. First, the xmit_more flag conceptually doesn't fit into the skb, as xmit_more is not a property related to the skb; it is only a hint to the driver that the stack is about to transmit another packet immediately. Second, it was only done this way to avoid passing another argument to ndo_start_xmit().

We can place xmit_more in the softnet data, next to the device recursion. The recursion counter is already written to on each transmit; the "more" indicator is placed right next to it. Drivers can use the netdev_xmit_more() helper instead of skb->xmit_more to check the "more packets coming" hint. skb->xmit_more is retained (but always 0) so as not to cause build breakage. This change takes care of the simple s/skb->xmit_more/netdev_xmit_more()/ conversions; the remaining drivers are converted in the next patches.

Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
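
A typical converted tail-bump then reads (i40e-style names):

    /* kick HW only when the stack has no more packets queued behind us */
    if (netif_xmit_stopped(txring_txq(tx_ring)) || !netdev_xmit_more())
            writel(i, tx_ring->tail);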

- 22 February 2019 (1 commit)

By Björn Töpel
When the driver clears the XDP xmit ring due to re-configuration or teardown, in-progress ndo_xdp_xmit calls must be taken into consideration. The ndo_xdp_xmit function is typically called from a NAPI context that the driver does not control. Therefore, we must be careful not to clear the XDP ring while such a call is ongoing. This patch adds a synchronize_rcu() to wait for napi(s) (preempt-disable regions and softirqs) prior to clearing the queue. Further, the __I40E_CONFIG_BUSY flag is checked in the ndo_xdp_xmit implementation, to avoid touching the XDP xmit queue during re-configuration.

Fixes: d9314c47 ("i40e: add support for XDP_REDIRECT")
Fixes: 123cecd4 ("i40e: added queue pair disable/enable functions")
Reported-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
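
The two guards, sketched:

    /* teardown/reconfig side: let in-flight ndo_xdp_xmit() drain first */
    synchronize_rcu();
    i40e_clean_tx_ring(vsi->xdp_rings[i]);

    /* ndo_xdp_xmit side: refuse while the device is being reconfigured */
    if (test_bit(__I40E_CONFIG_BUSY, pf->state))
            return -ENXIO;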

- 13 December 2018 (2 commits)

By Michał Mirosław
Move the rx_ptype extraction to i40e_process_skb_fields() to avoid duplicating the code.

Signed-off-by: Michał Mirosław <michal.miroslaw@atendesoftware.pl>
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

By Michał Mirosław
This fixes two bugs in hardware VLAN offload:
1. VLAN.TCI == 0 was being dropped.
2. There was a race between disabling the VLAN RX feature in hardware and processing the RX queue, where packets processed in this window could have their VLAN information dropped.

The fix moves the VLAN handling into i40e_process_skb_fields() to save on duplicated code. i40e_receive_skb() becomes trivial and so is removed.

Signed-off-by: Michał Mirosław <michal.miroslaw@atendesoftware.pl>
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

- 22 November 2018 (1 commit)

By Jesse Brandeburg
While reviewing code, I noticed that Eric Dumazet recommends that drivers check the return code of napi_complete_done, and use that to decide whether to enable interrupts when exiting poll. One of the Intel drivers was already fixed (ixgbe).

Upon looking at the Intel drivers as a whole, we are handling our polling and NAPI exit in a few different ways, based on whether we have multiqueue and whether we have Tx cleanup included. Several drivers had the bug of exiting NAPI with return 0, which appears to mess up the accounting in the stack.

Consolidate all the NAPI routines to the best known way of exiting, and make them mostly look like each other:
1) check the return code of napi_complete_done to control interrupt enable
2) return the actual amount of work done
3) return budget immediately if NAPI needs to poll again

Tested the changes on e1000e with a high interrupt rate set, and it shows about an 8% reduction in CPU utilization when busy polling, because we aren't re-enabling interrupts when we're about to be polled.

Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
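
The consolidated exit, sketched with i40e names:

    if (work_done >= budget)
            return budget;                  /* (3) poll again */

    /* (1) only re-arm interrupts if NAPI really completed */
    if (likely(napi_complete_done(napi, work_done)))
            i40e_update_enable_itr(vsi, q_vector);

    return min(work_done, budget - 1);      /* (2) actual work done */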

- 15 November 2018 (1 commit)

By Jan Sokolowski
Use a local variable to make the code a bit more readable.

Signed-off-by: Jan Sokolowski <jan.sokolowski@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

- 8 November 2018 (1 commit)

By Jacob Keller
Many of the Intel Ethernet drivers call skb_tx_timestamp() earlier than necessary. Move the calls to this function to the latest point possible, just prior to notifying hardware of the new Tx packet when we bump the tail register.

This affects i40e, iavf, igb, igc, and ixgbe. The e100, e1000, e1000e, fm10k, and ice drivers already call the skb_tx_timestamp() function just prior to indicating the Tx packet to hardware, so they do not need to be changed.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
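
The resulting ordering, sketched:

    skb_tx_timestamp(skb);          /* as late as possible... */

    /* ...immediately before notifying HW via the tail register */
    writel(i, tx_ring->tail);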

- 26 October 2018 (1 commit)

By Eric Dumazet
Drivers using the generic NAPI interface no longer need to include <net/busy_poll.h>, since busy polling was moved to the core networking stack long ago. See commit 79e7fff4 ("net: remove support for per driver ndo_busy_poll()") for reference.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 26 September 2018 (2 commits)

By Björn Töpel
Outstanding Rx descriptors are temporarily stored on a stash/reuse queue. When/if the HW rings come up again, entries from the stash are used to re-populate the ring.

The latter required some restructuring of the allocation scheme for the AF_XDP zero-copy implementation. There is now a fast and a slow allocation. The "fast allocation" is used from the fast path and obtains free buffers from the fill ring and the internal recycle mechanism. The "slow allocation" is only used in ring setup, and obtains buffers from the fill ring and the stash (if any).

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

By Björn Töpel
When the zero-copy enabled XDP Tx ring is torn down due to configuration changes, outstanding frames on the hardware descriptor ring are queued on the completion ring. The completion ring has a back-pressure mechanism that guarantees that there is sufficient space on the ring.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

- 30 August 2018 (1 commit)

By Magnus Karlsson
This patch adds zero-copy Tx support for AF_XDP sockets. It implements the ndo_xsk_async_xmit netdev ndo and performs all of the Tx logic from a NAPI context. This means pulling egress packets from the Tx ring, placing the frames on the NIC HW descriptor ring, and completing sent frames back to the application via the completion ring.

The regular XDP Tx ring is used for AF_XDP as well. The rationale for this is as follows: XDP_REDIRECT guarantees mutual exclusion between different NAPI contexts based on CPU id. In other words, a netdev can XDP_REDIRECT to another netdev with a different NAPI context, since the operation is bound to a specific core and each core has its own hardware ring. As the AF_XDP Tx action runs in the same NAPI context and uses the same ring, it is protected from XDP_REDIRECT actions by the exact same mechanism.

As with AF_XDP Rx, all AF_XDP Tx specific functions are added to i40e_xsk.c.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>