- 15 October 2021, 5 commits

Committed by Maciej Fijalkowski

Under rare circumstances there might be a situation where the requirement of having an XDP Tx queue per CPU cannot be fulfilled and some of the Tx resources have to be shared between CPUs. This yields a need for placing accesses to xdp_ring inside a critical section protected by a spinlock. These accesses happen to be in the hot path, so let's introduce a static branch that will be triggered from the control plane when the driver cannot provide a Tx queue dedicated to XDP on each CPU.

Currently, the design that has been picked is to allow any number of XDP Tx queues that is at least half the count of CPUs that the platform has. For a lower number, the driver will bail out with a response to the user that there were not enough Tx resources to allow configuring XDP. The sharing of rings is signalled via static branch enablement, which in turn indicates that the lock for xdp_ring accesses needs to be taken in the hot path.

The approach based on a static branch has no impact on the performance of the non-fallback path. One thing that needs to be mentioned is the fact that the static branch will act as a global driver switch, meaning that if one PF runs out of Tx resources, then the other PFs that the ice driver is servicing will suffer. However, given that the HW the ice driver handles has 1024 Tx queues per PF, this is currently an unlikely scenario.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
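A minimal sketch of the scheme described above: a static key armed from the control plane when rings must be shared, then checked around descriptor production in the hot path. Identifier names (ice_xdp_locking_key, tx_lock, the avail_txq local) are assumptions taken from the commit description, not verified against the tree.

#include <linux/jump_label.h>
#include <linux/spinlock.h>

DEFINE_STATIC_KEY_FALSE(ice_xdp_locking_key);

/* control plane: too few rings even for sharing -> bail out; enough to
 * share but not one per CPU -> arm the key
 */
if (avail_txq < num_possible_cpus() / 2)
	return -ENOMEM;
if (avail_txq < num_possible_cpus())
	static_branch_inc(&ice_xdp_locking_key);

/* hot path: the lock is taken only when the branch is armed, so the
 * dedicated-queue case keeps running lock-free
 */
if (static_branch_unlikely(&ice_xdp_locking_key))
	spin_lock(&xdp_ring->tx_lock);
ice_xmit_xdp_ring(data, size, xdp_ring);
if (static_branch_unlikely(&ice_xdp_locking_key))
	spin_unlock(&xdp_ring->tx_lock);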
Committed by Maciej Fijalkowski

Optimize Tx descriptor cleaning for XDP. The current approach doesn't really scale and chokes when multiple flows are handled. Introduce two ring fields, @next_dd and @next_rs, that will keep track of the descriptor that should be looked at when the need for cleaning arises and the descriptor that should have the RS bit set, respectively. Note that at this point the threshold is a constant (32), but it is something that we could make configurable.

The first thing is to get away from setting the RS bit on each descriptor. Let's do this only once NTU is higher than the current @next_rs value. In such a case, grab tx_desc[next_rs], set the RS bit in the descriptor and advance @next_rs by 32. The second thing is to clean the Tx ring only when there are fewer than 32 free entries. For that case, look up tx_desc[next_dd] for the DD bit. This bit is written back by HW to let the driver know that the xmit was successful. It will happen only for those descriptors that had the RS bit set. Clean only 32 descriptors and advance @next_dd by 32.

The actual cleaning routine is moved from ice_napi_poll() down to ice_xmit_xdp_ring(). It is safe to do so as the XDP ring will not get any SKBs in it that would rely on interrupts for the cleaning. A nice side effect is that for the rare case of the Tx fallback path (which the next patch is going to introduce) we don't have to trigger the SW irq to clean the ring.

With those two concepts, the ring is kept almost full, but it is guaranteed that the driver will be able to produce Tx descriptors. This approach seems to work out well even though the Tx descriptors are produced one by one. The test was conducted with the ice HW bombarded with packets from a HW generator configured to generate 30 flows.

The xdp2 sample yields the following results:
<snip>
proto 17: 79973066 pkt/s
proto 17: 80018911 pkt/s
proto 17: 80004654 pkt/s
proto 17: 79992395 pkt/s
proto 17: 79975162 pkt/s
proto 17: 79955054 pkt/s
proto 17: 79869168 pkt/s
proto 17: 79823947 pkt/s
proto 17: 79636971 pkt/s
</snip>

As that sample reports the Rx'ed frames, let's look at the sar output. It says that what we Rx'ed we do actually Tx, with no noticeable drops:

Average: IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
Average: ens4f1 79842324.00 79842310.40 4678261.17 4678260.38 0.00 0.00 0.00 38.32

with tx_busy staying calm. When compared to the state before:

Average: IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
Average: ens4f1 90919711.60 42233822.60 5327326.85 2474638.04 0.00 0.00 0.00 43.64

it can be observed that the amount of txpck/s is almost doubled, meaning that performance improved by around 90%. All of this is due to the drops in the driver; previously the tx_busy stat was bumped at a 7 Mpps rate.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
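A hedged sketch of the batched RS/DD scheme: the ICE_TX_DESC() accessor and the RS/DD command bits are the driver's existing definitions, while the threshold constant's exact name is an assumption.

#define ICE_TX_THRESH 32

/* producer side: stamp RS on one descriptor per 32 instead of on all */
if (xdp_ring->next_to_use > xdp_ring->next_rs) {
	tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs);
	tx_desc->cmd_type_offset_bsz |=
		cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S);
	xdp_ring->next_rs += ICE_TX_THRESH;
}

/* cleaning side: only when fewer than 32 entries are free, and only the
 * batch whose DD writeback has completed
 */
tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_dd);
if (tx_desc->cmd_type_offset_bsz &
    cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE)) {
	/* ... unmap and free the 32 buffers ending at next_dd ... */
	xdp_ring->next_dd += ICE_TX_THRESH;
}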
Committed by Maciej Fijalkowski

With the rings being split, it is now convenient to introduce a pointer to the XDP ring within the Rx ring. For XDP_TX workloads this means that the xdp_rings array access, which was executed per each processed frame, will be skipped.

Also, read the XDP prog once per NAPI and, if a prog is present, set up the local xdp_ring pointer. Reading the prog a single time was discussed in [1] with some concern raised by Toke around dispatcher handling and the need to go through the RCU grace period in the ndo_bpf driver callback, but ice currently tears down the NAPI instances regardless of the prog's presence on the VSI.

Although the pointer to the XDP ring introduced to the Rx ring makes things a lot slimmer/simpler, I still feel that a single prog read per NAPI lifetime is beneficial. A further patch that will introduce the fallback path will also profit from that, as the xdp_ring pointer will be set during the XDP rings setup.

[1]: https://lore.kernel.org/bpf/87k0oseo6e.fsf@toke.dk/

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
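A sketch of the per-NAPI hoisting described above, assuming the field names implied by the commit text (rx_ring->xdp_prog, rx_ring->xdp_ring):

/* once per NAPI poll, not per frame */
struct bpf_prog *xdp_prog = READ_ONCE(rx_ring->xdp_prog);
struct ice_tx_ring *xdp_ring = NULL;

if (xdp_prog)
	xdp_ring = rx_ring->xdp_ring;

while (likely(total_rx_pkts < budget)) {
	/* ... XDP_TX verdicts go straight to xdp_ring here instead of
	 * indexing vsi->xdp_rings[smp_processor_id()] per packet ...
	 */
}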
Committed by Maciej Fijalkowski

xdp_frame is not needed for the XDP_TX data path in the ice driver's case. For this data path, cleaning of the sent descriptor will not happen anywhere outside of the driver, which means that the information about the underlying memory model carried via xdp_frame will not be used. Therefore, this conversion can simply be dropped, which relieves the CPU a bit.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Committed by Maciej Fijalkowski

While it was convenient to have a generic ring structure that served both the Tx and Rx sides, the next commits are going to introduce several Tx-specific fields, so in order to avoid hurting the Rx side, let's pull the Tx ring out onto new ice_tx_ring and ice_rx_ring structs. The Rx ring could be handled by the old ice_ring, which would reduce the code churn within this patch, but that would make things asymmetric.

Make a union out of the ring container within ice_q_vector so that it is possible to iterate over the newly introduced ice_tx_ring. Remove @size, as it is only accessed from the control path and can be calculated pretty easily. Change the definitions of ice_update_ring_stats and ice_fetch_u64_stats_per_ring so that they are ring agnostic and can be used for both Rx and Tx rings.

The sizes of the Rx and Tx ring structs are 256 and 192 bytes, respectively. In the Rx ring, xdp_rxq_info occupies its own cacheline, so that is the major difference now.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
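The anonymous union the commit describes, reduced to the pointers involved; this is a sketch, and the iteration helper name is an assumption.

struct ice_ring_container {
	union {
		struct ice_rx_ring *rx_ring;
		struct ice_tx_ring *tx_ring;
	};
	/* ... ITR and stats fields shared by both ring kinds ... */
};

/* iteration pattern: walk Tx rings through the container */
ice_for_each_tx_ring(tx_ring, q_vector->tx)
	ice_clean_tx_irq(tx_ring, budget);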
- 08 October 2021, 1 commit

Committed by Grzegorz Nitka

Slow path means allowing a packet to go from the uplink to the representor and from the representor to the correct VF on the Rx side, and from the VF to the representor and on to the uplink on the Tx side.

To accomplish this, the driver has to set the correct Tx descriptor. When a packet is sent from the representor to a VF, the destination should be set to the VF VSI. When a packet is sent from the uplink port, the destination should be the uplink, to bypass the switch infrastructure and send the packet outside. On the Rx side, the driver should check the source VSI field from the Rx descriptor and, based on that, forward the packet to the correct netdev. To allow this, there is a target netdevs table in the control plane VSI struct.

Co-developed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Signed-off-by: Grzegorz Nitka <grzegorz.nitka@intel.com>
Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
- 29 September 2021, 1 commit

Committed by Dave Ertman

Implement code to handle submission of APP TLVs containing DSCP to TC mapping. The first such mapping received on an interface will cause that PF to switch to L3 DSCP QoS mode, apply the default config for that mode, and apply the received mapping.

Only one such mapping will be allowed per DSCP value, and when the last DSCP mapping is deleted, the PF will switch back into L2 VLAN QoS mode, applying the appropriate default QoS settings.

L3 DSCP QoS mode will only be allowed in SW DCBx mode, in other words, when the FW LLDP engine is disabled. Commands that break this mutual exclusivity will be blocked.

Co-developed-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
- 25 June 2021, 2 commits

Committed by Jesse Brandeburg

This patch is modeled after one by Scott Peterson for i40e. Add tracepoints to the driver via a new file, ice_trace.h, and some new trace calls added in interesting places in the driver. Add some tracing for DIMLIB to help debug interrupt moderation problems.

Performance should not be affected, and this can be very useful for debugging and for adding new trace events to paths in the future. Note that eBPF programs can attach to these events, and perf can count them, since we're attaching to the events subsystem in the kernel.

Co-developed-by: Ben Shelton <benjamin.h.shelton@intel.com>
Signed-off-by: Ben Shelton <benjamin.h.shelton@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
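For reference, a minimal event in the standard kernel tracepoint style such a header uses; the event name and fields here are illustrative, not the actual contents of ice_trace.h.

#undef TRACE_SYSTEM
#define TRACE_SYSTEM ice

#include <linux/tracepoint.h>

TRACE_EVENT(ice_napi_poll_sketch,
	    TP_PROTO(struct napi_struct *napi, int budget),
	    TP_ARGS(napi, budget),
	    TP_STRUCT__entry(__field(struct napi_struct *, napi)
			     __field(int, budget)),
	    TP_fast_assign(__entry->napi = napi;
			   __entry->budget = budget;),
	    TP_printk("napi %p budget %d", __entry->napi, __entry->budget)
);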
Committed by Toke Høiland-Jørgensen

The Intel drivers all have rcu_read_lock()/rcu_read_unlock() pairs around XDP program invocations. However, the actual lifetime of the objects referred to by the XDP program invocation is longer, all the way through to the call to xdp_do_flush(), making the scope of the rcu_read_lock() too small. This turns out to be harmless because it all happens in a single NAPI poll cycle (and thus under local_bh_disable()), but it makes the rcu_read_lock() misleading.

Rather than extend the scope of the rcu_read_lock(), just get rid of it entirely. With the addition of RCU annotations to the XDP_REDIRECT map types that take bh execution into account, lockdep even understands this to be safe, so there's really no reason to keep it around.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com> # i40e
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Tony Nguyen <anthony.l.nguyen@intel.com>
Cc: intel-wired-lan@lists.osuosl.org
Link: https://lore.kernel.org/bpf/20210624160609.292325-12-toke@redhat.com
- 18 June 2021, 1 commit

Committed by Jesse Brandeburg

The hardware reports the type of the hash used for RSS as a PTYPE field in the receive descriptor. Use this value to set the skb packet hash type by extending the hash type table to cover all 10 bits of possible values (requiring some variables to be changed from u8 to u16), and then use that table to convert to one of the possible values in enum pkt_hash_types.

While we're here, remove the unused ptype struct value, which makes table init easier for the zero entries, and use a ranged initializer to remove a bunch of code (works with gcc and clang). Without this change, the kernel will recalculate the hash in software, which can consume extra CPU cycles.

Co-developed-by: Kiran Patil <kiran.patil@intel.com>
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
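A sketch of the table shape with illustrative entries: the GCC/Clang ranged designated initializer is what keeps the 1024-entry (10-bit) table short, and skb_set_hash() consumes the result.

static const enum pkt_hash_types ice_ptype_to_htype[1024] = {
	[0 ... 1023] = PKT_HASH_TYPE_NONE,	/* default for zero entries */
	/* ... specific PTYPE ranges are then overridden with
	 * PKT_HASH_TYPE_L2/L3/L4 per the hardware decode table ...
	 */
};

/* Rx hot path: hand the hardware hash and its type to the stack */
skb_set_hash(skb, hash, ice_ptype_to_htype[ptype]);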
- 11 June 2021, 1 commit

Committed by Jacob Keller

Add support for enabling Tx timestamp requests for outgoing packets on E810 devices. The ice hardware can support multiple outstanding Tx timestamp requests. When sending a descriptor to hardware, a Tx timestamp request is made by setting a request bit and assigning an index that represents which Tx timestamp index to store the timestamp in.

Hardware makes no effort to synchronize the index use, so it is up to software to ensure that Tx timestamp indexes are not re-used before the timestamp is reported back. To do this, introduce a Tx timestamp tracker which will keep track of currently in-use indexes. In the hot path, if a packet has a timestamp request, an index will be requested from the tracker. Unfortunately, this does require a lock, as the indexes are shared across all queues on a PHY. There are not enough indexes to reliably assign only one to each queue. For the E810 devices, the timestamp indexes are not shared across PHYs, so each port can have its own tracking.

Once hardware captures a timestamp, an interrupt is fired. In this interrupt, trigger a new work item that will figure out which timestamp was completed and report the timestamp back to the stack. This function loops through the Tx timestamp indexes and checks whether there is now a valid timestamp. If so, it clears the PHY timestamp indication in the PHY memory, locks and removes the SKB and bit in the tracker, then reports the timestamp to the stack.

It is possible in some cases that a timestamp request will be initiated but never completed. This might occur if the packet is dropped by software or hardware before it reaches the PHY. Add a task to the periodic work function that will check whether a timestamp request is more than a few seconds old. If so, the timestamp index is cleared in the PHY and the SKB is released.

Just as with Rx timestamps, the Tx timestamps are only 40 bits wide and use the same overall logic for extending to 64 bits of nanoseconds. With this change, E810 devices should be able to perform basic PTP functionality. Future changes will extend the support to cover the E822-based devices.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
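A hedged sketch of the in-use index tracker described above; the struct and function names here are illustrative, and the real driver keeps this per-port state elsewhere.

#include <linux/bitmap.h>
#include <linux/spinlock.h>

struct ice_ptp_tx_tracker {
	spinlock_t lock;		/* indexes are shared across queues */
	DECLARE_BITMAP(in_use, 64);
	struct sk_buff *skbs[64];
};

static int ice_ptp_request_ts_idx(struct ice_ptp_tx_tracker *tx,
				  struct sk_buff *skb)
{
	int idx;

	spin_lock(&tx->lock);
	idx = find_first_zero_bit(tx->in_use, 64);
	if (idx < 64) {
		set_bit(idx, tx->in_use);
		tx->skbs[idx] = skb_get(skb);	/* held until writeback */
	}
	spin_unlock(&tx->lock);

	return idx < 64 ? idx : -1;	/* -1: no free index, skip request */
}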
- 04 June 2021, 1 commit

Committed by Dave Ertman

Currently in the ice driver, the check for whether to allow an LLDP packet to egress the interface from the PF_VSI is based on the SKB's priority field. It checks to see if the packet's priority is equal to TC_PRIO_CONTROL. Injected LLDP packets do not always meet this condition. Scapy defaults to a sk_buff->protocol value of ETH_P_ALL (0x0003) and does not set the priority field. There will be other injection methods (even ones used by end users) that will not correctly configure the socket so that the SKB fields are correctly populated. The Ethernet header, however, has to carry the correct value for the protocol.

Add a check to also allow packets whose ethhdr->h_proto matches ETH_P_LLDP (0x88CC).

Fixes: 0c3a6101 ("ice: Allow egress control packets from PF_VSI")
Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
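The widened check in sketch form; ETH_P_LLDP and eth_hdr() are the standard kernel definitions, while the wrapper name is illustrative.

#include <linux/if_ether.h>	/* ETH_P_LLDP = 0x88CC */
#include <linux/netdevice.h>

static bool ice_is_lldp_egress_allowed(struct sk_buff *skb)
{
	/* control-priority packets as before, plus anything whose
	 * outermost ethertype says LLDP
	 */
	return skb->priority == TC_PRIO_CONTROL ||
	       eth_hdr(skb)->h_proto == htons(ETH_P_LLDP);
}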
- 03 June 2021, 1 commit

Committed by Magnus Karlsson

Add missing exception tracing to XDP in the number of places where different errors can occur. The support was only partial. Several errors were not logged, which would confuse the user quite a lot, not knowing where and why the packets disappeared.

Fixes: efc2214b ("ice: Add support for XDP")
Fixes: 2d4238f5 ("ice: Add support for AF_XDP")
Reported-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Tested-by: Kiran Bhandare <kiranx.bhandare@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
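The pattern the fix applies on each error path is the standard trace_xdp_exception() call before counting the frame as dropped, roughly:

switch (act) {
case XDP_PASS:
	result = ICE_XDP_PASS;
	break;
case XDP_ABORTED:
	/* the missing piece: log the exception so tools such as
	 * xdp_monitor can attribute the drop
	 */
	trace_xdp_exception(rx_ring->netdev, xdp_prog, act);
	fallthrough;
case XDP_DROP:
	result = ICE_XDP_CONSUMED;
	break;
}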
- 15 April 2021, 3 commits

Committed by Jesse Brandeburg

Use a dedicated bitfield in order to both increase the amount of checking around the length of ITR writes and simplify the checks of dynamic mode. Basically, unpack the "high bit means dynamic" logic into bitfields. Also, remove some unused ITR defines.

Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Committed by Jesse Brandeburg

The driver would occasionally miss that there were outstanding descriptors to clean when exiting busy/napi poll. This issue has been in the code since the introduction of the ice driver. Attempt to "catch" any remaining work by triggering a software interrupt when exiting napi poll or busy-poll. This will not cause extra interrupts in the case of normal execution.

This issue was found when running sfnt-pingpong with busy poll enabled, typically with larger I/O sizes like > 8192; the program would occasionally report > 1 second maximums to complete a ping-pong.

Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Committed by Jacob Keller

The ice driver has support for adaptive interrupt moderation, an algorithm for tuning the interrupt rate dynamically. This algorithm is based on various assumptions about ring size, socket buffer size, link speed, SKB overhead, Ethernet frame overhead and more.

The Linux kernel has support for a dynamic interrupt moderation algorithm known as "dimlib". Replace the custom driver-specific implementation of dynamic interrupt moderation with the kernel's algorithm.

The Intel hardware has a different implementation than the one the originators of the dimlib code had to work with, which requires the driver to use a slightly different set of inputs for the actual moderation values, while getting all the advice from dimlib about better/worse and shift left or right. The change made for this implementation is to use a pair of values for each of the 5 "slots" that the dimlib moderation expects, and the driver will program those pairs when dimlib recommends a slot to use. The current implementation uses two tables, one for receive and one for transmit, and the pairs of values in each slot set the maximum delay of an interrupt and a maximum number of interrupts per second (both expressed in microseconds).

There are two separate kinds of bugs fixed by using DIMLIB: one is that UDP single-stream send was too slow, and the other is that 8K ping-pong was going to the most aggressive moderation and had much too high latency. The overall result of using DIMLIB is that we meet or exceed our performance expectations set based on the old algorithm.

Co-developed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
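A hedged sketch of the driver-side hookup to dimlib: the dim_update_sample()/net_dim() API is the kernel's, while the container fields and the per-slot ITR table are assumptions.

#include <linux/dim.h>

/* per interrupt: feed dimlib a sample of what this vector has seen */
static void ice_net_dim_sketch(struct ice_ring_container *rc)
{
	struct dim_sample sample = {};

	dim_update_sample(rc->total_events, rc->total_pkts,
			  rc->total_bytes, &sample);
	net_dim(&rc->dim, sample);	/* may queue dim.work */
}

/* dim work callback: translate the recommended slot into the hardware's
 * (max delay, max ints/sec) pair and program it
 */
static void ice_dim_work_sketch(struct work_struct *work)
{
	struct dim *dim = container_of(work, struct dim, work);

	/* look up our ITR pair via dim->profile_ix, write it to HW ... */
	dim->state = DIM_START_MEASURE;
}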
- 01 April 2021, 1 commit

Committed by Anirudh Venkataramanan

struct ice_vsi has two fields, state and flags, which seem to be serving the same purpose. Consolidate them into one field, 'state'. enum ice_state is used to represent state information of the PF. While some of these enum values can be used to represent VSI state, it makes more sense to represent VSI state with its own enum. So derive a new enum, ice_vsi_state, from ice_vsi_flags and ice_state, and use it. Also rename enum ice_state to ice_pf_state for clarity.

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
- 23 March 2021, 1 commit

Committed by Qi Zhang

Enable returning FDIR completion status by checking the ctrl_vsi Rx queue descriptor value. To enable returning FDIR completion status from the ctrl_vsi Rx queue, the COMP_Queue and COMP_Report fields of the FDIR filter programming descriptor need to be properly configured. After a program request is sent to the ctrl_vsi Tx queue, the ctrl_vsi Rx queue interrupt will be triggered and the completion status will be returned.

The driver will first issue the request in ice_vc_fdir_add_fltr(), then pass the FDIR context to the background task in the interrupt service routine ice_vc_fdir_irq_handler(), and finally deal with it in ice_flush_fdir_ctx(). ice_flush_fdir_ctx() will check the descriptor's value and the FDIR context, and then send a virtual channel message back to the VF by calling ice_vc_add_fdir_fltr_post(). An additional timer will be set up in case of hardware interrupt timeout.

Signed-off-by: Yahui Cao <yahui.cao@intel.com>
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Chen Bo <BoX.C.Chen@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
- 18 March 2021, 1 commit

Committed by Lorenzo Bianconi

We want to change the current ndo_xdp_xmit drop semantics because it will allow us to implement better queue overflow handling. This is working towards the larger goal of an XDP TX queue-hook. Move XDP_REDIRECT error path handling from each XDP Ethernet driver to the devmap code. According to the new APIs, the driver running the ndo_xdp_xmit pointer will break the tx loop whenever the hw reports a tx error, and it will just return to the devmap caller the number of successfully transmitted frames. It will be devmap's responsibility to free the dropped frames.

Move each XDP ndo_xdp_xmit capable driver to the new APIs:
- veth
- virtio-net
- mvneta
- mvpp2
- socionext
- amazon ena
- bnxt
- freescale (dpaa2, dpaa)
- xen-frontend
- qede
- ice
- igb
- ixgbe
- i40e
- mlx5
- ti (cpsw, cpsw-new)
- tun
- sfc

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Reviewed-by: Camelia Groza <camelia.groza@nxp.com>
Acked-by: Edward Cree <ecree.xilinx@gmail.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: Shay Agroskin <shayagr@amazon.com>
Link: https://lore.kernel.org/bpf/ed670de24f951cfd77590decf0229a0ad7fd12f6.1615201152.git.lorenzo@kernel.org
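In sketch form, the new contract for a driver's ndo_xdp_xmit; the inner transmit helper is hypothetical.

static int ice_xdp_xmit_sketch(struct net_device *dev, int n,
			       struct xdp_frame **frames, u32 flags)
{
	int i, nxmit = 0;

	for (i = 0; i < n; i++) {
		/* stop at the first hw error; do NOT free the frame */
		if (ice_xmit_xdp_frame(frames[i]) != ICE_XDP_TX)
			break;
		nxmit++;
	}

	/* devmap frees frames[nxmit..n-1] on our behalf */
	return nxmit;
}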
- 12 March 2021, 1 commit

Committed by Maciej Fijalkowski

ice_rx_offset(), which is supposed to initialize the Rx buffer headroom, relies on the ICE_RX_FLAGS_RING_BUILD_SKB flag as well as XDP prog presence. Currently, the callsite of the mentioned function is placed incorrectly within ice_setup_rx_ring(), where the Rx ring's build-skb flag is not set yet. This causes XDP_REDIRECT to be partially broken due to the inability to create an xdp_frame in the headroom space, as the headroom is 0. Fix this by moving ice_rx_offset() to ice_setup_rx_ctx(), after the flag setting.

Fixes: f1b1f409 ("ice: store the result of ice_rx_offset() onto ice_ring")
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: Kiran Bhandare <kiranx.bhandare@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
- 13 February 2021, 3 commits

Committed by Maciej Fijalkowski

The output of ice_rx_offset() is based on an ethtool priv flag setting, which, when changed, causes a PF reset (disables napi, frees irqs, loads a different Rx memory model, etc.). This means that within napi its result is constant and there is no reason to call it for each processed frame. Add a new 'rx_offset' field to ice_ring that is meant to hold the ice_rx_offset() result and use it within ice_clean_rx_irq(). Furthermore, use it within ice_alloc_mapped_page().

Reviewed-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Committed by Maciej Fijalkowski

A similar thing has been done in i40e, as there is no real need for having the sk_buff pointer in each rx_buf. Non-eop frames can simply be handled via that pointer moved upwards to rx_ring.

Reviewed-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Committed by Maciej Fijalkowski

There's no need for the 'result' variable; we can directly return the internal status based on the action returned by the xdp prog.

Reviewed-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Tested-by: Kiran Bhandare <kiranx.bhandare@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
- 09 February 2021, 2 commits

Committed by Chinh T Cao

Refactor the DCB related variables out of the ice_port_info struct. The goal is to make the ice_port_info struct cleaner.

Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Co-developed-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Committed by Jesse Brandeburg

The writeback enable logic was incorrectly implemented (due to a misunderstanding of what the side effects of the implementation would be during polling). Fix this logic issue while implementing a new feature allowing the user to control the writeback frequency using the knobs for controlling interrupt throttling that we already have.

Basically, if you leave adaptive interrupts enabled, the writeback frequency will be varied even if busy-polling or napi-poll is in use. If the interrupt rates are set to a fixed value by ethtool -C and adaptive is off, the driver will allow the user-set interrupt rate to guide how frequently the hardware will complete descriptors to the driver. Effectively the user gets control over the hardware efficiency, allowing the choice between immediate interrupts or delays up to a maximum of the interrupt rate, even when interrupts are disabled during polling.

Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Co-developed-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
- 05 February 2021, 1 commit

Committed by Alexander Lobakin

Now we can remove a bunch of identical functions from the drivers and make them use the common dev_page_is_reusable(). All {,un}likely() checks are omitted since they are already present in this helper. Also update some comments near the call sites.

Suggested-by: David Rientjes <rientjes@google.com>
Suggested-by: Jakub Kicinski <kuba@kernel.org>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
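For reference, the consolidated helper amounts to the two checks every driver used to carry locally; a sketch of its shape (the actual definition lives in the core kernel headers):

static inline bool dev_page_is_reusable(const struct page *page)
{
	/* reusable only if locally allocated and not drawn from the
	 * pfmemalloc emergency reserves
	 */
	return likely(page_to_nid(page) == numa_mem_id() &&
		      !page_is_pfmemalloc(page));
}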
- 27 January 2021, 1 commit

Committed by Nick Nunley

This patch is based on a similar change to i40e by Slawomir Laba: "i40e: Implement flow for IPv6 next header (extension header)".

When a packet contains an IPv6 header with a next header which is an extension header and not a protocol one, the kernel function skb_transport_header, called with such an sk_buff, will return a pointer to the extension header and not to the TCP one. The call explained above caused a problem with packet processing for an skb with encapsulation for a tunnel with ICE_TX_CTX_EIPT_IPV6: the extension header was not skipped at all.

The ipv6_skip_exthdr function does check whether the next header of the IPv6 header is an extension header and doesn't modify the l4_proto pointer if it points to a protocol header value, so it's safe to omit the comparison of the exthdr and l4.hdr pointers. ipv6_skip_exthdr can return the value -1; this means that the skipping process failed and there is something wrong with the packet, so it will be dropped.

Fixes: a4e82a81 ("ice: Add support for tunnel offloads")
Signed-off-by: Nick Nunley <nicholas.d.nunley@intel.com>
Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
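A sketch of the pattern, using the kernel's ipv6_skip_exthdr(); the local variable names are illustrative.

#include <net/ipv6.h>

struct ipv6hdr *ip6h = ipv6_hdr(skb);
__be16 frag_off;
u8 l4_proto = ip6h->nexthdr;
int off;

off = skb_network_offset(skb) + sizeof(struct ipv6hdr);
off = ipv6_skip_exthdr(skb, off, &l4_proto, &frag_off);
if (off < 0)
	return -1;	/* malformed extension chain: drop the packet */
/* l4_proto now holds the real L4 protocol (TCP/UDP/SCTP/...) */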
- 09 January 2021, 2 commits

Committed by Lorenzo Bianconi

Introduce the xdp_prepare_buff utility routine to initialize the per-descriptor xdp_buff fields (e.g. the xdp_buff pointers). Rely on xdp_prepare_buff() in all XDP capable drivers. (A combined usage sketch follows the next commit.)

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Shay Agroskin <shayagr@amazon.com>
Acked-by: Martin Habets <habetsm.xilinx@gmail.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Acked-by: Marcin Wojtas <mw@semihalf.com>
Link: https://lore.kernel.org/bpf/45f46f12295972a97da8ca01990b3e71501e9d89.1608670965.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Committed by Lorenzo Bianconi

Introduce the xdp_init_buff utility routine to initialize the xdp_buff fields that are constant over NAPI iterations (e.g. frame_sz or the rxq pointer). Rely on xdp_init_buff in all XDP capable drivers.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Shay Agroskin <shayagr@amazon.com>
Acked-by: Martin Habets <habetsm.xilinx@gmail.com>
Acked-by: Camelia Groza <camelia.groza@nxp.com>
Acked-by: Marcin Wojtas <mw@semihalf.com>
Link: https://lore.kernel.org/bpf/7f8329b6da1434dc2b05a77f2e800b29628a8913.1608670965.git.lorenzo@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
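The two helpers from this and the previous commit, in the shape of an Rx loop (a sketch; the size/offset locals are illustrative):

struct xdp_buff xdp;

/* constant across the whole NAPI poll */
xdp_init_buff(&xdp, frame_sz, &rx_ring->xdp_rxq);

while (likely(cleaned_count < budget)) {
	unsigned char *hard_start = page_address(rx_buf->page) +
				    rx_buf->page_offset - offset;

	/* per-descriptor fields: data, data_end, data_meta, ... */
	xdp_prepare_buff(&xdp, hard_start, offset, size, true);
	/* ... run the XDP program on &xdp ... */
}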
- 10 December 2020, 1 commit

Committed by Björn Töpel

The page recycle code incorrectly relied on a page fragment not being able to be freed inside xdp_do_redirect(). This assumption leads to page fragments that are used by the stack/XDP redirect being reused and overwritten. To avoid this, store the page count prior to invoking xdp_do_redirect().

Fixes: efc2214b ("ice: Add support for XDP")
Reported-and-analyzed-by: Li RongQing <lirongqing@baidu.com>
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
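The essence of the fix, sketched with illustrative names: sample the refcount before the frame can be handed off, and feed that snapshot into the reuse decision.

/* taken before XDP runs, e.g. when fetching the Rx buffer */
int rx_buf_pgcnt = page_count(rx_buf->page);

static bool ice_can_reuse_rx_page_sketch(struct ice_rx_buf *rx_buf,
					 int rx_buf_pgcnt)
{
	/* rx_buf_pgcnt was sampled before xdp_do_redirect() could hand
	 * the fragment to the stack; a fresh page_count() here would
	 * miss that hand-off and overwrite live data
	 */
	if (rx_buf_pgcnt - rx_buf->pagecnt_bias > 1)
		return false;

	return true;
}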
- 01 December 2020, 1 commit

Committed by Björn Töpel

Add napi_id to the xdp_rxq_info structure, and make sure the XDP socket picks up the napi_id in the Rx path. The napi_id is used to find the corresponding NAPI structure for socket busy polling.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://lore.kernel.org/bpf/20201130185205.196029-7-bjorn.topel@gmail.com
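After this change, drivers pass the NAPI id when registering the rxq info, e.g. (the ring field names follow the ice driver's conventions but are assumptions):

xdp_rxq_info_reg(&rx_ring->xdp_rxq, rx_ring->netdev,
		 rx_ring->q_index, rx_ring->q_vector->napi.napi_id);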
- 01 September 2020, 1 commit

Committed by Magnus Karlsson

Replace the explicit umem reference passed to the driver in AF_XDP zero-copy mode with the buffer pool instead. This is in preparation for extending the functionality of the zero-copy mode so that umems can be shared between queues on the same netdev and also between netdevs. In this commit, only an umem reference has been added to the buffer pool struct, but later commits will add other entities to it. These are going to be entities that differ between queue ids and netdevs even though the umem is shared between them.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Link: https://lore.kernel.org/bpf/1598603189-32145-2-git-send-email-magnus.karlsson@intel.com
- 27 August 2020, 1 commit

Committed by Tariq Toukan

Many device drivers use the same prefetch code structure to deal with a small L1 cacheline size. Pull this code into a function and call it from the drivers.

Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
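The shared helper is essentially this (a sketch of its shape): prefetch a second cacheline on machines whose L1 lines are smaller than the headers about to be touched.

static inline void net_prefetch(void *p)
{
	prefetch(p);
#if L1_CACHE_BYTES < 128
	prefetch((u8 *)p + L1_CACHE_BYTES);
#endif
}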
- 01 August 2020, 3 commits

Committed by Tony Nguyen

This is a collection of minor fixes including typos, white space, and style. No functional changes.

Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Committed by Kiran Patil

This is a port of commit 248de22e ("i40e/i40evf: Account for frags split over multiple descriptors in check linearize"). As part of testing workloads (read/write) using a larger I/O size (128K), tx_timeout was observed, and whenever it happened it was due to tx_linearize.

Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Committed by Jesse Brandeburg

The page reuse statistic wasn't even being displayed to the user, even though the driver counted it. Don't waste the struct space and hot-path cycles, since the driver doesn't display it.

Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
- 29 July 2020, 1 commit

Committed by Tony Nguyen

Depending on PAGE_SIZE, the following unused parameter warning can be reported:

drivers/net/ethernet/intel/ice/ice_txrx.c: In function ‘ice_rx_frame_truesize’:
drivers/net/ethernet/intel/ice/ice_txrx.c:513:21: warning: unused parameter ‘size’ [-Wunused-parameter]
  unsigned int size)

The 'size' variable is used only when PAGE_SIZE >= 8192. Add __maybe_unused to remove the warning.

Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
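In pattern form (a sketch, with the body reduced to the two compile paths the warning comes from):

static unsigned int
ice_rx_frame_truesize(struct ice_ring *rx_ring,
		      unsigned int __maybe_unused size)
{
#if (PAGE_SIZE < 8192)
	return ice_rx_pg_size(rx_ring) / 2;	/* 'size' compiled out */
#else
	return SKB_DATA_ALIGN(size);		/* 'size' actually used */
#endif
}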
- 31 May 2020, 1 commit

Committed by Brett Creeley

Currently the driver does not recognize when there is an 802.1AD VLAN tag right after the dmac/smac (the outermost VLAN tag). If any DCB map is applied and/or DCB is enabled, this causes the hardware to insert a VLAN 0 tag after the 802.1AD VLAN tag that is already in the packet. Fix this by preventing VLAN tag 0 from being added when any VLAN is already present after the dmac/smac (software offloaded) or in the skb (hardware offloaded).

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
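The guard in sketch form, using the standard if_vlan.h helpers; the wrapper name is illustrative.

#include <linux/if_vlan.h>

static bool ice_vlan_already_present(const struct sk_buff *skb)
{
	/* hardware-offloaded tag in skb metadata, or a software tag
	 * (802.1Q/802.1AD) right after dmac/smac in the payload
	 */
	return skb_vlan_tag_present(skb) || eth_type_vlan(skb->protocol);
}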
- 23 May 2020, 2 commits

Committed by Henry Tieman

Support the addition and deletion of IPv4 filters.

Supported fields are: src-ip, dst-ip, src-port, and dst-port.
Supported flow-types are: tcp4, udp4, sctp4, ip4.

Example usage:

ethtool -N eth0 flow-type tcp4 src-ip 192.168.0.55 dst-ip 172.16.0.55 \
	src-port 16 dst-port 12 action 32

Signed-off-by: Henry Tieman <henry.w.tieman@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Committed by Henry Tieman

Flow Director allows for redirection based on ntuple rules. Rules are programmed using the ethtool set-ntuple interface. Supported actions are redirect-to-queue and drop.

Set up the initial framework to process Flow Director filters. Create and allocate resources to manage and program filters to the hardware. Filters are processed via a sideband interface; a control VSI is created to manage communication and process requests through the sideband. Upon allocation of resources, update the hardware tables to accept perfect filters.

Signed-off-by: Henry Tieman <henry.w.tieman@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>