- 09 Feb 2022, 3 commits
-
-
By Ioana Ciornei

This patch adds support for driver-level TSO in the dpaa2-eth driver using the TSO API. There is not much to say about this specific implementation: we use the usual tso_build_hdr() and tso_build_data() to create each data segment, and we build an array of S/G FDs in which the first S/G entry references the header data and the remaining ones the data portion.

For the S/G table buffer we use the same cache of buffers as in the other non-GSO cases - dpaa2_eth_sgt_get() and dpaa2_eth_sgt_recycle(). We cannot keep a DMA-coherent buffer for all the TSO headers because the DPAA2 architecture does not work in a ring-based fashion, so we just allocate a buffer each time.

Even with these limitations we get the following improvement in TCP termination on the LX2160A SoC, on a single A72 core running at 2.2 GHz:

before: 6.38 Gbit/s
after:  8.48 Gbit/s

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
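An illustrative sketch of a driver-level software TSO loop built on the generic net/tso.h helpers mentioned above; the S/G FD construction and enqueue steps are only hinted at in comments, and the 256-byte on-stack header buffer is an assumption (the real driver allocates a header buffer per segment).

```c
#include <linux/kernel.h>
#include <linux/skbuff.h>
#include <net/tso.h>

static int sw_tso_xmit_sketch(struct sk_buff *skb)
{
	int total_len, hdr_len, num_descs;
	struct tso_t tso;

	num_descs = tso_count_descs(skb);	/* upper bound on S/G entries */
	hdr_len = tso_start(skb, &tso);		/* L2/L3/L4 header length */
	total_len = skb->len - hdr_len;

	while (total_len > 0) {
		int data_left = min_t(int, skb_shinfo(skb)->gso_size, total_len);
		char hdr[256];

		total_len -= data_left;

		/* build the headers of this segment (seq, IP id, flags) */
		tso_build_hdr(skb, hdr, &tso, data_left, total_len == 0);

		/* reference the payload, possibly spread over several frags */
		while (data_left > 0) {
			int size = min_t(int, tso.size, data_left);

			/* ... add an S/G entry for tso.data / size here ... */

			data_left -= size;
			tso_build_data(skb, &tso, size);
		}

		/* ... build and enqueue the S/G FD for this segment here ... */
	}

	return num_descs;
}
```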
-
By Ioana Ciornei

Up until now, the __dpaa2_eth_tx function used a single FD on the stack to construct the structure to be enqueued. Since we are now preparing the ground for adding support for TSO done in software at the driver level, the same function needs to work with an array of FDs and enqueue as many as the build_*_fd functions create.

Make the necessary adjustments in order to do this. These include: keeping an array of FDs in a per-CPU structure, cleaning up the necessary FDs before populating the array, and then retrying the enqueue process until all the generated FDs have been enqueued or the maximum number of retries is reached.

This patch does not change the fact that only a single FD will result from a __dpaa2_eth_tx call; it just creates the necessary groundwork for the next patch.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
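A minimal sketch of the bounded-retry enqueue over an array of FDs described above, assuming a hypothetical enqueue_some() callback that reports how many descriptors the hardware accepted (or -EBUSY when the portal is busy); the retry limit is an illustrative value, not the driver's constant.

```c
#include <linux/errno.h>
#include <soc/fsl/dpaa2-fd.h>

#define SKETCH_ENQUEUE_RETRIES	10

static int enqueue_fd_array(void *fq, struct dpaa2_fd *fds, int num_fds,
			    int (*enqueue_some)(void *fq, struct dpaa2_fd *fds,
						int count))
{
	int enqueued = 0, retries = 0;

	while (enqueued < num_fds && retries < SKETCH_ENQUEUE_RETRIES) {
		int ret = enqueue_some(fq, &fds[enqueued], num_fds - enqueued);

		if (ret == -EBUSY) {		/* portal busy, try again */
			retries++;
			continue;
		}
		if (ret < 0)			/* hard error, give up */
			break;
		enqueued += ret;		/* partial success is possible */
	}

	return enqueued;	/* caller cleans up any FDs left behind */
}
```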
-
By Ioana Ciornei

Instead of allocating memory for an S/G table each time a non-linear skb is processed, and then freeing it on the Tx confirmation path, use the S/G table cache in order to reuse the memory. For this to work we have to change the size of the cached buffers so that they can hold the maximum number of scatterlist entries.

Other than that, each allocate/free call is replaced by a call to the dpaa2_eth_sgt_get/dpaa2_eth_sgt_recycle functions introduced in the previous patch.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
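A small sketch of the kind of get/recycle buffer cache the dpaa2_eth_sgt_get()/dpaa2_eth_sgt_recycle() helpers implement; the cache depth and the kmem_cache fallback are illustrative assumptions, not the driver's exact layout.

```c
#include <linux/slab.h>
#include <linux/types.h>

#define SGT_CACHE_SIZE	16

struct sgt_cache {
	void *buf[SGT_CACHE_SIZE];
	u16 count;
};

static void *sgt_get(struct sgt_cache *cache, struct kmem_cache *slab)
{
	if (cache->count)
		return cache->buf[--cache->count];	/* reuse a cached buffer */

	return kmem_cache_zalloc(slab, GFP_ATOMIC);	/* fall back to the slab */
}

static void sgt_recycle(struct sgt_cache *cache, struct kmem_cache *slab,
			void *sgt_buf)
{
	if (cache->count < SGT_CACHE_SIZE)
		cache->buf[cache->count++] = sgt_buf;	/* keep it for later reuse */
	else
		kmem_cache_free(slab, sgt_buf);		/* cache full, free it */
}
```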
-
- 16 Dec 2021, 1 commit
-
-
By Ioana Ciornei

Unfortunately, with the blamed commit I also added a side effect to the ethtool stats shown: because I added two more fields to the per-channel structure without verifying whether its size is used in any way, part of the ethtool statistics were off by 2. Fix this by not relying on the size of the structure but instead on a fixed value kept in a macro.

Fixes: fc398bec ("net: dpaa2: add adaptive interrupt coalescing")
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Link: https://lore.kernel.org/r/20211215105831.290070-1-ioana.ciornei@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 15 Oct 2021, 1 commit
-
-
By Ioana Ciornei

Add support for adaptive interrupt coalescing to the dpaa2-eth driver. First of all, ETHTOOL_COALESCE_USE_ADAPTIVE_RX is defined as a supported coalesce parameter and the requested state is configured through the dpio APIs added in the previous patch.

Besides the ethtool API interaction, we keep track of how many bytes and frames are dequeued per CDAN (Channel Data Availability Notification) and update the Net DIM instance through the dpaa2_io_update_net_dim() API.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
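A rough sketch of the per-channel accounting that feeds the DIM library, approximating what a helper like dpaa2_io_update_net_dim() does on the channel's behalf; the ch_stats_sketch layout is invented for illustration, and the dim_update_sample()/net_dim() signatures shown here have shifted slightly between kernel versions, so treat this as a sketch rather than copy-paste material.

```c
#include <linux/dim.h>
#include <linux/types.h>

struct ch_stats_sketch {
	struct dim dim;		/* per-channel DIM state */
	u16 event_ctr;		/* CDANs received so far */
	u64 frames;		/* frames dequeued since probe */
	u64 bytes;		/* bytes dequeued since probe */
};

static void update_net_dim_sketch(struct ch_stats_sketch *ch,
				  int frames, int bytes)
{
	struct dim_sample sample = {};

	ch->event_ctr++;
	ch->frames += frames;
	ch->bytes += bytes;

	/* snapshot the running counters for the DIM algorithm */
	dim_update_sample(ch->event_ctr, ch->frames, ch->bytes, &sample);

	/* may schedule dim.work, which reprograms the interrupt moderation */
	net_dim(&ch->dim, sample);
}
```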
-
- 27 Sep 2021, 1 commit
-
-
By Leon Romanovsky

Move devlink_register to be the last command in the initialization sequence.

Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 03 Apr 2021, 3 commits
-
-
By Ioana Ciornei

It's useful, especially for debugging purposes, to have the Rx copybreak value changeable at runtime. Export it as an ethtool tunable.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
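A sketch of exposing a copybreak value through the ethtool tunable API, along the lines of the commit above; the sketch_priv structure and its rx_copybreak field are assumptions standing in for the driver's private data.

```c
#include <linux/ethtool.h>
#include <linux/netdevice.h>

struct sketch_priv {
	u32 rx_copybreak;	/* frames up to this size get copied on Rx */
};

static int sketch_get_tunable(struct net_device *net_dev,
			      const struct ethtool_tunable *tuna, void *data)
{
	struct sketch_priv *priv = netdev_priv(net_dev);

	switch (tuna->id) {
	case ETHTOOL_RX_COPYBREAK:
		*(u32 *)data = priv->rx_copybreak;
		return 0;
	default:
		return -EOPNOTSUPP;
	}
}

static int sketch_set_tunable(struct net_device *net_dev,
			      const struct ethtool_tunable *tuna,
			      const void *data)
{
	struct sketch_priv *priv = netdev_priv(net_dev);

	switch (tuna->id) {
	case ETHTOOL_RX_COPYBREAK:
		priv->rx_copybreak = *(u32 *)data;
		return 0;
	default:
		return -EOPNOTSUPP;
	}
}

/* wired up via struct ethtool_ops { .get_tunable = ..., .set_tunable = ... } */
```

With such hooks in place, a command along the lines of `ethtool --set-tunable <iface> rx-copybreak 512` could adjust the threshold at runtime.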
-
By Ioana Ciornei

DMA unmapping, allocating a new buffer and DMA mapping it back on the refill path is really not that efficient. Proper buffer recycling (page pool, flipping the page and using the other half) cannot be done for DPAA2 since it's not a ring-based controller; rather, it deals with multiple queues which all get their buffers from the same buffer pool on Rx.

To circumvent these limitations, add support for Rx copybreak. For small-sized packets, instead of building an skb around the buffer in which the frame was received, allocate a new skb altogether, copy the contents of the frame and release the initial page back into the buffer pool.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
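A sketch of the Rx copybreak decision described above: for small frames, copy into a freshly allocated skb and hand the original page straight back to the buffer pool instead of building an skb around it. The function name and parameters are illustrative; recycling of the original buffer is left to the caller.

```c
#include <linux/skbuff.h>
#include <linux/string.h>

static struct sk_buff *rx_copybreak_sketch(struct napi_struct *napi,
					    void *fd_vaddr, u32 fd_offset,
					    u32 fd_length, u32 copybreak)
{
	struct sk_buff *skb;

	if (fd_length > copybreak)
		return NULL;	/* too big: build the skb around the buffer */

	skb = napi_alloc_skb(napi, fd_length);
	if (unlikely(!skb))
		return NULL;

	/* copy the frame contents; the original buffer stays untouched */
	memcpy(skb_put(skb, fd_length), fd_vaddr + fd_offset, fd_length);

	/* the caller can now recycle the original page into the pool */
	return skb;
}
```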
-
By Ioana Ciornei

Rename the dpaa2_eth_xdp_release_buf function to dpaa2_eth_recycle_buf since in the next patches we'll be using the same recycle mechanism for the normal stack path besides XDP_DROP. Also, rename the array which holds the buffers to be recycled so that it no longer references XDP.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 07 Feb 2021, 1 commit
-
-
By Russell King

Add support for backplane link mode, which, according to discussions with NXP earlier in the year, is a mode where the OS (Linux) is able to manage the PCS and SerDes itself. This commit prepares the groundwork for allowing 1G fiber connections to be used with DPAA2 on the SolidRun CEX7 platforms.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 10 Jan 2021, 1 commit
-
-
By Ioana Ciornei

If the network interface object is connected to a MAC of TYPE_FIXED, the link status management is handled exclusively by the firmware. This does not mean that the driver cannot access the MAC counters and export them in ethtool. For this to happen, we open the attached dpmac device and keep a pointer to it in priv->mac.

Because of this, all the checks in the driver of the form 'if (priv->mac)' have to be updated to actually check the dpmac attribute and not rely on the presence of a non-NULL value.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 03 Oct 2020, 2 commits
-
-
By Ioana Ciornei

Add support for the new group of devlink traps - PARSER_ERROR_DROPS. This consists of registering the array of supported parser error drops, controlling their action through the .trap_group_action_set() callback, and reporting an erroneous skb received on the error queue appropriately.

DPAA2 devices do not support controlling the action of individual parser error traps, thus the .trap_action_set() callback just returns EOPNOTSUPP, while .trap_group_action_set() actually notifies the hardware what it should do with a frame marked as having a header error.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Ioana Ciornei

Add basic support in dpaa2-eth for devlink. For the moment, just register the device with devlink, add the corresponding devlink port and implement the .info_get() callback.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 19 Sep 2020, 3 commits
-
-
By Yangbo Lu

This patch adds one-step timestamping support for PTP Sync packets. Before egress, one-step timestamping enablement needs:

- Enabling timestamp and FAS (Frame Annotation Status) in the dpni buffer layout.
- Writing the timestamp to the frame annotation and setting the PTP bit in FAS to mark the frame as a one-step timestamping event.
- Enabling one-step timestamping through the dpni_set_single_step_cfg() API, with the offset at which to insert the correction time in the frame. The offset must account for all MAC headers, VLAN tags and other protocol headers. The correction field update can accommodate delays of up to one second.

So the PTP frame needs to be filtered and parsed, and the timestamp written into the Sync frame's originTimestamp field.

The dpni_set_single_step_cfg() API operation has to be done when no one-step timestamping frames are in flight, so we have to make sure the last one-step timestamping frame has already been transmitted by the hardware before starting to send the current one. The resolution is:

- Utilize skb->cb[0] to mark the timestamping request per packet. If it is a one-step timestamping PTP Sync packet, queue it to an skb queue; if not, transmit it immediately.
- Schedule a work item to transmit the skbs in the skb queue.
- Use a mutex to ensure the last one-step timestamping packet has already been transmitted by the hardware (through the Tx confirmation queue) before transmitting the current one.

Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
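A sketch of the "serialize one-step Sync frames" scheme outlined in the list above: such frames are queued and sent from a worker one at a time, with a mutex guaranteeing the previous frame has been confirmed before the single-step configuration is reused. Structure fields and function names are illustrative only.

```c
#include <linux/mutex.h>
#include <linux/skbuff.h>
#include <linux/workqueue.h>

struct onestep_ctx {
	struct sk_buff_head skb_q;	/* pending one-step Sync frames */
	struct work_struct work;	/* drains skb_q */
	struct mutex lock;		/* held until Tx confirmation */
};

static void onestep_xmit_work(struct work_struct *work)
{
	struct onestep_ctx *ctx = container_of(work, struct onestep_ctx, work);
	struct sk_buff *skb;

	while ((skb = skb_dequeue(&ctx->skb_q))) {
		/* wait until the previous one-step frame was confirmed */
		mutex_lock(&ctx->lock);
		/* ... program the single-step offset and enqueue skb ... */
		/* the Tx confirmation path releases ctx->lock */
	}
}

static void onestep_queue_skb(struct onestep_ctx *ctx, struct sk_buff *skb)
{
	/* skb->cb[0] was set earlier to mark a one-step Sync request */
	skb_queue_tail(&ctx->skb_q, skb);
	schedule_work(&ctx->work);
}
```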
-
By Yangbo Lu

This patch is a preparation for the upcoming hardware one-step timestamping support. For DPAA2, the one-step timestamping configuration in hardware registers has to be done when there is no one-step timestamping packet in flight, so we will have to use a workqueue and an skb queue to transmit such packets, making sure the last packet has already been sent by the hardware before starting to transmit the current one.

Because of this, the Tx timestamping flag in the private data may not reflect the actual request for the one-step timestamping packets in the skb queue, which also affects skb headroom allocation. Let's use skb->cb[0] to mark the timestamping request for each skb instead.

Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Yangbo Lu

Define a global ptp_qoriq structure pointer and export it for use; the PTP clock operations will be used in the dpaa2-eth driver. For example, supporting one-step timestamping requires writing the current time into the hardware frame annotation before sending, after which the hardware inserts the delay time into the frame during transmission. So in the driver, at least the clock gettime operation will be needed to make sure the right time is written into the hardware frame annotation for one-step timestamping.

Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 Jul 2020, 1 commit
-
-
By Ioana Ciornei

React to TC_SETUP_QDISC_TBF and configure the egress shaper as appropriate with the maximum rate and burst size requested by the user. TBF can only be offloaded on DPAA2 when it is the root qdisc, i.e. it is a per-port shaper.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 30 Jun 2020, 2 commits
-
-
By Ioana Ciornei

With the previous commit, in case of insufficient skb headroom on the Tx path we now send an S/G frame descriptor instead of reallocating the skb. Export the number of occurrences of this case as a per-CPU counter (in debugfs) and a total number in the ethtool statistics - "tx converted sg frames".

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Ioana Ciornei

Instead of reallocating the skb on the Tx path when the provided headroom is smaller than the hardware requirements, create a scatter/gather frame descriptor with only one entry. Remove the '[drv] tx realloc frames' counter previously exposed through ethtool since it is no longer used.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 02 Jun 2020, 7 commits
-
-
By Ioana Ciornei

Leave congestion group taildrop enabled for all traffic classes when PFC is enabled. The notification threshold is low enough that it will be hit first, and this also ensures that FQs on traffic classes which are not PFC-enabled won't drain the buffer pool.

The FQ taildrop threshold is kept disabled as long as any form of flow control is on. Since FQ taildrop works with bytes, not number of frames, we can't guarantee it will not interfere with the congestion notification mechanism for all frame sizes.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Ioana Ciornei

Add support in dpaa2-eth for PFC (Priority Flow Control) through the DCB ops.

Instruct the hardware to respond to received PFC frames. The current firmware doesn't allow us to selectively enable PFC on the Rx side for some priorities only, so we will react to all incoming PFC frames (and stop transmitting on the traffic classes specified in the frame).

Also, configure the hardware to generate PFC frames based on Rx congestion notifications: when a certain number of frames accumulate in the ingress queues corresponding to a traffic class, priority flow control frames are generated for that TC.

The number of PFC traffic classes available can be queried through lldptool. Which of those traffic classes have PFC enabled is also controlled through the same dcbnl_rtnl_ops callbacks.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
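A sketch of wiring PFC into the DCB netlink ops the commit refers to; the sketch_priv fields and the step that pushes the configuration to firmware are illustrative assumptions, while struct ieee_pfc and dcbnl_rtnl_ops are the standard kernel DCB interfaces.

```c
#include <linux/netdevice.h>
#include <net/dcbnl.h>

struct sketch_priv {
	u8 pfc_cap;	/* number of PFC-capable traffic classes */
	u8 pfc_en;	/* bitmap of TCs with PFC currently enabled */
};

static int sketch_ieee_getpfc(struct net_device *dev, struct ieee_pfc *pfc)
{
	struct sketch_priv *priv = netdev_priv(dev);

	pfc->pfc_cap = priv->pfc_cap;
	pfc->pfc_en = priv->pfc_en;

	return 0;
}

static int sketch_ieee_setpfc(struct net_device *dev, struct ieee_pfc *pfc)
{
	struct sketch_priv *priv = netdev_priv(dev);

	if (pfc->pfc_en == priv->pfc_en)
		return 0;	/* nothing to change */

	/* ... tell firmware to react to / generate PFC for these TCs ... */
	priv->pfc_en = pfc->pfc_en;

	return 0;
}

static const struct dcbnl_rtnl_ops sketch_dcbnl_ops = {
	.ieee_getpfc	= sketch_ieee_getpfc,
	.ieee_setpfc	= sketch_ieee_setpfc,
};
```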
-
By Ioana Radulescu

Now that we have congestion group taildrop configured at all times, we can afford to increase the frame queue taildrop threshold; this will ensure a better response when receiving bursts of large-sized frames.

Also decouple the buffer pool count from the Rx FQ taildrop threshold, as the above change would increase it too much. Instead, keep the old count as a hardcoded value.

With the new limits, we try to ensure that:
* we allow enough leeway for large frame bursts (by buffering enough of them in queues to avoid heavy dropping in case of bursty traffic, as long as overall ingress bandwidth is manageable)
* pending frames can be evenly spread between ingress FQs, regardless of frame size
* we avoid dropping frames due to the buffer pool being empty; this is not a bad behaviour per se, but the overall system response is more linear and predictable when frames are dropped at frame queue/group level.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Ioana Radulescu

The increase in the number of ingress frame queues means we now risk depleting the buffer pool before the FQ taildrop kicks in. Congestion group taildrop allows us to control the number of frames that can accumulate on a group of Rx frame queues belonging to the same traffic class.

This setting coexists with the frame-queue-based taildrop: whichever limit is hit first triggers the frame drop.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Ioana Radulescu

Add convenient helper functions that determine whether Rx/Tx pause frames are enabled, based on the link state flags received from firmware.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
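A minimal sketch of such helpers, following the PAUSE/ASYM_PAUSE truth table in the flow control commit further down this log; the flag macros stand in for the DPNI_LINK_OPT_PAUSE/DPNI_LINK_OPT_ASYM_PAUSE bits from the driver's dpni.h, and the bit positions used here are illustrative only.

```c
#include <linux/bits.h>
#include <linux/types.h>

#define SKETCH_LINK_OPT_PAUSE		BIT_ULL(2)	/* illustrative bit */
#define SKETCH_LINK_OPT_ASYM_PAUSE	BIT_ULL(3)	/* illustrative bit */

static inline bool rx_pause_enabled(u64 link_options)
{
	return !!(link_options & SKETCH_LINK_OPT_PAUSE);
}

static inline bool tx_pause_enabled(u64 link_options)
{
	bool pause = !!(link_options & SKETCH_LINK_OPT_PAUSE);
	bool asym = !!(link_options & SKETCH_LINK_OPT_ASYM_PAUSE);

	return pause != asym;	/* Tx pause is on when exactly one flag is set */
}
```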
-
By Ioana Radulescu

Configure static ingress classification based on the VLAN PCP field. If the DPNI doesn't have enough traffic classes to accommodate all priority levels, the lowest ones end up on TC 0 (the default on miss).

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Ioana Radulescu

The firmware reserves for each DPNI a number of Rx frame queues equal to the number of configured flows times the number of configured traffic classes. The current driver configuration directs all incoming traffic to the FQs corresponding to TC0, leaving all other priority levels unused.

Start adding support for multiple ingress traffic classes by configuring the FQs associated with all priority levels, not just TC0. All settings that are per-TC, such as those related to hashing and flow steering, are also updated.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 17 May 2020, 1 commit
-
-
By Ioana Ciornei

Add driver-level bulking to the XDP_TX action. An array of frame descriptors is held for each Tx frame queue and populated accordingly when the action returned by the XDP program is XDP_TX. The frames are actually enqueued only when the array is filled. At the end of the NAPI cycle, a flush of the queued frames is performed in order to enqueue the remaining FDs.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
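A sketch of per-queue XDP_TX bulking of this kind: frame descriptors accumulate in a fixed-size array and are flushed either when the array fills or at the end of the NAPI poll. The bulk size and the enqueue callback are illustrative assumptions.

```c
#include <soc/fsl/dpaa2-fd.h>

#define XDP_TX_BULK_SIZE	16

struct xdp_tx_bulk {
	struct dpaa2_fd fds[XDP_TX_BULK_SIZE];
	int num;
};

static void xdp_tx_flush(struct xdp_tx_bulk *bulk,
			 int (*enqueue)(struct dpaa2_fd *fds, int num))
{
	if (!bulk->num)
		return;

	enqueue(bulk->fds, bulk->num);	/* retries/drops handled by the callee */
	bulk->num = 0;
}

static void xdp_tx_queue_fd(struct xdp_tx_bulk *bulk, struct dpaa2_fd *fd,
			    int (*enqueue)(struct dpaa2_fd *fds, int num))
{
	bulk->fds[bulk->num++] = *fd;

	if (bulk->num == XDP_TX_BULK_SIZE)
		xdp_tx_flush(bulk, enqueue);	/* array full, push to hardware */
}

/* ...and at the end of the NAPI poll: xdp_tx_flush(bulk, enqueue); */
```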
-
- 16 May 2020, 1 commit
-
-
By Ioana Ciornei

Depending on the WRIOP version, the buffer size on the Rx path must be a multiple of 64 or 256. Handle this restriction properly by aligning the buffer size down to the necessary value. Also, use the new dynamically computed buffer size instead of the compile-time one.

Fixes: 27c87486 ("dpaa2-eth: Use a single page per Rx buffer")
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
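A tiny sketch of the alignment fix: round the usable Rx buffer size down to the multiple required by the WRIOP revision (64 or 256 bytes). The parameter names are illustrative; the alignment value is assumed to come from the probed hardware version.

```c
#include <linux/kernel.h>	/* ALIGN_DOWN() */
#include <linux/types.h>

static u32 rx_buf_size_sketch(u32 raw_size, u32 rx_buf_align)
{
	/* e.g. raw_size = PAGE_SIZE - reserved headroom; align is 64 or 256 */
	return ALIGN_DOWN(raw_size, rx_buf_align);
}
```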
-
- 08 May 2020, 1 commit
-
-
By Ioana Ciornei

Create an independent function that takes a particular frame queue and an array of frame descriptors and tries to enqueue them until it hits the maximum number of retries. The same function will be used in the next patch on the XDP_TX path as well.

Also, create the dpaa2_eth_xdp_fds structure to hold the array of FDs as well as the number of FDs already populated.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 26 Apr 2020, 1 commit
-
-
By Ioana Ciornei

Compute the average number of frames processed for each CDAN (Channel Data Availability Notification) and export it to the detailed per-channel stats in debugfs.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 23 Apr 2020, 3 commits
-
-
By Ioana Ciornei

Take advantage of the bulk enqueue feature in .ndo_xdp_xmit. We cannot use XDP_XMIT_FLUSH since the architecture is not capable of storing all the frames dequeued in a NAPI cycle, so instead we enqueue all the frames received in an ndo_xdp_xmit call right away. After setting up the FDs for all the received xdp_frames, enqueue multiple frames at a time until all are sent or the maximum number of retries is hit.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
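A sketch of a bulk .ndo_xdp_xmit along these lines: build one FD per xdp_frame, then push the whole array with a bounded-retry enqueue (see the retry sketch earlier in this log). The build_fd()/enqueue_fd_array() helpers and the on-stack array size are hypothetical.

```c
#include <linux/netdevice.h>
#include <net/xdp.h>
#include <soc/fsl/dpaa2-fd.h>

/* hypothetical helpers: DMA-map an xdp_frame into an FD, and the
 * bounded-retry enqueue from the earlier sketch in this log
 */
static int build_fd(struct net_device *dev, struct xdp_frame *xdpf,
		    struct dpaa2_fd *fd);
static int enqueue_fd_array(struct net_device *dev, struct dpaa2_fd *fds,
			    int num);

static int sketch_xdp_xmit(struct net_device *dev, int n,
			   struct xdp_frame **frames, u32 flags)
{
	struct dpaa2_fd fds[16];	/* n is bounded by the XDP bulk size */
	int i, enqueued;

	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
		return -EINVAL;

	if (!netif_running(dev))
		return -ENETDOWN;

	for (i = 0; i < n; i++)
		if (build_fd(dev, frames[i], &fds[i]))	/* DMA-map + fill FD */
			break;

	/* enqueue as many as possible; the core frees whatever is left */
	enqueued = enqueue_fd_array(dev, fds, i);

	return enqueued;
}
```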
-
By Ioana Ciornei

Update the dpaa2-eth driver to use the bulk enqueue function introduced with the change to QBMAN ring mode. At the moment, no functional changes are made; the driver just transitions to the new interface while still enqueuing one frame at a time.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Ioana Ciornei

The enqueue dpaa2-eth callback now returns the number of successfully enqueued frames. This is a preliminary patch necessary for adding support for bulk ring-mode enqueue.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 01 Nov 2019, 1 commit
-
-
By Ioana Ciornei

The dpaa2-eth driver now has support for connecting to its associated PHY device found through standard OF bindings. This happens when the DPNI object (that the driver probes on) gets connected to a DPMAC. When that happens, the device tree is looked up by the DPMAC ID and the associated PHY bindings are found.

The old logic of handling the net device's link state by hand still needs to be kept, as the DPNI can be connected to other devices on the bus than a DPMAC: another DPNI, DPSW ports, etc. This logic is only engaged when there is no DPMAC (and therefore no phylink instance) attached.

The MC firmware supports multiple types of DPMAC links: TYPE_FIXED and TYPE_PHY. The TYPE_FIXED mode does not require any DPMAC management from the Linux side, and as such the driver will not handle such a DPMAC.

Although PHYLINK typically handles SFP cages and in-band AN modes, for the moment the driver only supports the RGMII interfaces found on the LX2160A. Support for other modes will come later.

Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 07 Oct 2019, 1 commit
-
-
By Ioana Radulescu

Throughout the driver there are several places where we wait indefinitely for DPIO portal commands to be executed while the portal returns a busy response code. Even though in theory we are guaranteed the portals become available eventually, in practice the QBMan hardware module may become unresponsive in various corner cases.

Make sure we can never get stuck in an infinite while loop by adding a retry counter for all portal commands.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
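A sketch of the bounded-retry pattern applied to a busy DPIO portal instead of spinning forever on -EBUSY; dpaa2_io_service_enqueue_fq() is the public DPIO service call, while the retry limit name and value are illustrative.

```c
#include <linux/errno.h>
#include <soc/fsl/dpaa2-io.h>

#define SKETCH_SWP_BUSY_RETRIES	1000

static int enqueue_with_retries(struct dpaa2_io *io, u32 fqid,
				const struct dpaa2_fd *fd)
{
	int retries = 0;
	int err;

	do {
		err = dpaa2_io_service_enqueue_fq(io, fqid, fd);
		if (err != -EBUSY)
			break;		/* success or a hard error */
	} while (++retries < SKETCH_SWP_BUSY_RETRIES);

	return err;			/* may still be -EBUSY after the limit */
}
```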
-
- 30 Aug 2019, 1 commit
-
-
By Ioana Radulescu

Starting with firmware version MC10.18.0, we have support for L2 flow control. Asymmetrical configuration (Rx or Tx only) is supported, but not pause frame autonegotiation.

Pause frame configuration is done via ethtool. By default, we start with flow control enabled on both Rx and Tx. Changes are propagated to hardware through firmware commands, using two flags (PAUSE, ASYM_PAUSE) to specify the Rx and Tx pause configuration, as follows:

PAUSE | ASYM_PAUSE | Rx pause | Tx pause
----------------------------------------
  0   |     0      | disabled | disabled
  0   |     1      | disabled | enabled
  1   |     0      | enabled  | enabled
  1   |     1      | enabled  | disabled

The hardware can automatically send pause frames when the number of buffers in the pool goes below a predefined threshold. Due to this, flow control is incompatible with Rx frame queue taildrop (both mechanisms target the case when processing of ingress frames can't keep up with the Rx rate; for large frames, the number of buffers in the pool may never get low enough to trigger pause frames as long as taildrop is enabled). So we make pause frame generation and Rx FQ taildrop mutually exclusive.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 Jun 2019, 2 commits
-
-
By Ioana Radulescu

Implement mqprio qdisc support by mapping traffic classes to different hardware enqueue priorities. The maximum number of supported traffic classes is an attribute of each DPNI object. The traffic classes map to hardware priorities from highest (0) to lowest (the highest priority number). The skb priority information received from the stack is used to select the hardware Tx queue on which to enqueue the frame.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Bogdan Purcareata <bogdan.purcareata@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
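A simplified sketch of the queue selection implied above, assuming the num_tcs x num_queues layout from the preceding commit: the skb priority picks the traffic class and the flow hash spreads flows across the queues of that class. This is an illustrative mapping, not the driver's exact selection logic.

```c
#include <linux/kernel.h>
#include <linux/skbuff.h>

static u16 select_tx_queue_sketch(struct sk_buff *skb, u16 num_tcs,
				  u16 num_queues)
{
	/* clamp the skb priority to the number of hardware traffic classes */
	u16 tc = min_t(u16, skb->priority, num_tcs - 1);
	/* spread flows across the queues of that class using the flow hash */
	u16 queue = skb_get_hash(skb) % num_queues;

	/* hardware queues are laid out as tc * num_queues + queue */
	return tc * num_queues + queue;
}
```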
-
By Ioana Radulescu

DPNI objects can have multiple traffic classes, as reflected by the num_tc attribute. Until now we ignored its value and only used traffic class 0. This patch adds support for multiple Tx traffic classes; we have num_queues x num_tcs hardware queues available for each interface.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 27 May 2019, 1 commit
-
-
By Ioana Radulescu

Function dpaa2_eth_cls_key_size() expects a 64-bit argument, but DPAA2_ETH_DIST_ALL is defined as UINT_MAX. Fix this.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 17 Apr 2019, 1 commit
-
-
By Ioana Ciocoi Radulescu

On platforms that lack a TCAM (like LS1088A), masking of flow steering keys is not supported. Until now we didn't offer flow steering capabilities at all on these platforms, since our driver implementation configured a "comprehensive" FS key (containing all supported header fields), with masks used to ignore the fields not present in the rules provided by the user.

We now allow ethtool rules that share a common key (i.e. have the same header fields). The FS key is now kept in the driver private data and initialized when the first rule is added to an empty table, rather than at probe time. If a rule with a new key composition is wanted, the user must first manually delete all previous rules.

When building a FS table entry to pass to firmware, we still use the old building algorithm, which assumes an all-supported-fields key, and later collapse the fields which aren't actually needed.

Masked rules are not supported; if provided, the mask value will be ignored. For firmware versions older than MC10.7.0 (which only offer the legacy ABIs for configuring distribution keys), flow steering without masking support remains unavailable.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-