- 26 July 2020, 1 commit

By Andrii Nakryiko
Now that BPF program/link management is centralized in generic net_device code, kernel code never queries program id from drivers, so XDP_QUERY_PROG/XDP_QUERY_PROG_HW commands are unnecessary. This patch removes all the implementations of those commands in the kernel, along with xdp_attachment_query(). This patch was compile-tested on allyesconfig.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200722064603.3350758-10-andriin@fb.com
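
As a hedged sketch of what a driver's ndo_bpf handler reduces to after this change (the dpaa2_eth_setup_xdp() helper name is an assumption; only the shape of the dispatch is the point):

```c
/* With the query commands gone, only program setup is handled here; the core
 * net_device code now answers "which program is attached" itself. */
static int dpaa2_eth_xdp(struct net_device *dev, struct netdev_bpf *xdp)
{
	switch (xdp->command) {
	case XDP_SETUP_PROG:
		return dpaa2_eth_setup_xdp(dev, xdp->prog); /* assumed helper */
	default:
		return -EINVAL;
	}
}
```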

- 22 July 2020, 2 commits

By Ioana Ciornei
React to TC_SETUP_QDISC_TBF and configure the egress shaper as appropriate with the maximum rate and burst size requested by the user. TBF can only be offloaded on DPAA2 when it's the root qdisc, i.e. it acts as a per-port shaper.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
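
A hedged sketch of such a TBF handler, assuming a dpaa2_eth_set_tx_shaping() helper for the actual firmware configuration; the root-qdisc check and the REPLACE/DESTROY dispatch are the substance:

```c
static int dpaa2_eth_setup_tbf(struct net_device *net_dev,
			       struct tc_tbf_qopt_offload *p)
{
	/* only a root qdisc maps onto the per-port egress shaper */
	if (p->parent != TC_H_ROOT)
		return -EOPNOTSUPP;

	switch (p->command) {
	case TC_TBF_REPLACE:
		return dpaa2_eth_set_tx_shaping(net_dev,		/* assumed helper */
				p->replace_params.rate.rate_bytes_ps,
				p->replace_params.max_size);
	case TC_TBF_DESTROY:
		return dpaa2_eth_set_tx_shaping(net_dev, 0, 0);	/* disable shaping */
	default:
		return -EOPNOTSUPP;
	}
}
```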

By Ioana Ciornei
Move the setup done for MQPRIO into a separate function so that with the addition of another offload we do not crowd dpaa2_eth_setup_tc(). After this restructuring it's easier to see what is supported in terms of Qdisc offloading.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 16 July 2020, 1 commit

By Ioana Ciornei
The fsl_mc_get_endpoint() function can return an error or directly a NULL pointer in case the peer device is not under the root DPRC container. Handle this case as well; otherwise it would lead to a NULL pointer dereference when trying to access the peer fsl_mc_device.
Fixes: 71947923 ("dpaa2-eth: add MAC/PHY support through phylink")
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
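
A hedged sketch of the check (function and type names follow the driver's MAC-connect path but are assumptions here):

```c
static int dpaa2_eth_connect_mac(struct dpaa2_eth_priv *priv)
{
	struct fsl_mc_device *dpmac_dev;

	dpmac_dev = fsl_mc_get_endpoint(priv->dpni_dev);

	/* Both an ERR_PTR and a plain NULL mean "no usable peer", so bail out
	 * before dereferencing the returned device. */
	if (IS_ERR_OR_NULL(dpmac_dev) ||
	    dpmac_dev->dev.type != &fsl_mc_bus_dpmac_type)
		return 0;

	/* ... MAC/phylink setup continues here ... */
	return 0;
}
```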

- 07 July 2020, 1 commit

By Ioana Ciornei
On link down, the draining of the S/G cache should be done on all _possible_ CPUs, not just the ones that are online at that moment. Fix this by changing the iterator.
Fixes: d70446ee ("dpaa2-eth: send a scatter-gather FD instead of realloc-ing")
Reported-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
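
A minimal sketch of the iterator distinction, using a hypothetical per-CPU cache in place of the driver's S/G cache:

```c
#include <linux/cpumask.h>
#include <linux/percpu.h>
#include <linux/slab.h>

/* Hypothetical per-CPU software cache standing in for the driver's S/G cache. */
struct sg_cache {
	int count;
	void *bufs[16];
};

static void drain_sg_caches(struct sg_cache __percpu *cache)
{
	struct sg_cache *c;
	int cpu;

	/* Walk every CPU that may ever have filled a cache, not only the ones
	 * online right now; otherwise entries cached on a CPU that has since
	 * gone offline would be leaked. */
	for_each_possible_cpu(cpu) {
		c = per_cpu_ptr(cache, cpu);
		while (c->count)
			kfree(c->bufs[--c->count]);
	}
}
```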

- 30 June 2020, 2 commits

By Ioana Ciornei
With the previous commit, in case of insufficient SKB headroom on the Tx path we now send an S/G frame descriptor instead of reallocating the SKB. Export the number of occurrences of this case as a per-CPU counter (in debugfs) and as a total in the ethtool statistics - "tx converted sg frames".
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Ioana Ciornei
Instead of realloc-ing the skb on the Tx path when the provided headroom is smaller than the HW requirements, create a Scatter/Gather frame descriptor with only one entry. Remove the '[drv] tx realloc frames' counter exposed previously through ethtool since it is no longer used.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 26 June 2020, 2 commits

By Ioana Ciornei
We should keep retrying to acquire buffers through the software portals as long as the function returns -EBUSY and the number of retries is __below__ DPAA2_ETH_SWP_BUSY_RETRIES.
Fixes: ef17bd7c ("dpaa2-eth: Avoid unbounded while loops")
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
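
A minimal sketch of the intended retry semantics; acquire_bufs() is a hypothetical stand-in for the software-portal acquire call, which reports a busy portal with -EBUSY:

```c
#define DPAA2_ETH_SWP_BUSY_RETRIES	1000	/* illustrative value */

static int acquire_bufs(void);	/* hypothetical portal acquire call */

static int acquire_with_retries(void)
{
	int retries = 0;
	int err;

	do {
		err = acquire_bufs();
		if (err != -EBUSY)
			return err;	/* success or a real error: stop retrying */
	} while (++retries < DPAA2_ETH_SWP_BUSY_RETRIES);

	return -EBUSY;	/* portal stayed busy for the whole retry budget */
}
```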

By Ioana Ciornei
Before passing the result of skb_to_sgvec() to dma_map_sg(), check whether an error was returned.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
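
A hedged sketch of the check; the wrapper function is invented for illustration, but skb_to_sgvec() returning a negative error code and dma_map_sg() returning 0 on failure are the relevant kernel semantics:

```c
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/skbuff.h>

/* Illustrative helper: build a scatterlist from an skb and DMA-map it. */
static int map_skb_sg(struct device *dev, struct sk_buff *skb,
		      struct scatterlist *scl, int nents)
{
	int num_sg, num_dma_bufs;

	sg_init_table(scl, nents);

	/* skb_to_sgvec() can fail (e.g. -EMSGSIZE for too many fragments);
	 * check it before handing the count to dma_map_sg(). */
	num_sg = skb_to_sgvec(skb, scl, 0, skb->len);
	if (num_sg < 0)
		return num_sg;

	num_dma_bufs = dma_map_sg(dev, scl, num_sg, DMA_BIDIRECTIONAL);
	if (!num_dma_bufs)
		return -ENOMEM;

	return num_dma_bufs;
}
```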

- 12 June 2020, 1 commit

By Xu Wang
The size of a memory allocation was computed by multiplying an element count with an element size, indicating that an array is being allocated. Use the dedicated helper devm_kcalloc() for this instead.
Signed-off-by: Xu Wang <vulab@iscas.ac.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
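
In essence (variable names are illustrative):

```c
/* Before: the element count is folded into the size via a multiplication. */
arr = devm_kzalloc(dev, sizeof(*arr) * count, GFP_KERNEL);

/* After: devm_kcalloc() states "count elements of this size" directly and
 * additionally checks the multiplication for overflow. */
arr = devm_kcalloc(dev, count, sizeof(*arr), GFP_KERNEL);
```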

- 02 June 2020, 6 commits

By Ioana Ciornei
Leave congestion group taildrop enabled for all traffic classes when PFC is enabled. Notification threshold is low enough such that it will be hit first and this also ensures that FQs on traffic classes which are not PFC enabled won't drain the buffer pool. FQ taildrop threshold is kept disabled as long as any form of flow control is on. Since FQ taildrop works with bytes, not number of frames, we can't guarantee it will not interfere with the congestion notification mechanism for all frame sizes.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Ioana Ciornei
Add support in dpaa2-eth for PFC (Priority Flow Control) through the DCB ops. Instruct the hardware to respond to received PFC frames. Current firmware doesn't allow us to selectively enable PFC on the Rx side for some priorities only, so we will react to all incoming PFC frames (and stop transmitting on the traffic classes specified in the frame). Also, configure the hardware to generate PFC frames based on Rx congestion notifications. When a certain number of frames accumulate in the ingress queues corresponding to a traffic class, priority flow control frames are generated for that TC. The number of PFC traffic classes available can be queried through lldptool. Which of those traffic classes have PFC enabled is likewise controlled through the same dcbnl_rtnl_ops callbacks.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
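
A hedged sketch of the dcbnl hook reporting PFC capability and state; the priv->pfc field and dpaa2_eth_tc_count() helper are assumptions about where the driver keeps this information:

```c
static int dpaa2_eth_dcbnl_ieee_getpfc(struct net_device *net_dev,
				       struct ieee_pfc *pfc)
{
	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);

	pfc->pfc_cap = dpaa2_eth_tc_count(priv);	/* classes that can do PFC */
	pfc->pfc_en = priv->pfc.pfc_en;			/* bitmap of TCs with PFC on */

	return 0;
}
```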

By Ioana Radulescu
The increase in number of ingress frame queues means we now risk depleting the buffer pool before the FQ taildrop kicks in. Congestion group taildrop allows us to control the number of frames that can accumulate on a group of Rx frame queues belonging to the same traffic class. This setting coexists with the frame queue based taildrop: whichever limit gets hit first triggers the frame drop.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Ioana Radulescu
Add convenience helper functions that determine whether Rx/Tx pause frames are enabled based on link state flags received from firmware.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
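
A hedged sketch of such helpers, keyed off the DPNI_LINK_OPT_PAUSE / DPNI_LINK_OPT_ASYM_PAUSE firmware flags; the mapping matches the PAUSE/ASYM_PAUSE table in the 30 August 2019 flow control commit further down:

```c
static bool dpaa2_eth_rx_pause_enabled(u64 link_options)
{
	return !!(link_options & DPNI_LINK_OPT_PAUSE);
}

static bool dpaa2_eth_tx_pause_enabled(u64 link_options)
{
	bool pause = !!(link_options & DPNI_LINK_OPT_PAUSE);
	bool asym = !!(link_options & DPNI_LINK_OPT_ASYM_PAUSE);

	/* Tx pause is enabled when exactly one of the two flags is set. */
	return pause ^ asym;
}
```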

By Ioana Radulescu
Configure static ingress classification based on VLAN PCP field. If the DPNI doesn't have enough traffic classes to accommodate all priority levels, the lowest ones end up on TC 0 (default on miss).
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Ioana Radulescu
The firmware reserves for each DPNI a number of RX frame queues equal to the number of configured flows x number of configured traffic classes. Current driver configuration directs all incoming traffic to FQs corresponding to TC0, leaving all other priority levels unused. Start adding support for multiple ingress traffic classes, by configuring the FQs associated with all priority levels, not just TC0. All settings that are per-TC, such as those related to hashing and flow steering, are also updated.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 17 May 2020, 1 commit

By Ioana Ciornei
Add driver-level bulking to the XDP_TX action. An array of frame descriptors is held for each Tx frame queue and populated accordingly when the action returned by the XDP program is XDP_TX. The frames will actually be enqueued only when the array is filled. At the end of the NAPI cycle, a flush on the queued frames is performed in order to enqueue the remaining FDs.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 16 May 2020, 1 commit

By Ioana Ciornei
Depending on the WRIOP version, the buffer size on the RX path must be a multiple of 64 or 256. Handle this restriction properly by aligning down the buffer size to the necessary value. Also, use the new, dynamically computed buffer size instead of the compile-time one.
Fixes: 27c87486 ("dpaa2-eth: Use a single page per Rx buffer")
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
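
The essential point is rounding down rather than up (field and variable names assumed):

```c
/* buf_align is 64 or 256 depending on the WRIOP revision; rounding *down*
 * keeps the buffer inside the page, e.g. ALIGN_DOWN(3648, 256) == 3584. */
priv->rx_buf_size = ALIGN_DOWN(DPAA2_ETH_RX_BUF_SIZE, buf_align);
```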

- 15 May 2020, 1 commit

By Jesper Dangaard Brouer
The dpaa2-eth driver reserves some headroom used for the hardware and software annotation area in RX/TX buffers. Thus, xdp.data_hard_start doesn't start at a page boundary. When XDP is configured, the area reserved via dpaa2_fd_get_offset(fd) is 448 bytes, of which XDP has reserved 256 bytes. As frame_sz is calculated as an offset from xdp_buff.data_hard_start, it must be adjusted down from the full PAGE_SIZE == DPAA2_ETH_RX_BUF_RAW_SIZE. When doing XDP_REDIRECT, the driver doesn't need this reserved headroom any longer and allows xdp_do_redirect() to use it. This is an advantage for the driver's own ndo_xdp_xmit, as it uses part of this headroom for itself. The patch also adjusts frame_sz in this case. The driver cannot support XDP data_meta, because it uses the headroom just before xdp.data for struct dpaa2_eth_swa (DPAA2_ETH_SWA_SIZE=64) when transmitting the packet. When transmitting an xdp_frame in dpaa2_eth_xdp_xmit_frame (called via ndo_xdp_xmit), it uses this area to store a pointer to the xdp_frame and the dma_size, which are used in TX completion (free_tx_fd) to return the frame via xdp_return_frame().
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Cc: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Link: https://lore.kernel.org/bpf/158945339348.97035.8562488847066908856.stgit@firesoul

- 08 May 2020, 1 commit

By Ioana Ciornei
Create an independent function that takes a particular frame queue and an array of frame descriptors and tries to enqueue them until it hits the maximum number of retries. The same function will also be used in the next patch on the XDP_TX path. Also, create the dpaa2_eth_xdp_fds structure to incorporate the array of FDs as well as the number of FDs already populated.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
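
A hedged sketch of such a retry helper; the enqueue callback is passed in explicitly here because the real driver hook and FD layout are only partly visible from this log:

```c
#define DPAA2_ETH_ENQUEUE_RETRIES	10	/* illustrative retry budget */

struct dpaa2_fd;	/* hardware frame descriptor, treated as opaque here */

/* enqueue_cb() stands in for the driver's enqueue hook; it is assumed to
 * report in *done how many of the requested frames it accepted. */
static int xdp_flush_fds(void *fq, struct dpaa2_fd *fds, int num_fds,
			 int (*enqueue_cb)(void *fq, struct dpaa2_fd *fds,
					   int num, int *done))
{
	int total = 0, done = 0, retries = 0, err;

	while (total < num_fds && retries < DPAA2_ETH_ENQUEUE_RETRIES) {
		err = enqueue_cb(fq, &fds[total], num_fds - total, &done);
		if (err == -EBUSY) {
			retries++;	/* portal busy: try the same frames again */
			continue;
		}
		if (err)
			break;		/* hard error: stop enqueuing */
		total += done;
	}

	return total;	/* frames actually handed to hardware */
}
```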

- 01 May 2020, 1 commit

By Wei Yongjun
Fix to return negative error code -ENOMEM from the error handling case instead of 0, as done elsewhere in this function.
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
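
The pattern being fixed, in hedged generic form:

```c
buf = kzalloc(size, GFP_KERNEL);
if (!buf) {
	err = -ENOMEM;	/* previously err stayed 0, signalling bogus success */
	goto err_free;
}
```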

- 26 April 2020, 1 commit

By Ioana Ciornei
Compute the average number of frames processed for each CDAN (Channel Data Availability Notification) and export it to the detailed per-channel stats in debugfs.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 25 April 2020, 1 commit

By Jesper Dangaard Brouer
A driver's ndo_setup_tc callback should return -EOPNOTSUPP when it cannot support the qdisc type. Other return values will result in failing the qdisc setup. This leads to the noop qdisc getting assigned, which will drop all TX packets on the interface.
Fixes: ab1e6de2 ("dpaa2-eth: Add mqprio support")
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
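
A hedged sketch of the resulting dispatch; with -EOPNOTSUPP in the default case the stack falls back to software queueing instead of installing the packet-dropping noop_qdisc:

```c
static int dpaa2_eth_setup_tc(struct net_device *net_dev,
			      enum tc_setup_type type, void *type_data)
{
	switch (type) {
	case TC_SETUP_QDISC_MQPRIO:
		return dpaa2_eth_setup_mqprio(net_dev, type_data); /* assumed helper */
	default:
		return -EOPNOTSUPP;	/* was -EINVAL, which failed qdisc setup */
	}
}
```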

- 23 April 2020, 4 commits

By Ioana Ciornei
Take advantage of the bulk enqueue feature in .ndo_xdp_xmit. We cannot use XDP_XMIT_FLUSH since the architecture is not capable of storing all the frames dequeued in a NAPI cycle, so instead we enqueue all the frames received in an ndo_xdp_xmit call right away. After setting up all FDs for the xdp_frames received, enqueue multiple frames at a time until all are sent or the maximum number of retries is hit.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Ioana Ciornei
Instead of having a function that both creates a frame descriptor from an xdp_frame and enqueues it, split this into two stages. Add dpaa2_eth_xdp_create_fd(), which just transforms an xdp_frame into an FD, while the actual enqueue callback is called directly from the ndo for each frame. This is particularly useful in conjunction with bulk enqueue.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Ioana Ciornei
Update the dpaa2-eth driver to use the bulk enqueue function introduced with the change to QBMAN ring mode. At the moment, no functional changes are made but rather the driver just transitions to the new interface while still enqueuing just one frame at a time.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Ioana Ciornei
The enqueue dpaa2-eth callback now returns the number of successfully enqueued frames. This is a preliminary patch necessary for adding support for bulk ring mode enqueue.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 28 February 2020, 1 commit

By Russell King
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Acked-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 18 November 2019, 1 commit

By Andrii Nakryiko
Similarly to bpf_map's refcnt/usercnt, convert bpf_prog's refcnt to atomic64 and remove the artificial 32k limit. This allows making bpf_prog's refcounting non-failing, simplifying the logic of bpf_prog_add/bpf_prog_inc users. Compilation was validated by running an allyesconfig kernel build.
Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191117172806.2195367-3-andriin@fb.com
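
In reduced, schematic form (the real code manipulates prog->aux->refcnt; the 32-bit variant shown for contrast is a paraphrase):

```c
/* Before: bounded 32-bit refcount, callers had to handle failure. */
if (atomic_inc_return(&refcnt32) > BPF_MAX_REFCNT) {
	atomic_dec(&refcnt32);
	return ERR_PTR(-EBUSY);
}

/* After: a 64-bit refcount cannot realistically overflow, so taking a
 * reference is a plain, non-failing increment. */
atomic64_inc(&refcnt64);
```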

- 13 November 2019, 1 commit

By Ioana Ciornei
The setup_dpio() function tries to allocate a number of channels equal to the number of CPUs online. When there are not enough DPCON objects already probed, the function will return EPROBE_DEFER. When this happens, the already allocated channels are not freed. This prevents the driver from probing properly the next time around. Fix this by freeing the channels on the error path.
Fixes: d7f5a9d8 ("dpaa2-eth: defer probe on object allocate")
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 01 November 2019, 2 commits

By Ioana Ciornei
The dpaa2-eth driver now has support for connecting to its associated PHY device found through standard OF bindings. This happens when the DPNI object (that the driver probes on) gets connected to a DPMAC. When that happens, the device tree is looked up by the DPMAC ID, and the associated PHY bindings are found. The old logic of handling the net device's link state by hand still needs to be kept, as the DPNI can be connected to other devices on the bus than a DPMAC: another DPNI, DPSW ports, etc. This logic is only engaged when there is no DPMAC (and therefore no phylink instance) attached. The MC firmware supports multiple types of DPMAC links: TYPE_FIXED, TYPE_PHY. The TYPE_FIXED mode does not require any DPMAC management from the Linux side, and as such the driver will not handle such a DPMAC. Although PHYLINK typically handles SFP cages and in-band AN modes, for the moment the driver only supports the RGMII interfaces found on the LX2160A. Support for other modes will come later.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Ioana Ciornei
Currently the function is called at every link up event, although the FQID values will only change when the DPNI is disconnected from the current object and reconnected to a different one. The patch also avoids the forward declaration of update_tx_fqids.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 18 October 2019, 2 commits

By Ioana Radulescu
Depending on when MC connects the DPNI to a MAC, Tx FQIDs may not be available during probe time. Read the FQIDs each time the link goes up to avoid using invalid values. In case an error occurs or an invalid value is retrieved, fall back to QDID-based enqueueing.
Fixes: 1fa0f68c ("dpaa2-eth: Use FQ-based DPIO enqueue API")
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Florin Chiculita
Add an IRQ for the DPNI endpoint change event, resolving the issue where a dynamically created DPNI gets a randomly generated hw address when the endpoint is a DPMAC object.
Signed-off-by: Florin Chiculita <florinlaurentiu.chiculita@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 07 October 2019, 2 commits

By Ioana Radulescu
Throughout the driver there are several places where we wait indefinitely for DPIO portal commands to be executed, while the portal returns a busy response code. Even though in theory we are guaranteed the portals become available eventually, in practice the QBMan hardware module may become unresponsive in various corner cases. Make sure we can never get stuck in an infinite while loop by adding a retry counter for all portal commands.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Ioana Radulescu
Remove one function call whose result was not used anywhere.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 05 September 2019, 1 commit

By Ioana Radulescu
Starting with firmware version MC10.18.0, a new counter for in-flight Tx frames is offered. Use it when bringing down the interface to determine when all pending Tx frames have been processed by hardware, instead of sleeping for a fixed amount of time.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
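
A hedged sketch of what the ifdown path can do with such a counter; the query helper name, retry count, and polling interval are assumptions:

```c
/* Poll the in-flight Tx frame counter instead of sleeping a fixed time. */
static void wait_for_egress_drain(struct dpaa2_eth_priv *priv)
{
	int retries = 10;
	u32 pending;

	do {
		if (dpaa2_eth_query_tx_pending(priv, &pending))	/* assumed helper */
			return;		/* counter unsupported: give up quietly */
		if (!pending)
			return;		/* hardware has processed all Tx frames */
		msleep(100);
	} while (--retries);
}
```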

- 30 August 2019, 2 commits

By Ioana Radulescu
Starting with firmware version MC10.18.0, we have support for L2 flow control. Asymmetrical configuration (Rx or Tx only) is supported, but not pause frame autonegotiation. Pause frame configuration is done via ethtool. By default, we start with flow control enabled on both Rx and Tx. Changes are propagated to hardware through firmware commands, using two flags (PAUSE, ASYM_PAUSE) to specify the Rx and Tx pause configuration, as follows:

PAUSE | ASYM_PAUSE | Rx pause | Tx pause
------+------------+----------+---------
  0   |     0      | disabled | disabled
  0   |     1      | disabled | enabled
  1   |     0      | enabled  | enabled
  1   |     1      | enabled  | disabled

The hardware can automatically send pause frames when the number of buffers in the pool goes below a predefined threshold. Due to this, flow control is incompatible with Rx frame queue taildrop (both mechanisms target the case when processing of ingress frames can't keep up with the Rx rate; for large frames, the number of buffers in the pool may never get low enough to trigger pause frames as long as taildrop is enabled). So we set pause frame generation and Rx FQ taildrop as mutually exclusive.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Ioana Radulescu
Whenever a link state change occurs, we get notified and save the new link settings in the device's private data. In ethtool get_link_ksettings, use the stored state instead of interrogating the firmware each time.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 13 June 2019, 1 commit

By Ioana Radulescu
Implement mqprio qdisc support by mapping traffic classes to different hardware enqueue priorities. The maximum number of supported traffic classes is an attribute of each DPNI object. The traffic classes map to hardware priorities from highest (0) to lowest (highest prio number). The skb priority information received from the stack is used to select the hardware Tx queue on which to enqueue the frame.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Bogdan Purcareata <bogdan.purcareata@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
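
A hedged sketch of an mqprio setup matching this description; dpaa2_eth_queue_count()/dpaa2_eth_tc_count() are assumed driver helpers, while netdev_set_num_tc()/netdev_set_tc_queue() are the standard way to expose one band of Tx queues per traffic class:

```c
static int dpaa2_eth_setup_mqprio(struct net_device *net_dev,
				  struct tc_mqprio_qopt *mqprio)
{
	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
	u8 num_queues = dpaa2_eth_queue_count(priv);	/* queues per class */
	u8 num_tc = mqprio->num_tc;
	int i;

	mqprio->hw = TC_MQPRIO_HW_OFFLOAD_TCS;

	if (!num_tc) {
		/* tc qdisc removed: collapse back to a single band */
		netdev_reset_tc(net_dev);
		return netif_set_real_num_tx_queues(net_dev, num_queues);
	}

	if (num_tc > dpaa2_eth_tc_count(priv))
		return -EOPNOTSUPP;	/* the DPNI attribute caps the TC count */

	netdev_set_num_tc(net_dev, num_tc);
	netif_set_real_num_tx_queues(net_dev, num_tc * num_queues);

	/* each traffic class gets its own contiguous band of Tx queues */
	for (i = 0; i < num_tc; i++)
		netdev_set_tc_queue(net_dev, i, num_queues, i * num_queues);

	return 0;
}
```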