- 23 April 2020, 4 commits
-
-
By Ioana Ciornei
Take advantage of the bulk enqueue feature in .ndo_xdp_xmit. We cannot use XDP_XMIT_FLUSH since the architecture is not capable of storing all the frames dequeued in a NAPI cycle, so instead we enqueue all the frames received in an ndo_xdp_xmit call right away. After setting up all FDs for the received xdp_frames, enqueue multiple frames at a time until all are sent or the maximum number of retries is hit.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
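A minimal sketch of the retry loop described above, assuming a hypothetical enqueue hook that reports how many frames the portal accepted (the retry constant and function names are illustrative, not the driver's identifiers):

```c
#include <linux/errno.h>
#include <soc/fsl/dpaa2-fd.h>

#define SKETCH_ENQUEUE_RETRIES	10	/* illustrative retry budget */

/* enqueue() stands in for the driver's bulk enqueue hook and returns how
 * many of the supplied FDs the portal accepted (<= 0 when busy). */
static int xdp_xmit_bulk_sketch(struct dpaa2_fd *fds, int total,
				int (*enqueue)(struct dpaa2_fd *fds, int num))
{
	int sent = 0, retries = 0;

	while (sent < total && retries < SKETCH_ENQUEUE_RETRIES) {
		int done = enqueue(&fds[sent], total - sent);

		if (done <= 0) {
			retries++;	/* portal busy, try again */
			continue;
		}
		sent += done;
	}

	return sent;	/* the caller unmaps/frees any frames left unsent */
}
```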
-
By Ioana Ciornei
Instead of having a function that both creates a frame descriptor from an xdp_frame and enqueues it, split this into two stages. Add dpaa2_eth_xdp_create_fd, which just transforms an xdp_frame into an FD, while the actual enqueue callback is called directly from the ndo for each frame. This is particularly useful in conjunction with bulk enqueue.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
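A rough sketch of the resulting two-stage flow; dpaa2_eth_xdp_create_fd is the helper named in the commit, but its signature as shown here is an assumption:

```c
#include <linux/netdevice.h>
#include <net/xdp.h>
#include <soc/fsl/dpaa2-fd.h>

/* Assumed signature of the helper named in the commit message. */
static int dpaa2_eth_xdp_create_fd(struct net_device *net_dev,
				   struct xdp_frame *xdpf,
				   struct dpaa2_fd *fd);

/* Sketch of the split ndo_xdp_xmit flow: build all FDs first, then let a
 * separate enqueue step (per frame or bulk) push them to hardware. */
static int xdp_xmit_sketch(struct net_device *net_dev,
			   struct xdp_frame **frames, int n,
			   struct dpaa2_fd *fds)
{
	int i, err;

	for (i = 0; i < n; i++) {
		err = dpaa2_eth_xdp_create_fd(net_dev, frames[i], &fds[i]);
		if (err)
			break;		/* enqueue only the FDs built so far */
	}

	return i;	/* number of FDs ready for the enqueue step */
}
```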
-
By Ioana Ciornei
Update the dpaa2-eth driver to use the bulk enqueue function introduced with the change to QBMAN ring mode. At the moment, no functional changes are made; the driver just transitions to the new interface while still enqueuing one frame at a time.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Ioana Ciornei
The dpaa2-eth enqueue callback now returns the number of successfully enqueued frames. This is a preliminary patch necessary for adding support for bulk ring mode enqueue.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 28 February 2020, 1 commit
-
-
By Russell King
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Acked-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 18 November 2019, 1 commit
-
-
By Andrii Nakryiko
Similarly to bpf_map's refcnt/usercnt, convert bpf_prog's refcnt to atomic64 and remove the artificial 32k limit. This makes bpf_prog's refcounting non-failing and simplifies the logic of bpf_prog_add/bpf_prog_inc users. Compilation was validated by running an allyesconfig kernel build.
Suggested-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191117172806.2195367-3-andriin@fb.com
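The core of the change is the switch from a bounded 32-bit counter to atomic64_t, which removes the overflow check and lets taking a reference never fail; a simplified sketch (not the exact kernel code):

```c
#include <linux/atomic.h>

struct prog_aux_sketch {
	atomic64_t refcnt;
};

/* With atomic64 the counter cannot realistically overflow, so taking a
 * reference always succeeds and callers need no error path. */
static void prog_inc_sketch(struct prog_aux_sketch *aux)
{
	atomic64_inc(&aux->refcnt);
}

static bool prog_put_sketch(struct prog_aux_sketch *aux)
{
	return atomic64_dec_and_test(&aux->refcnt);	/* true: free the prog */
}
```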
-
- 13 November 2019, 1 commit
-
-
By Ioana Ciornei
The setup_dpio() function tries to allocate a number of channels equal to the number of online CPUs. When there are not enough DPCON objects already probed, the function returns EPROBE_DEFER. When this happens, the already allocated channels are not freed, which prevents the driver from probing properly the next time around. Fix this by freeing the channels on the error path.
Fixes: d7f5a9d8 ("dpaa2-eth: defer probe on object allocate")
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
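A sketch of the error-path pattern the fix applies; the private-data struct and the channel helpers below are illustrative stand-ins, not the driver's real identifiers:

```c
#include <linux/cpumask.h>
#include <linux/errno.h>

struct sketch_priv { int num_channels; };

static int alloc_channel(struct sketch_priv *priv, int i);	/* hypothetical */
static void free_channel(struct sketch_priv *priv, int i);	/* hypothetical */

static int setup_dpio_sketch(struct sketch_priv *priv)
{
	int i, err;

	for (i = 0; i < num_online_cpus(); i++) {
		err = alloc_channel(priv, i);
		if (err)		/* includes -EPROBE_DEFER */
			goto err_free;	/* don't leak the channels already allocated */
		priv->num_channels++;
	}
	return 0;

err_free:
	for (i = 0; i < priv->num_channels; i++)
		free_channel(priv, i);
	priv->num_channels = 0;
	return err;
}
```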
-
- 01 November 2019, 2 commits
-
-
By Ioana Ciornei
The dpaa2-eth driver now has support for connecting to its associated PHY device found through standard OF bindings. This happens when the DPNI object (that the driver probes on) gets connected to a DPMAC. When that happens, the device tree is looked up by the DPMAC ID and the associated PHY bindings are found. The old logic of handling the net device's link state by hand still needs to be kept, as the DPNI can be connected to devices on the bus other than a DPMAC: another DPNI, DPSW ports, etc. This logic is only engaged when there is no DPMAC (and therefore no phylink instance) attached. The MC firmware supports multiple types of DPMAC links: TYPE_FIXED and TYPE_PHY. The TYPE_FIXED mode does not require any DPMAC management from the Linux side, so the driver will not handle such a DPMAC. Although PHYLINK typically handles SFP cages and in-band AN modes, for the moment the driver only supports the RGMII interfaces found on the LX2160A. Support for other modes will come later.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Ioana Ciornei
Currently the function is called at every link up event, although the FQID values only change when the DPNI is disconnected from the current object and reconnected to a different one. The patch also avoids the forward declaration of update_tx_fqids.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 18 October 2019, 2 commits
-
-
By Ioana Radulescu
Depending on when the MC connects the DPNI to a MAC, Tx FQIDs may not be available at probe time. Read the FQIDs each time the link goes up to avoid using invalid values. In case an error occurs or an invalid value is retrieved, fall back to QDID-based enqueueing.
Fixes: 1fa0f68c ("dpaa2-eth: Use FQ-based DPIO enqueue API")
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
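Conceptually the link-up path becomes something like the following; every name below is an approximation used only to show the fallback between FQID- and QDID-based enqueue:

```c
/* Everything here is illustrative; only the fallback logic matters. */
struct sketch_priv {
	int (*enqueue)(struct sketch_priv *priv, void *fd);
};

static int enqueue_by_fqid(struct sketch_priv *priv, void *fd);	/* hypothetical */
static int enqueue_by_qdid(struct sketch_priv *priv, void *fd);	/* hypothetical */
static int update_tx_fqids(struct sketch_priv *priv);			/* hypothetical */

static void on_link_up_sketch(struct sketch_priv *priv)
{
	/* Refresh FQIDs on every link-up; if the query fails or returns
	 * invalid values, fall back to QDID-based enqueueing. */
	if (update_tx_fqids(priv))
		priv->enqueue = enqueue_by_qdid;
	else
		priv->enqueue = enqueue_by_fqid;
}
```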
-
By Florin Chiculita
Add an IRQ for the DPNI endpoint change event, resolving the issue where a dynamically created DPNI gets a randomly generated hw address when the endpoint is a DPMAC object.
Signed-off-by: Florin Chiculita <florinlaurentiu.chiculita@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 07 October 2019, 2 commits
-
-
By Ioana Radulescu
Throughout the driver there are several places where we wait indefinitely for DPIO portal commands to be executed while the portal returns a busy response code. Even though in theory we are guaranteed the portals become available eventually, in practice the QBMan hardware module may become unresponsive in various corner cases. Make sure we can never get stuck in an infinite while loop by adding a retry counter for all portal commands.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
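The pattern is a bounded busy-wait instead of an unbounded one; a minimal sketch with an illustrative retry limit and a hypothetical per-attempt enqueue call:

```c
#include <linux/errno.h>

#define SKETCH_SWP_BUSY_RETRIES	1000	/* illustrative bound */

static int portal_enqueue_once(void);	/* hypothetical: -EBUSY while portal is busy */

static int enqueue_with_retries_sketch(void)
{
	int retries = SKETCH_SWP_BUSY_RETRIES;
	int err;

	do {
		err = portal_enqueue_once();
	} while (err == -EBUSY && --retries);

	return err;	/* still -EBUSY if the retry budget was exhausted */
}
```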
-
By Ioana Radulescu
Remove one function call whose result was not used anywhere.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 05 September 2019, 1 commit
-
-
By Ioana Radulescu
Starting with firmware version MC10.18.0, a new counter for in-flight Tx frames is available. Use it when bringing down the interface to determine when all pending Tx frames have been processed by hardware, instead of sleeping a fixed amount of time.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
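The interface-down path can then poll that counter with a bounded timeout rather than sleeping a fixed interval; a sketch in which the MC query helper and the retry/interval values are assumptions:

```c
#include <linux/delay.h>
#include <linux/types.h>

static int query_tx_pending_frames(void *priv, u32 *pending);	/* hypothetical MC query */

/* Sketch: wait for the reported in-flight Tx count to reach zero,
 * giving up after a bounded number of polls. */
static void wait_for_egress_drain_sketch(void *priv)
{
	unsigned int retries = 10;
	u32 pending;

	do {
		if (query_tx_pending_frames(priv, &pending) || !pending)
			return;		/* counter unavailable or already drained */
		msleep(100);
	} while (--retries);
}
```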
-
- 30 August 2019, 2 commits
-
-
By Ioana Radulescu
Starting with firmware version MC10.18.0, we have support for L2 flow control. Asymmetrical configuration (Rx or Tx only) is supported, but pause frame autonegotiation is not. Pause frame configuration is done via ethtool. By default, we start with flow control enabled on both Rx and Tx. Changes are propagated to hardware through firmware commands, using two flags (PAUSE, ASYM_PAUSE) to specify the Rx and Tx pause configuration, as follows:

PAUSE | ASYM_PAUSE | Rx pause | Tx pause
----------------------------------------
  0   |     0      | disabled | disabled
  0   |     1      | disabled | enabled
  1   |     0      | enabled  | enabled
  1   |     1      | enabled  | disabled

The hardware can automatically send pause frames when the number of buffers in the pool goes below a predefined threshold. Because of this, flow control is incompatible with Rx frame queue taildrop (both mechanisms target the case when processing of ingress frames can't keep up with the Rx rate; for large frames, the number of buffers in the pool may never get low enough to trigger pause frames as long as taildrop is enabled). So we make pause frame generation and Rx FQ taildrop mutually exclusive.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
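Following the table above, ethtool's rx/tx pause booleans map onto the two flags as rx = PAUSE and tx = PAUSE XOR ASYM_PAUSE; a sketch of the conversion, with illustrative flag names standing in for the firmware ABI's:

```c
#include <linux/bits.h>
#include <linux/types.h>

/* Illustrative flag bits; the firmware ABI defines the real ones. */
#define SKETCH_LINK_OPT_PAUSE		BIT(0)
#define SKETCH_LINK_OPT_ASYM_PAUSE	BIT(1)

static u64 pause_to_link_options(bool rx_pause, bool tx_pause)
{
	u64 opts = 0;

	if (rx_pause)
		opts |= SKETCH_LINK_OPT_PAUSE;
	if (rx_pause != tx_pause)
		opts |= SKETCH_LINK_OPT_ASYM_PAUSE;

	return opts;
}

static void link_options_to_pause(u64 opts, bool *rx_pause, bool *tx_pause)
{
	*rx_pause = !!(opts & SKETCH_LINK_OPT_PAUSE);
	*tx_pause = *rx_pause ^ !!(opts & SKETCH_LINK_OPT_ASYM_PAUSE);
}
```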
-
By Ioana Radulescu
Whenever a link state change occurs, we get notified and save the new link settings in the device's private data. In ethtool get_link_ksettings, use the stored state instead of interrogating the firmware each time.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 June 2019, 3 commits
-
-
By Ioana Radulescu
Implement mqprio qdisc support by mapping traffic classes to different hardware enqueue priorities. The maximum number of supported traffic classes is an attribute of each DPNI object. The traffic classes map to hardware priorities from highest (0) to lowest (the highest priority number). The skb priority information received from the stack is used to select the hardware Tx queue on which to enqueue the frame.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Bogdan Purcareata <bogdan.purcareata@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
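Selecting the hardware queue then boils down to combining the traffic class derived from skb->priority with a per-class flow queue index; a simplified sketch of such a select-queue helper (the queue math is an assumption, not the driver's exact formula):

```c
#include <linux/netdevice.h>

/* Sketch: num_queues is the number of flow queues per traffic class. */
static u16 select_queue_sketch(struct net_device *dev, struct sk_buff *skb,
			       u16 num_queues)
{
	u8 tc = netdev_get_prio_tc_map(dev, skb->priority);
	u16 flow = skb_get_hash(skb) % num_queues;

	return tc * num_queues + flow;
}
```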
-
By Ioana Radulescu
DPNI objects can have multiple traffic classes, as reflected by the num_tc attribute. Until now we ignored its value and only used traffic class 0. This patch adds support for multiple Tx traffic classes; we now have num_queues x num_tcs hardware queues available for each interface.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Ioana Radulescu
Move the code configuring XPS on the netdev Tx queues to a separate function. A subsequent patch will need to call this in another context as well.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
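Such a helper would typically loop over the Tx queues and bind each to the CPU serving it; a sketch assuming a simple 1:1 queue-to-CPU mapping, which may differ from the driver's actual channel assignment:

```c
#include <linux/cpumask.h>
#include <linux/netdevice.h>

static int set_xps_sketch(struct net_device *dev, int num_queues)
{
	struct cpumask mask;
	int i, err;

	for (i = 0; i < num_queues; i++) {
		cpumask_clear(&mask);
		cpumask_set_cpu(i % num_online_cpus(), &mask);
		err = netif_set_xps_queue(dev, &mask, i);
		if (err)
			return err;
	}
	return 0;
}
```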
-
- 10 June 2019, 2 commits
-
-
The driver is using netdev_alloc_frag() for allocation in the ->ndo_start_xmit() path. That one is always invoked in a BH disabled region so we could also use napi_alloc_frag(). Use napi_alloc_frag() for skb allocation.
Cc: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Acked-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
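The change itself is essentially a one-line substitution in the Tx buffer allocation; the variable names below are illustrative:

```c
/* In ->ndo_start_xmit(), which runs with BH already disabled: */
buffer_start = napi_alloc_frag(aligned_size);	/* was: netdev_alloc_frag(aligned_size) */
```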
-
According to the comment, the preempt_disable() statement is required due to synchronisation in napi_alloc_frag(). The awful truth is that local_bh_disable() is required because otherwise the NAPI poll callback can be invoked while the open function sets up buffers. This isn't unlikely since dpaa2 provides multiple devices. The usage of napi_alloc_frag() was removed in commit 27c87486 ("dpaa2-eth: Use a single page per Rx buffer"), which means that the comment is no longer accurate and the preempt_disable() statement is not required. Remove the outdated comment and the no longer required preempt_disable().
Cc: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Acked-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 27 May 2019, 1 commit
-
-
By Ioana Radulescu
Use PTR_ERR_OR_ZERO instead of PTR_ERR in cases where zero is a valid input. Reported by smatch.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 24 May 2019, 1 commit
-
-
By Ioana Radulescu
This reverts commit f8b99585. The reverted change instructed the QMan hardware block to fetch the RX frame annotation and the beginning of the frame data into cache before the core would read them. It turns out that in rare cases, a QMan stashing transaction may be delayed long enough that, by the time it gets executed, the frame in question has already been dequeued by the core and software processing has begun on it. If the core manages to unmap the frame buffer _before_ the stashing transaction is executed, an SMMU exception is raised. Unfortunately there is no easy way to work around this while keeping the performance advantages brought by QMan stashing, so disable it altogether.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 17 April 2019, 4 commits
-
-
By Ioana Ciocoi Radulescu
On platforms that lack a TCAM (like LS1088A), masking of flow steering keys is not supported. Until now we didn't offer flow steering capabilities at all on these platforms, since our driver implementation configured a "comprehensive" FS key (containing all supported header fields), with masks used to ignore the fields not present in the rules provided by the user. We now allow ethtool rules that share a common key (i.e. have the same header fields). The FS key is now kept in the driver private data and initialized when the first rule is added to an empty table, rather than at probe time. If a rule with a new key composition is wanted, the user must first manually delete all previous rules. When building an FS table entry to pass to firmware, we still use the old building algorithm, which assumes an all-supported-fields key, and later collapse the fields which aren't actually needed. Masked rules are not supported; if provided, the mask value is ignored. For firmware versions older than MC10.7.0 (which only offer the legacy ABIs for configuring distribution keys), flow steering without masking support remains unavailable.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Ioana Ciocoi Radulescu
Introduce an internal id bitfield to uniquely identify header fields supported by the Rx distribution keys. For the hash key, add a conversion from the RXH_* bitmask provided by ethtool to the internal ids.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
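A conversion along those lines might map each RXH_* bit to a driver-internal field id, e.g. as below; the internal id names are illustrative, only the RXH_* constants come from the ethtool UAPI:

```c
#include <linux/bits.h>
#include <linux/ethtool.h>

/* Illustrative internal field ids; the driver defines its own set. */
enum sketch_fld {
	FLD_IP_SRC = BIT(0),
	FLD_IP_DST = BIT(1),
	FLD_L4_SRC = BIT(2),
	FLD_L4_DST = BIT(3),
};

static u32 rxh_to_fields_sketch(u64 rxh)
{
	u32 fields = 0;

	if (rxh & RXH_IP_SRC)
		fields |= FLD_IP_SRC;
	if (rxh & RXH_IP_DST)
		fields |= FLD_IP_DST;
	if (rxh & RXH_L4_B_0_1)
		fields |= FLD_L4_SRC;	/* L4 source port */
	if (rxh & RXH_L4_B_2_3)
		fields |= FLD_L4_DST;	/* L4 destination port */

	return fields;
}
```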
-
By Ioana Ciocoi Radulescu
Add two macros to simplify reading DPNI options.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Ioana Ciocoi Radulescu
Set the Rx flow classification enable flag only if the key config operation is successful.
Fixes: 3f9b5c9 ("dpaa2-eth: Configure Rx flow classification key")
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 27 March 2019, 2 commits
-
-
By Ioana Ciornei
Take advantage of the software Rx batching by using netif_receive_skb_list instead of napi_gro_receive.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
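The batching pattern in the Rx poll loop looks roughly like this, reduced to just the list handling:

```c
#include <linux/list.h>
#include <linux/netdevice.h>

/* Sketch: collect the skbs built during one NAPI poll and hand them to
 * the stack in a single batch instead of one napi_gro_receive() per skb. */
static void rx_poll_sketch(struct sk_buff **skbs, int count)
{
	LIST_HEAD(rx_list);
	int i;

	for (i = 0; i < count; i++)
		list_add_tail(&skbs[i]->list, &rx_list);

	netif_receive_skb_list(&rx_list);
}
```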
-
By Ioana Ciornei
It might happen that Tx conf acknowledges a frame before it was accounted in BQL, since the accounting was previously done after the enqueue operation. This patch moves the netdev_tx_sent_queue call before the actual frame enqueue, so that this can never happen.
Fixes: 569dac6a ("dpaa2-eth: bql support")
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
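The ordering fix amounts to notifying BQL before handing the frame to hardware and rolling the accounting back if the enqueue ultimately fails; sketched below with a placeholder enqueue call:

```c
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static int hw_enqueue(struct sk_buff *skb);	/* placeholder for the FD enqueue */

/* Sketch of the corrected ordering: account in BQL *before* the frame can
 * possibly be completed, and undo the accounting on enqueue failure. */
static int tx_sketch(struct netdev_queue *nq, struct sk_buff *skb)
{
	int err;

	netdev_tx_sent_queue(nq, skb->len);

	err = hw_enqueue(skb);
	if (err) {
		netdev_tx_completed_queue(nq, 1, skb->len);
		dev_kfree_skb(skb);
	}

	return err;
}
```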
-
- 21 March 2019, 1 commit
-
-
By Ioana Ciocoi Radulescu
Make sure we don't try to enqueue XDP_REDIRECT frames to a nonexistent FQ. While we are guaranteed not to have more than one queue per core, having fewer queues than CPUs on an interface is a valid configuration.
Fixes: d678be1d ("dpaa2-eth: add XDP_REDIRECT support")
Reported-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
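Conceptually the fix bounds the per-CPU index by the number of queues actually configured, so CPUs beyond that count reuse existing queues; a minimal sketch:

```c
#include <linux/smp.h>

/* Sketch: called from NAPI context, where smp_processor_id() is safe. */
static unsigned int redirect_queue_sketch(unsigned int num_queues)
{
	return smp_processor_id() % num_queues;
}
```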
-
- 04 March 2019, 2 commits
-
-
By Ioana Radulescu
Implement support for the XDP_REDIRECT action. The redirected frame is transmitted and confirmed on the regular Tx/Tx conf queues. The frame is marked with the "XDP" type in the software annotation, since it requires special treatment. We don't have good hardware support for Tx batching, so the XDP_XMIT_FLUSH flag doesn't make a difference for now; ndo_xdp_xmit performs the actual Tx operation on the spot.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Ioana Radulescu
We write different metadata information in the software annotation area of Tx frames, depending on the frame type. Make this more explicit by introducing a type field and separate structures for single-buffer and scatter-gather frames.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 27 February 2019, 1 commit
-
-
By Ioana Ciornei
Configure how many 64-byte chunks of frame, annotation and context data are cache-stashed for a specific frame queue. Since the frame context is not used, configure stashing of only 64 bytes of frame data and 64 bytes of annotation.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Li Yang <leoyang.li@nxp.com>
-
- 07 February 2019, 3 commits
-
-
By Ioana Ciocoi Radulescu
Starting with MC10.14.0, the dpaa2_io_service_enqueue_fq() API is functional. Since there are a number of cases where it offers better performance compared to the currently used enqueue function, switch to it for firmware versions that support it.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Ioana Ciocoi Radulescu
While in NAPI context, free skbs by calling napi_consume_skb() instead of dev_kfree_skb(), to take advantage of the bulk freeing mechanism.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
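The substitution itself is small; `skb` and `budget` below come from the surrounding NAPI poll routine, and a zero budget makes the call fall back to the regular free path:

```c
/* In the Tx-confirmation path, which runs inside the NAPI poll: */
napi_consume_skb(skb, budget);	/* a non-zero budget enables bulk freeing */
```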
-
By Ioana Ciocoi Radulescu
Instead of allocating page fragments via the network stack, use the page allocator directly. For now, we consume one page for each Rx buffer. With the new memory model we are free to consider adding more XDP support. Performance decreases slightly in some IP forwarding cases. There is no visible effect on termination traffic. The driver memory footprint increases as a result of this change, but it is still small enough to not really matter. Another side effect is that the Rx buffer alignment requirements are now naturally satisfied without any additional actions. Remove the alignment-related code, except in the buffer layout information conveyed to MC, as hardware still needs to know the alignment value we guarantee.
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
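At its simplest, seeding a buffer switches from page fragments to whole pages; a sketch with error handling reduced to the essentials (the zero-address error convention is just for the example):

```c
#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/skbuff.h>

/* Sketch: one full page per Rx buffer; dev is the DMA-capable device. */
static dma_addr_t seed_rx_buffer_sketch(struct device *dev)
{
	struct page *page = dev_alloc_pages(0);
	dma_addr_t addr;

	if (!page)
		return 0;

	addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, addr)) {
		__free_pages(page, 0);
		return 0;
	}

	return addr;	/* later released to the hardware buffer pool */
}
```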
-
- 20 January 2019, 1 commit
-
-
By Ioana Radulescu
Export detailed driver counters through debugfs. Statistics already available in ethtool are presented in a structured manner. This includes per-core, per-FQ and per-channel statistics. Also transition from module_fsl_mc_driver to explicit module_init/exit in order to create the debugfs directory in addition to registering the driver.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
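The init/exit split follows the usual pattern of wrapping driver registration with extra setup; a hedged sketch in which the debugfs directory name and the dpaa2_eth_driver object are assumptions:

```c
#include <linux/debugfs.h>
#include <linux/fsl/mc.h>
#include <linux/module.h>

static struct dentry *dbg_root;
extern struct fsl_mc_driver dpaa2_eth_driver;	/* assumed driver object */

static int __init dpaa2_eth_init_sketch(void)
{
	int err;

	dbg_root = debugfs_create_dir("dpaa2-eth", NULL);

	err = fsl_mc_driver_register(&dpaa2_eth_driver);
	if (err)
		debugfs_remove_recursive(dbg_root);

	return err;
}

static void __exit dpaa2_eth_exit_sketch(void)
{
	fsl_mc_driver_unregister(&dpaa2_eth_driver);
	debugfs_remove_recursive(dbg_root);
}

module_init(dpaa2_eth_init_sketch);
module_exit(dpaa2_eth_exit_sketch);
```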
-
- 18 January 2019, 1 commit
-
-
By Ioana Ciocoi Radulescu
In the current implementation, on interface down we disable NAPI and then manually drain any remaining ingress frames. This could lead to a situation where, under heavy traffic, the data availability notification for some of the channels would not get rearmed correctly. Change the implementation such that we let all remaining ingress frames be processed as usual and only disable NAPI once the hardware queues are empty. We also add a wait on the Tx side, to allow hardware time to process all in-flight Tx frames before issuing the disable command.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 January 2019, 1 commit
-
-
By Ioana Ciornei
Automatically add a device link between the actual device requesting dpaa2_io_service_register and the underlying dpaa2_io used. This link ensures that when a DPIO device, which is indirectly used by other devices, is unbound, any consumer devices are also unbound from their drivers. For example, any DPNI bound to the dpaa2-eth driver which is using DPIO devices will be unbound before its supplier device. Also, add a new parameter to the dpaa2_io_service_[de]register functions to specify the requesting device (i.e. the consumer).
Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Reviewed-by: Horia Geanta <horia.geanta@nxp.com>
Reviewed-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: Li Yang <leoyang.li@nxp.com>
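The link itself is created with the driver core's device_link_add(); a sketch of how a consumer registration might wire it up, with the flag choice being illustrative rather than the commit's exact one:

```c
#include <linux/device.h>

/* Sketch: tie the consumer (e.g. a DPNI's device) to the DPIO device it
 * uses, so unbinding the DPIO also unbinds the consumer. */
static int link_to_dpio_sketch(struct device *consumer, struct device *dpio_dev)
{
	struct device_link *link;

	link = device_link_add(consumer, dpio_dev, DL_FLAG_AUTOREMOVE_CONSUMER);
	if (!link)
		return -EINVAL;

	return 0;
}
```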
-
- 30 November 2018, 1 commit
-
-
By Ioana Ciocoi Radulescu
Add comments in the switch statement for the XDP action to indicate that fallthrough is intended.
Signed-off-by: Ioana Radulescu <ruxandra.radulescu@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
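The annotation is just a comment placed where one case intentionally continues into the next (newer kernels use the fallthrough pseudo-keyword instead); a sketch in which the drop helper is a placeholder for the driver's buffer-recycling path:

```c
switch (xdp_act) {
case XDP_ABORTED:
	trace_xdp_exception(net_dev, xdp_prog, xdp_act);	/* needs <trace/events/xdp.h> */
	/* fall through */
case XDP_DROP:
	recycle_rx_buffer();	/* placeholder */
	break;
default:
	break;
}
```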
-