- 24 February 2018 (9 commits)
-
By Mark Bloch
Currently in switchdev mode we allow only raw packet QPs. Expose the right capabilities and set the gid table length to 0; also make sure we don't try to enable RoCE. Split the RoCE enablement function so that representors can enable only the notifier needed for net device events.
Signed-off-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
By Mark Bloch
Currently we listen to netdev register/unregister events based on the PCI device. In switchdev mode the PF and representors share the same PCI device, so in order to pair an IB device and a netdev in switchdev mode, compare the netdev that triggered the event to that of the representor. Expose a function that lets you retrieve the netdev associated with a given representor.
Signed-off-by: Mark Bloch <markb@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
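A hedged sketch of the pairing check this enables; mlx5_ib_get_rep_netdev() and the struct layout are illustrative assumptions, not lifted from the patch:

    /* Kernel-style sketch: only react to events for our own representor's
     * netdev. Names here are illustrative. */
    static int mlx5_ib_netdev_event(struct notifier_block *nb,
                                    unsigned long event, void *ptr)
    {
            struct net_device *ndev = netdev_notifier_info_to_dev(ptr);
            struct mlx5_ib_dev *ibdev =
                    container_of(nb, struct mlx5_ib_dev, nb);

            /* PF and representors share one PCI device in switchdev mode,
             * so comparing PCI parents is not enough: match the exact
             * netdev exposed for this representor. */
            if (ndev != mlx5_ib_get_rep_netdev(ibdev))
                    return NOTIFY_DONE;

            /* handle NETDEV_REGISTER / NETDEV_UNREGISTER for this port */
            return NOTIFY_OK;
    }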
-
By Mark Bloch
When we point to a representor, it means we are in switchdev mode. The flow DB is shared between the PF and virtual function representors, so each rule created needs to match on its specific source port.
Signed-off-by: Mark Bloch <markb@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
By Mark Bloch
A flow DB is a shared resource between the PF and representors, so allocate it only when creating the PF IB device. Once we add IB representors, they will use the flow DB created by the PF.
Signed-off-by: Mark Bloch <markb@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
By Mark Bloch
Create the basic infrastructure for registering and unregistering IB representors. The load/unload callbacks are left empty; proper implementations will be introduced in the following patches.
Signed-off-by: Mark Bloch <markb@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
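A hedged sketch of the registration shape described here; the names follow the mlx5 eswitch style of this series but are illustrative, and mlx5_core_max_vfs() in particular is a stand-in for however the VF count is obtained:

    /* stub callbacks -- filled in by later patches in the series */
    static int mlx5_ib_vport_rep_load(struct mlx5_core_dev *dev,
                                      struct mlx5_eswitch_rep *rep)
    {
            return 0;
    }

    static void mlx5_ib_vport_rep_unload(struct mlx5_eswitch_rep *rep)
    {
    }

    static void mlx5_ib_register_vport_reps(struct mlx5_core_dev *mdev)
    {
            struct mlx5_eswitch *esw = mdev->priv.eswitch;
            struct mlx5_eswitch_rep_if rep_if = {
                    .load   = mlx5_ib_vport_rep_load,
                    .unload = mlx5_ib_vport_rep_unload,
            };
            int vport;

            for (vport = 0; vport < mlx5_core_max_vfs(mdev); vport++)
                    mlx5_eswitch_register_vport_rep(esw, vport, &rep_if,
                                                    REP_IB);
    }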
-
By Mark Bloch
Under switchdev mode we insert an eswitch miss rule that causes any unmatched traffic to be sent towards the PF vport. This miss rule can be optimized by breaking it into two: one for multicast traffic and one for unicast. Splitting the miss rule into unicast and multicast allows the firmware to program the hardware more efficiently.

Using ConnectX-5 Ex with IXIA and testpmd (which use IB representors):

IXIA -> NIC -> PF -> IB representor -> NIC -> VF:
- Without this optimization: 9.2 MPPS
- With this optimization: 18 MPPS

VF -> NIC -> IB representor -> PF -> NIC -> IXIA:
- Without this optimization: 17 MPPS
- With this optimization: 23.4 MPPS

Signed-off-by: Mark Bloch <markb@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
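Since the split keys off nothing more than the group bit of the destination MAC, here is a small standalone C illustration (not the eswitch rule-programming code itself) of how traffic divides between the two miss rules:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* bit 0 of the first dmac octet is the IEEE group (multicast) bit */
    static bool dmac_is_multicast(const uint8_t dmac[6])
    {
            return dmac[0] & 0x01;
    }

    int main(void)
    {
            const uint8_t ucast[6] = { 0x24, 0x8a, 0x07, 0x11, 0x22, 0x33 };
            const uint8_t mcast[6] = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 };

            printf("ucast hits the %s miss rule\n",
                   dmac_is_multicast(ucast) ? "multicast" : "unicast");
            printf("mcast hits the %s miss rule\n",
                   dmac_is_multicast(mcast) ? "multicast" : "unicast");
            return 0;
    }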
-
By Mark Bloch
The max FTE number should be the max number of SQs that can be opened. Ethernet representors open one SQ each; once we add IB representors this will increase (depending on the user). For now let's start with 31 per IB representor and increase in the future if needed. This increase only affects the number of FTEs in the slow-path FDB; offloaded rules (done via TC on the fast-path portion of the FDB) aren't affected.
Signed-off-by: Mark Bloch <markb@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
By Mark Bloch
In preparation for IB representors, move the representor structs to global scope, and expose the functions needed for registration, unregistration, eswitch mode, and creating a flow rule to direct traffic from SQs to the right VF.
Signed-off-by: Mark Bloch <markb@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
By Mark Bloch
Add a callback interface to get a protocol device (per representor type). The Ethernet representors will expose their netdev via this interface. This functionality can later be used by the IB representor in order to find the corresponding net device representor.
Signed-off-by: Mark Bloch <markb@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
-
- 15 February 2018 (7 commits)
-
By Yonatan Cohen
The current implementation of CQ creation requires contiguous memory. Such a requirement is problematic once memory becomes fragmented or the system is low on memory; it causes failures in dma_zalloc_coherent(). This patch implements a new scheme of fragmented CQs to overcome this issue, introducing a new type, 'struct mlx5_frag_buf_ctrl', to allocate fragmented buffers rather than contiguous ones, and basing the Completion Queues (CQs) on this new fragmented buffer.

It fixes the following crashes:

kworker/29:0: page allocation failure: order:6, mode:0x80d0
CPU: 29 PID: 8374 Comm: kworker/29:0 Tainted: G OE 3.10.0
Workqueue: ib_cm cm_work_handler [ib_cm]
Call Trace:
[<>] dump_stack+0x19/0x1b
[<>] warn_alloc_failed+0x110/0x180
[<>] __alloc_pages_slowpath+0x6b7/0x725
[<>] __alloc_pages_nodemask+0x405/0x420
[<>] dma_generic_alloc_coherent+0x8f/0x140
[<>] x86_swiotlb_alloc_coherent+0x21/0x50
[<>] mlx5_dma_zalloc_coherent_node+0xad/0x110 [mlx5_core]
[<>] ? mlx5_db_alloc_node+0x69/0x1b0 [mlx5_core]
[<>] mlx5_buf_alloc_node+0x3e/0xa0 [mlx5_core]
[<>] mlx5_buf_alloc+0x14/0x20 [mlx5_core]
[<>] create_cq_kernel+0x90/0x1f0 [mlx5_ib]
[<>] mlx5_ib_create_cq+0x3b0/0x4e0 [mlx5_ib]

Signed-off-by: Yonatan Cohen <yonatanc@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
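A standalone userspace model of the fragmented-buffer idea (names and layout are illustrative, not the driver's mlx5_frag_buf_ctrl): one logical buffer is backed by an array of page-sized chunks, so no high-order allocation is ever requested.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define FRAG_SZ 4096    /* stand-in for PAGE_SIZE */

    struct frag_buf {
            void   **frags;
            size_t   nfrags;
    };

    static int frag_buf_alloc(struct frag_buf *buf, size_t size)
    {
            size_t i;

            buf->nfrags = (size + FRAG_SZ - 1) / FRAG_SZ;
            buf->frags = calloc(buf->nfrags, sizeof(*buf->frags));
            if (!buf->frags)
                    return -1;
            for (i = 0; i < buf->nfrags; i++) {
                    /* order-0 allocations only: this is what lets the
                     * real driver survive memory fragmentation */
                    buf->frags[i] = calloc(1, FRAG_SZ);
                    if (!buf->frags[i])
                            return -1;      /* caller frees on error */
            }
            return 0;
    }

    /* map a logical offset to (fragment, offset), as CQE addressing must */
    static void *frag_buf_at(struct frag_buf *buf, size_t off)
    {
            return (char *)buf->frags[off / FRAG_SZ] + off % FRAG_SZ;
    }

    int main(void)
    {
            struct frag_buf buf;

            if (frag_buf_alloc(&buf, 64 * 1024))
                    return 1;
            memset(frag_buf_at(&buf, 8192), 0xab, 64);  /* touch one CQE */
            printf("allocated %zu fragments\n", buf.nfrags);
            return 0;
    }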
-
By Saeed Mahameed
The EQ structure and API are private to the mlx5_core driver; external drivers should not have access to, or the means to manipulate, EQ objects. Remove redundant exports and move API functions out of the linux/mlx5 include directory into the driver's private mlx5_core.h include file.
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Gal Pressman <galp@mellanox.com>
-
By Saeed Mahameed
Since the CQ tree is now per-EQ, CQ completion and event forwarding became an EQ-specific implementation detail. This patch moves that logic to eq.c and makes those functions static.
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Gal Pressman <galp@mellanox.com>
-
By Saeed Mahameed
Now that the CQ table is per-EQ, add an API to hold/put a CQ, to be used from eq.c in a downstream patch.
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Gal Pressman <galp@mellanox.com>
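A standalone userspace model of the hold/put pattern (illustrative; the driver has its own refcounting and completion handling, not this code). The EQ handler takes a reference before using a CQ so a concurrent destroy cannot free it underneath:

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct cq {
            atomic_int refcount;
            int        cqn;
    };

    static void cq_hold(struct cq *cq)
    {
            atomic_fetch_add(&cq->refcount, 1);
    }

    static void cq_put(struct cq *cq)
    {
            /* fetch_sub returns the previous value: 1 means last ref */
            if (atomic_fetch_sub(&cq->refcount, 1) == 1) {
                    printf("cq %d freed\n", cq->cqn);
                    free(cq);
            }
    }

    int main(void)
    {
            struct cq *cq = malloc(sizeof(*cq));

            atomic_init(&cq->refcount, 1);  /* creation reference */
            cq->cqn = 7;

            cq_hold(cq);    /* EQ handler starts using the CQ */
            cq_put(cq);     /* owner drops creation ref (destroy path) */
            cq_put(cq);     /* EQ handler done: last ref frees */
            return 0;
    }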
-
By Saeed Mahameed
Add an API to add/del a CQ to/from the EQ's CQ table, to be used in cq.c upon CQ creation/destruction, as the CQ table is now private to eq.c.
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Gal Pressman <galp@mellanox.com>
-
By Saeed Mahameed
If a hardware event targets a CQ, that CQ should exist. Add unlikely() annotations to the error-handling flows.
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Gal Pressman <galp@mellanox.com>
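A minimal runnable sketch of the shape of the change, using a userspace stand-in for the kernel's unlikely() macro:

    #include <stdio.h>

    /* userspace stand-in for the kernel's unlikely() hint */
    #define unlikely(x) __builtin_expect(!!(x), 0)

    struct cq;
    static struct cq *cq_table_get(int cqn) { return NULL; }  /* stub */

    static void eq_cq_event(int cqn)
    {
            struct cq *cq = cq_table_get(cqn);

            /* error path: a HW event for a CQ that should always exist */
            if (unlikely(!cq)) {
                    fprintf(stderr, "event for nonexistent CQ 0x%x\n", cqn);
                    return;
            }
            /* ... complete the CQ ... */
    }

    int main(void) { eq_cq_event(0x12); return 0; }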
-
By Saeed Mahameed
Before this patch the driver had one CQ database protected by one spinlock. This spinlock is meant to synchronize between CQ add/remove and CQ IRQ interrupt handling. On a system with a large number of CPUs and a workload that generates lots of interrupts, this global spinlock becomes a very nasty hotspot and introduces contention between the active cores, which significantly hurts performance and becomes a bottleneck that prevents seamless CPU scaling. To solve this, we simply move the CQ database and its spinlock to be per-EQ (IRQ), and thus per core.

Tested with:
system: 2 sockets, 14 cores per socket, hyperthreading, 2x14x2=56 cores
netperf command: ./super_netperf 200 -P 0 -t TCP_RR -H <server> -l 30 -- -r 300,300 -o -s 1M,1M -S 1M,1M

WITHOUT THIS PATCH:
Average: CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
Average: all 4.32 0.00 36.15 0.09 0.00 34.02 0.00 0.00 0.00 25.41

Samples: 2M of event 'cycles:pp', Event count (approx.): 1554616897271
Overhead Command Shared Object Symbol
+ 14.28% swapper [kernel.vmlinux] [k] intel_idle
+ 12.25% swapper [kernel.vmlinux] [k] queued_spin_lock_slowpath
+ 10.29% netserver [kernel.vmlinux] [k] queued_spin_lock_slowpath
+ 1.32% netserver [kernel.vmlinux] [k] mlx5e_xmit

WITH THIS PATCH:
Average: CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
Average: all 4.27 0.00 34.31 0.01 0.00 18.71 0.00 0.00 0.00 42.69

Samples: 2M of event 'cycles:pp', Event count (approx.): 1498132937483
Overhead Command Shared Object Symbol
+ 23.33% swapper [kernel.vmlinux] [k] intel_idle
+ 1.69% netserver [kernel.vmlinux] [k] mlx5e_xmit

Tested-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Gal Pressman <galp@mellanox.com>
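An illustrative before/after of the data layout (kernel-style sketch, not the literal driver code); the whole point of the patch is where the lock lives:

    #include <linux/radix-tree.h>
    #include <linux/spinlock.h>
    #include <linux/types.h>

    struct mlx5_cq_table {
            spinlock_t             lock;    /* protects tree */
            struct radix_tree_root tree;    /* cqn -> struct mlx5_core_cq */
    };

    /* Before: one mlx5_cq_table lived in struct mlx5_core_dev, so every
     * EQ interrupt handler on every core contended on the same lock.
     *
     * After: the table is embedded in each EQ, i.e. per IRQ vector and in
     * practice per core, so the completion fast path only ever takes its
     * own EQ's lock. */
    struct mlx5_eq {
            struct mlx5_core_dev *dev;
            struct mlx5_cq_table  cq_table;  /* moved from mlx5_core_dev */
            u32                   eqn;
            /* ... */
    };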
-
- 12 February 2018 (1 commit)
-
By Linus Torvalds
This is the mindless scripted replacement of kernel use of POLL* variables as described by Al, done by this script:

    for V in IN OUT PRI ERR RDNORM RDBAND WRNORM WRBAND HUP RDHUP NVAL MSG; do
        L=`git grep -l -w POLL$V | grep -v '^t' | grep -v /um/ | grep -v '^sa' | grep -v '/poll.h$'|grep -v '^D'`
        for f in $L; do sed -i "-es/^\([^\"]*\)\(\<POLL$V\>\)/\\1E\\2/" $f; done
    done

with de-mangling cleanups yet to come.

NOTE! On almost all architectures, the EPOLL* constants have the same values as the POLL* constants do. But the keyword here is "almost". For various bad reasons they aren't the same, and epoll() doesn't actually work quite correctly in some cases due to this on Sparc et al. The next patch from Al will sort out the final differences, and we should be all done.

Scripted-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
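For one flag, this is the kind of rewrite the script performs in a driver's ->poll() method (an illustrative fragment, not taken from any particular driver):

    /* before */
    mask |= POLLIN | POLLRDNORM;

    /* after -- the values are identical on most, but not all, arches */
    mask |= EPOLLIN | EPOLLRDNORM;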
-
- 10 February 2018 (10 commits)
-
By Vadim Pasternak
Add support for new Mellanox system types of the basic classes qmb7, sn34, and sn37, containing the systems QMB700 (40x200GbE InfiniBand switch), SN3700 (32x200GbE and 16x400GbE Ethernet switch), and SN3410 (6x400GbE plus 48x50GbE Ethernet switch). These are Top of the Rack systems, equipped with a Mellanox COM-Express carrier board and a switch board carrying either a Mellanox Quantum device, which supports InfiniBand switching with 40x200G ports at line rates of up to HDR speed, or a Mellanox Spectrum-2 device, which supports Ethernet switching with 32x200G ports at line rates of up to HDR speed.
Signed-off-by: Vadim Pasternak <vadimp@mellanox.com>
Signed-off-by: Darren Hart (VMware) <dvhart@infradead.org>
-
By Vadim Pasternak
Add support for new Mellanox system types of the basic half-unit-size class msn201x, containing the system MSN2010 (18x10GbE plus 4x4x25GbE, half unit size) and its derivatives. This is a Top of the Rack system, equipped with a Mellanox Small Form Factor carrier board and a switch board with a Mellanox Spectrum device, which supports Ethernet switching with 32x100G ports at line rates of up to EDR speed.
Signed-off-by: Vadim Pasternak <vadimp@mellanox.com>
Signed-off-by: Darren Hart (VMware) <dvhart@infradead.org>
-
By Vadim Pasternak
Add support for new Mellanox system types of the basic class msn274x, containing the system MSN2740 (a cost-reduced 32x100GbE Ethernet switch) and its derivatives. These are Top of the Rack systems, equipped with a Mellanox Small Form Factor carrier board and a switch board with a Mellanox Spectrum device, which supports Ethernet switching with 32x100G ports at line rates of up to EDR speed.
Signed-off-by: Vadim Pasternak <vadimp@mellanox.com>
Signed-off-by: Darren Hart (VMware) <dvhart@infradead.org>
-
By John Allen
Having these checks in ibmvnic_xmit causes problems with VLAN tagging and the balance-alb/tlb bonding modes. The restriction they imposed can be removed.
Signed-off-by: John Allen <jallen@linux.vnet.ibm.com>
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Julian Wiedmann
send_control_data() applies some special handling to SETIP v4 IPA commands. But the current code parses *all* command types for the SETIP command code. Limit the command-code check to IPA commands.
Fixes: 5b54e16f ("qeth: do not spin for SETIP ip assist command")
Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Ursula Braun
For a memory range/skb where the last byte falls onto a page boundary (i.e. 'end' is of the form xxx...xxx001), the PFN_UP() part of the calculation currently doesn't round up to the next PFN due to an off-by-one error. Thus qeth believes that the skb occupies one page less than it actually does, and may select an IO buffer that doesn't have enough spare buffer elements to fit all of the skb's data. HW detects this as a malformed buffer descriptor and raises an exception, which then triggers device recovery.
Fixes: 2863c613 ("qeth: refactor calculation of SBALE count")
Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
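A standalone demo of the off-by-one, assuming the PFN_UP(end - 1) versus PFN_UP(end) reading of the fix (the macros mirror the kernel's pfn.h definitions; 'end' is exclusive, one past the last byte):

    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)
    #define PFN_UP(x)   (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)
    #define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

    int main(void)
    {
            unsigned long start = 0;
            /* last byte sits exactly on a page boundary, so the exclusive
             * end is of the form xxx...001 */
            unsigned long end = PAGE_SIZE + 1;

            /* buggy: end - 1 is page aligned, so PFN_UP never rounds up */
            unsigned long buggy = PFN_UP(end - 1) - PFN_DOWN(start);
            /* fixed */
            unsigned long fixed = PFN_UP(end) - PFN_DOWN(start);

            printf("buggy=%lu fixed=%lu (range really spans 2 pages)\n",
                   buggy, fixed);
            return 0;
    }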
-
By Niklas Cassel
For dwmac4, GMAC_INT_DEFAULT_ENABLE already includes GMAC_INT_PMT_EN, so it is redundant to check whether hw->pmt is set and, if so, set the bit again. For dwmac1000, GMAC_INT_DEFAULT_MASK does not include GMAC_INT_DISABLE_PMT, so it is redundant to check whether hw->pmt is set and, if so, clear an already cleared bit. Improve code readability by removing this redundant code.
Signed-off-by: Niklas Cassel <niklas.cassel@axis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Niklas Cassel
GMAC_INT_DEFAULT_MASK is written to the interrupt enable register. In previous versions of the IP (e.g. dwmac1000), this register was instead an interrupt mask register. To improve clarity and reflect reality, rename GMAC_INT_DEFAULT_MASK to GMAC_INT_DEFAULT_ENABLE.
Signed-off-by: Niklas Cassel <niklas.cassel@axis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Niklas Cassel
The interrupt status register in both dwmac1000 and dwmac4 ignores interrupt enable (for dwmac4) / interrupt mask (for dwmac1000). Therefore, if we want to check only the bits that can actually trigger an IRQ, we have to filter the interrupt status register manually. Commit 0a764db1 ("stmmac: Discard masked flags in interrupt status register") fixed this for dwmac1000; fix the same issue for dwmac4. Just like that commit, this makes sure that we do not get spurious link up/link down prints.
Signed-off-by: Niklas Cassel <niklas.cassel@axis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
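A hedged sketch of the fix's shape; the register names follow dwmac4 conventions but are not copied verbatim from the driver:

    /* the status register reports even disabled sources, so keep only
     * the bits that can actually raise this interrupt */
    u32 intr_status = readl(ioaddr + GMAC_INT_STATUS);
    u32 intr_en     = readl(ioaddr + GMAC_INT_EN);

    intr_status &= intr_en;
    /* ... act on the filtered intr_status only ... */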
-
By Thomas Falcon
When allocating RX or TX buffer pools, the driver needs to provide a unique mapping ID to firmware for each pool. This value is assigned using a counter which is incremented after a new pool is created. The ID can be an integer ranging from 1 to 255. When migrating to a device that requests a different number of queues, this value was not being reset properly. As a result, after enough migrations, the counter exceeded the upper bound and pool creation failed. This is fixed by resetting the counter to one in this case.
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 February 2018 (10 commits)
-
By Tomi Valkeinen
The omapfb driver fails to build after commit 23c35f48 ("pinctrl: remove include file from <linux/device.h>") because it relies on the <linux/pinctrl/consumer.h> and <linux/seq_file.h> headers being pulled in by the <linux/device.h> header implicitly. Include these headers explicitly to avoid the build failures.
Fixes: 23c35f48 ("pinctrl: remove include file from <linux/device.h>")
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
Tested-by: Tony Lindgren <tony@atomide.com>
[b.zolnierkie: fix include order and patch description]
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
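The fix amounts to spelling out the two dependencies that <linux/device.h> no longer pulls in:

    #include <linux/pinctrl/consumer.h>
    #include <linux/seq_file.h>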
-
By Vadim Pasternak
Add a dedicated structure with the power cable setting for the Mellanox msn21xx family. These systems do not have a physical device for the power unit controller: when the power cable is inserted or removed, the relevant interrupt signal is handled and the status is updated, but no device is associated with the signal. Also add a definition for the interrupt low aggregation signal. On systems from the msn21xx family, the low aggregation mask should be removed in order to allow the signal to hit the CPU.
Fixes: 6613d18e ("platform/x86: mlx-platform: Move module from arch/x86")
Signed-off-by: Vadim Pasternak <vadimp@mellanox.com>
Signed-off-by: Darren Hart (VMware) <dvhart@infradead.org>
-
By Vadim Pasternak
Add a define for the negative bus ID, to be used when no hotplug device is associated with the hotplug interrupt signal. In this case, the signal is handled by the mlxreg-hotplug driver, but no device is associated with it.
Signed-off-by: Vadim Pasternak <vadimp@mellanox.com>
Signed-off-by: Darren Hart (VMware) <dvhart@infradead.org>
-
By Vadim Pasternak
Add defines for the bus IDs used in the hotplug device topology, to improve code readability. Defines are added for the FAN and power units.
Signed-off-by: Vadim Pasternak <vadimp@mellanox.com>
Signed-off-by: Darren Hart (VMware) <dvhart@infradead.org>
-
By Geert Uytterhoeven
With gcc-4.1.2:

drivers/platform/mellanox/mlxreg-hotplug.c: In function ‘mlxreg_hotplug_health_work_helper’:
drivers/platform/mellanox/mlxreg-hotplug.c:347: warning: ‘ret’ is used uninitialized in this function

Indeed, if mlxreg_core_item.count is zero, ret is used uninitialized. While this is unlikely to happen (it is set to ARRAY_SIZE(...) in x86 board files), that assignment is done in another source file, so fix this by preinitializing ret to zero.
Fixes: c6acad68 ("platform/mellanox: mlxreg-hotplug: Modify to use a regmap interface")
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Vadim Pasternak <vadimp@mellanox.com>
Signed-off-by: Darren Hart (VMware) <dvhart@infradead.org>
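A standalone model of the warning and the fix:

    #include <stdio.h>

    /* with count == 0 the loop body never assigns ret, so without the
     * preinitialization the function returns an indeterminate value */
    static int health_work_helper(int count)
    {
            int ret = 0;    /* the fix: preinitialize to zero */
            int i;

            for (i = 0; i < count; i++)
                    ret = i;        /* stands in for a regmap read */

            return ret;
    }

    int main(void)
    {
            printf("ret for count=0: %d\n", health_work_helper(0));
            return 0;
    }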
-
By Heiner Kallweit
This condition wasn't adjusted when PHY_IGNORE_INTERRUPT (-2) was added long ago. In the case of PHY_IGNORE_INTERRUPT, the MAC interrupt also indicates PHY state changes, and we should do what the symbol says.
Fixes: 84a527a4 ("net: phylib: fix interrupts re-enablement in phy_start")
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Dean Nelson
The Cavium thunder nicvf driver supports rx/tx rings of up to 65536 entries each. The number of entries is stored in the q_len member of struct q_desc_mem. The problem is that with q_len being a u16, 65536 becomes 0. In getting pointers to descriptors in the rings, the driver uses q_len minus 1 as a mask after incrementing the pointer, in order to wrap back to the beginning rather than run past the end of the ring. With q_len set to 0 the mask is no longer correct, and the driver does go beyond the end of the ring, causing various ills. Usually the first thing that shows up is a "NETDEV WATCHDOG: enP2p1s0f1 (nicvf): transmit queue 7 timed out" warning. This patch remedies the problem by changing q_len to a u32.
Signed-off-by: Dean Nelson <dnelson@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
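A standalone model of the truncation and the resulting mask breakage:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint16_t q_len16 = (uint16_t)65536;  /* truncates to 0 */
            uint32_t q_len32 = 65536;            /* the fix: q_len as u32 */
            uint32_t idx     = 65536;            /* one past the last slot */

            /* q_len16 promotes to int, 0 - 1 == -1, i.e. an all-ones
             * mask: indexes never wrap and the ring is overrun */
            printf("u16 mask: idx -> %u (expected 0)\n",
                   idx & (uint32_t)(q_len16 - 1));
            printf("u32 mask: idx -> %u\n", idx & (q_len32 - 1));
            return 0;
    }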
-
By Nathan Fontenot
While handling a driver reset, we get an H_CLOSED return when trying to send a CRQ event. When this occurs we need to queue up another reset attempt; without doing so, we see instances where the driver is left in a closed state because the reset failed and there are no further attempts to reset the driver.
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Gustavo A. R. Silva
Add the suffix ULL to the constants 272, 204, 136 and 68 in order to give the compiler complete information about the proper arithmetic to use. Notice that these constants are used in contexts that expect expressions of type unsigned long long (64 bits, unsigned). The following expressions are currently being evaluated using 32-bit arithmetic:

272 * mult
204 * mult
136 * mult
68 * mult

Addresses-Coverity-ID: 201058
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
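A standalone illustration of why the suffix matters (the mult value here is chosen only to force the 32-bit overflow):

    #include <stdio.h>

    int main(void)
    {
            unsigned int mult = 20000000;  /* 272 * mult > 2^32 */

            /* the multiply happens in 32 bits and wraps before the
             * widening assignment */
            unsigned long long wrong = 272 * mult;
            /* the fix: force 64-bit arithmetic from the start */
            unsigned long long right = 272ULL * mult;

            printf("wrong=%llu right=%llu\n", wrong, right);
            return 0;
    }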
-
By Jason Wang
When using a devmap to redirect packets between interfaces, xdp_do_flush() is usually a must to flush any batched packets. Unfortunately this is missed in the current tuntap implementation: unlike most hardware drivers, which run XDP inside the NAPI loop and call xdp_do_flush() at the end of each round of polling, TAP runs it in process context, e.g. tun_get_user(). Fix this by counting the pending redirected packets and flushing when the count exceeds NAPI_POLL_WEIGHT or MSG_MORE was cleared by the sendmsg() caller. With this fix, xdp_redirect_map works again between two TAPs.
Fixes: 761876c8 ("tap: XDP support")
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
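A hedged sketch of the batching logic described above; the field name xdp_pending_pkts and the helper shape are illustrative, not the literal patch:

    /* flush once a NAPI-sized batch accumulates, or as soon as the
     * sendmsg() caller stops indicating MSG_MORE */
    static void tun_xdp_maybe_flush(struct tun_struct *tun, bool more)
    {
            if (++tun->xdp_pending_pkts >= NAPI_POLL_WEIGHT || !more) {
                    tun->xdp_pending_pkts = 0;
                    xdp_do_flush();
            }
    }

    /* call site, e.g. at the end of tun_get_user():
     *     tun_xdp_maybe_flush(tun, msg && (msg->msg_flags & MSG_MORE));
     */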
-
- 08 February 2018 (3 commits)
-
By Jakub Kicinski
DKMS and similar out-of-tree module replacement services use the module version to make sure the out-of-tree software is not older than the module shipped with the kernel. We use the kernel version in the ethtool -i output, so put it into MODULE_VERSION as well.
Reported-by: Jan Gutter <jan.gutter@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
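One plausible shape of the change (hedged; the exact file and include layout in the nfp driver may differ): reuse the kernel release string as the module version.

    #include <generated/utsrelease.h>

    /* UTS_RELEASE is the running kernel's version string, so DKMS-style
     * version comparisons track the kernel the module shipped with */
    MODULE_VERSION(UTS_RELEASE);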
-
By Jakub Kicinski
Most FWs limit the number of TSO segments a frame can produce to 64. This is for fairness and efficiency (of the FW datapath) reasons. If a frame with a larger number of segments is submitted, the FW will drop it.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
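A hedged sketch of how such a cap is typically applied; the define name follows nfp conventions but is an assumption, not necessarily the driver's:

    #define NFP_NET_LSO_MAX_SEGS    64      /* FW drops frames above this */

    /* the stack honors gso_max_segs when building TSO frames, so frames
     * that the FW would drop are never handed to it */
    netdev->gso_max_segs = NFP_NET_LSO_MAX_SEGS;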
-
By Jakub Kicinski
All netdevs which can accept TC offloads must implement .ndo_set_features(). nfp_reprs currently do not do that, which means hw-tc-offload can be turned on and off even when offloads are active. Whether the offloads are active is really a question for nfp_ports, so remove the per-app tc_busy callback indirection and simply count the number of offloaded items in the nfp_port structure.
Fixes: 8a276873 ("nfp: provide infrastructure for offloading flower based TC filters")
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Tested-by: Pieter Jansen van Vuuren <pieter.jansenvanvuuren@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-