- 21 Mar 2022, 5 commits
Committed by Jakub Kicinski

Newer versions of the PCIe microcode support writing the position of the TX pointer back into host memory. This speeds up TX completions, because we avoid a read from device memory (replacing a PCIe read with a DMA-coherent read).

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Fei Qin <fei.qin@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
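A minimal sketch of the idea, with invented structure and field names (not the driver's real descriptor or ABI layout): the completed TX index is read from a DMA-coherent area the device writes to, rather than from a device register across PCIe.

```c
#include <linux/io.h>
#include <linux/types.h>

/* Hypothetical write-back area, allocated with dma_alloc_coherent() and
 * programmed into the device; the microcode updates wb_rd_ptr as it
 * completes TX descriptors.
 */
struct example_tx_wb_area {
	__le32 wb_rd_ptr;
};

struct example_tx_ring {
	struct example_tx_wb_area *wb;	/* host memory, DMA coherent */
	void __iomem *qcp_rd_ptr;	/* device register (old path) */
};

static u32 example_tx_completed(struct example_tx_ring *ring)
{
	/* new path: cheap coherent read from host memory */
	if (ring->wb)
		return le32_to_cpu(READ_ONCE(ring->wb->wb_rd_ptr));
	/* old path: expensive read across PCIe */
	return readl(ring->qcp_rd_ptr);
}
```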
-
Committed by Jakub Kicinski

QCidx is not used on the fast path, so move it to the lower cacheline.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Fei Qin <fei.qin@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
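As an illustration of the layout concern (field names are invented, not the driver's actual ring structure): fast-path members stay in the first cache line, while configuration-only members such as the queue controller index are pushed below.

```c
#include <linux/cache.h>
#include <linux/types.h>

struct example_tx_ring {
	/* hot: read/written on every xmit and completion */
	u32 wr_p;			/* producer pointer */
	u32 rd_p;			/* consumer pointer */
	u32 cnt;			/* ring size */
	void __iomem *qcp_q;		/* queue doorbell */

	/* cold: only touched on configuration paths */
	u32 qcidx ____cacheline_aligned;
	int idx;
	dma_addr_t dma;
};
```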
-
Committed by Jakub Kicinski

New datapaths may use multiple descriptor units to describe a single packet. Prepare for that by adding a descriptors-per-simple-frame constant to the ring size calculations.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Fei Qin <fei.qin@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
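A sketch of what such an accounting constant looks like in ring sizing; the constant name and value are placeholders, not the driver's actual definitions.

```c
/* Placeholder: a hypothetical datapath needing two descriptor units per
 * simple (non-fragmented) frame; the existing NFD3 format would use 1.
 */
#define EXAMPLE_DESC_PER_SIMPLE_PKT	2

static unsigned int example_ring_desc_cnt(unsigned int wanted_pkts)
{
	/* size the ring in descriptor units, not in packets */
	return wanted_pkts * EXAMPLE_DESC_PER_SIMPLE_PKT;
}
```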
-
Committed by Jakub Kicinski

To reduce the coupling between slow path ring implementations and their callers, use callbacks instead.

Changes to Jakub's work:
* Also use callbacks for xmit functions

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
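The general pattern, sketched with invented names: per-datapath operations are collected in an ops table and callers dispatch through function pointers instead of calling NFD3 routines directly.

```c
#include <linux/netdevice.h>
#include <linux/skbuff.h>

struct example_dp;			/* per-datapath state, details omitted */

/* One ops table per descriptor format (e.g. one for NFD3, one for a
 * future format); member names are illustrative.
 */
struct example_dp_ops {
	netdev_tx_t (*xmit)(struct sk_buff *skb, struct net_device *netdev);
	int (*poll)(struct napi_struct *napi, int budget);
	int (*tx_rings_prepare)(struct example_dp *dp);
	void (*tx_rings_free)(struct example_dp *dp);
};

struct example_dp {
	const struct example_dp_ops *ops;
};

static netdev_tx_t example_start_xmit(struct sk_buff *skb,
				      struct net_device *netdev)
{
	struct example_dp *dp = netdev_priv(netdev);

	return dp->ops->xmit(skb, netdev);	/* dispatch via callback */
}
```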
-
Committed by Jakub Kicinski

In preparation for supporting a new datapath format, move all ring and fast path logic into separate files. It is basically a verbatim move with some wrapping functions; no new structures or functions are added. The current data path is named NFD3, after the initial version of the driver ABI it used. The non-fast-path but ring-related functions are moved to the nfp_net_dp.c file.

Changes to Jakub's work:
* Rebase on xsk related code.
* Split the patch, move the callback changes to next commit.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Fei Qin <fei.qin@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 Mar 2022, 5 commits
Committed by Jakub Kicinski

The NFP3800 has slightly different queue controller range bounds. Use the static chip data instead of defines. This commit still assumes an unchanged descriptor format; later datapath changes will allow adjusting for descriptor accounting.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Fei Qin <fei.qin@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Committed by Jakub Kicinski

The queue controller (QCP) is accessed at a device-specific offset. The NFP3800 device also supports more queues. Furthermore, the NFP3800 VFs access the QCP differently from how the NFP6000 VFs access it, though still indirectly. Fortunately, we can remove the offset altogether for both VF types. This is safe for NFP6000 VFs since the offset was effectively a wrap-around and was only used for convenience, to have it set the same as on the NFP6000 PF. Use nfp_dev_info to store the queue controller parameters.

Signed-off-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Fei Qin <fei.qin@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Committed by Jakub Kicinski

In preparation for new chips, use dev_info constants instead of defines to store the DMA mask length.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Fei Qin <fei.qin@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
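A hedged sketch of what this usually looks like: the address width comes from per-chip info selected at probe time (the field name here is an assumption) and is applied with the standard DMA API.

```c
#include <linux/dma-mapping.h>
#include <linux/pci.h>

struct example_dev_info {
	unsigned int dma_mask_bits;	/* e.g. 40 for one chip, 48 for another */
};

static int example_set_dma_mask(struct pci_dev *pdev,
				const struct example_dev_info *info)
{
	/* per-chip constant instead of a driver-wide #define */
	return dma_set_mask_and_coherent(&pdev->dev,
					 DMA_BIT_MASK(info->dma_mask_bits));
}
```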
-
Committed by Jakub Kicinski

In preparation for supporting a new chip, add a driver data structure which will hold per-chip-version information such as register offsets. Plumb it through to the relevant functions (nfpcore and nfp_net). For now only a very simple member holding chip names is added; following commits will add more.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Fei Qin <fei.qin@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
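A sketch of the common per-chip-constants pattern, under the assumption that the structure is selected through the PCI ID table's driver_data; the structure members and values here are illustrative only.

```c
#include <linux/pci.h>

struct example_dev_info {
	const char *chip_names;		/* for log messages */
	/* later commits would add register offsets, DMA mask, QC params... */
};

static const struct example_dev_info example_nfp6000_info = {
	.chip_names = "NFP4000/NFP5000/NFP6000",
};

/* The probe function recovers the per-chip constants via:
 *	info = (const struct example_dev_info *)pci_id->driver_data;
 */
static const struct pci_device_id example_pci_ids[] = {
	{ PCI_VDEVICE(NETRONOME, PCI_DEVICE_ID_NETRONOME_NFP6000),
	  .driver_data = (kernel_ulong_t)&example_nfp6000_info, },
	{ }
};
```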
-
Committed by Christo du Toit

Multiple writes cause intermediate pointer values that do not end on complete TX descriptors. The QCP peripheral on the NFP provides a number of access modes. In some access modes, the maximum amount to add must be restricted to a 6-bit value. The particular access mode used by _nfp_qcp_ptr_add() has no such restriction, so the "< NFP_QCP_MAX_ADD" test is unnecessary. Note that trying to add more than the configured ring size in a single add will cause a QCP overflow, which is caught and handled by the QCP peripheral.

Signed-off-by: Christo du Toit <christo.du.toit@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Fei Qin <fei.qin@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 04 Mar 2022, 3 commits
Committed by Niklas Söderlund

This patch adds zero-copy Rx and Tx support for AF_XDP sockets. It does so by adding a separate NAPI poll function that is attached to each channel when the XSK socket is attached with XDP_SETUP_XSK_POOL, and restored when the XSK socket is terminated; this is done per channel. Support for XDP_TX is implemented and the XDP buffer can safely be moved from the Rx to the Tx queue and correctly freed and returned to the XSK pool once it is transmitted. Note that when AF_XDP zero-copy is enabled, the XDP action XDP_PASS will allocate a new buffer and copy the zero-copy frame prior to passing it to the kernel stack. This patch is based on previous work by Jakub Kicinski.

Signed-off-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
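A rough sketch of the attach side only, with invented helper and structure names and simplified error handling; the real driver additionally swaps the channel's NAPI poll function and reconfigures the rings.

```c
#include <linux/netdevice.h>
#include <net/xdp_sock_drv.h>

struct example_nn {
	struct xsk_buff_pool **xsk_pools;	/* one slot per channel */
};

static int example_xsk_setup_pool(struct net_device *netdev,
				  struct xsk_buff_pool *pool, u16 queue_id)
{
	struct example_nn *nn = netdev_priv(netdev);
	int err;

	if (pool) {
		err = xsk_pool_dma_map(pool, netdev->dev.parent, 0);
		if (err)
			return err;
	} else if (nn->xsk_pools[queue_id]) {
		xsk_pool_dma_unmap(nn->xsk_pools[queue_id], 0);
	}

	/* record the pool; a ring reconfig would pick up the new setting
	 * and install the XSK-aware NAPI poll function for this channel */
	nn->xsk_pools[queue_id] = pool;
	return 0;
}

static int example_xdp(struct net_device *netdev, struct netdev_bpf *xdp)
{
	switch (xdp->command) {
	case XDP_SETUP_XSK_POOL:
		return example_xsk_setup_pool(netdev, xdp->xsk.pool,
					      xdp->xsk.queue_id);
	default:
		return -EINVAL;
	}
}
```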
-
Committed by Niklas Söderlund

Each data path needs an array of xsk pools to track whether an xsk socket is in use. Add this array and make sure it is handled correctly when the data path is duplicated.

Signed-off-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Niklas Söderlund

There is some common functionality that can be reused in the upcoming AF_XDP support. Expose those functions in the header. While at it, mark some arguments of nfp_net_rx_csum() as const.

Signed-off-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 Nov 2021, 1 commit
Committed by Diana Wang

Use nn->tlv_caps.me_freq_mhz instead of nn->me_freq_mhz to check whether rx-usecs/tx-usecs is valid. This is because nn->tlv_caps.me_freq_mhz represents the clock_freq (MHz) of the flow processing cores (FPC) on the NIC, while nn->me_freq_mhz is never set.

Fixes: ce991ab6 ("nfp: read ME frequency from vNIC ctrl memory")
Signed-off-by: Diana Wang <na.wang@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Reviewed-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
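In outline the validation then looks something like the sketch below. The structure wrapper, the cycle limit and the exact conversion are assumptions used only to show why an unpopulated frequency breaks the check; only the tlv_caps.me_freq_mhz field name comes from the commit message.

```c
#include <linux/errno.h>
#include <linux/ethtool.h>

#define EXAMPLE_MAX_CYCLES	((1U << 16) - 1)	/* placeholder limit */

struct example_nn {
	struct {
		unsigned int me_freq_mhz;
	} tlv_caps;
};

static int example_set_coalesce(struct example_nn *nn,
				struct ethtool_coalesce *ec)
{
	unsigned int mhz = nn->tlv_caps.me_freq_mhz;

	if (!mhz)
		return -EINVAL;		/* firmware did not report a clock */

	/* cycles = usecs * MHz; must fit the firmware's config field */
	if ((u64)ec->rx_coalesce_usecs * mhz > EXAMPLE_MAX_CYCLES ||
	    (u64)ec->tx_coalesce_usecs * mhz > EXAMPLE_MAX_CYCLES)
		return -EINVAL;

	/* ...store and apply the new settings... */
	return 0;
}
```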
-
- 26 Jul 2021, 1 commit
Committed by Yinjun Zhang

Use the dynamic interrupt moderation library to implement the adaptive coalescing feature for the nfp driver.

Signed-off-by: Yinjun Zhang <yinjun.zhang@corigine.com>
Signed-off-by: Yu Xiao <yu.xiao@corigine.com>
Signed-off-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
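The usual shape of a DIM integration of that era, sketched with <linux/dim.h> primitives; everything outside that header (the ring structure, how the moderation values reach the hardware) is a placeholder.

```c
#include <linux/dim.h>
#include <linux/workqueue.h>

struct example_rx_ring {
	struct dim rx_dim;
	u16 irq_events;
	u64 rx_pkts;
	u64 rx_bytes;
};

/* DIM hands its verdict to a work item so the driver can sleep while
 * reprogramming the interrupt moderation registers. */
static void example_rx_dim_work(struct work_struct *work)
{
	struct dim *dim = container_of(work, struct dim, work);
	struct dim_cq_moder moder;

	moder = net_dim_get_rx_moderation(dim->mode, dim->profile_ix);
	/* ...write moder.usec / moder.pkts to the device... */
	dim->state = DIM_START_MEASURE;
}

/* Called at the end of NAPI poll with the per-ring counters. */
static void example_rx_dim_update(struct example_rx_ring *ring)
{
	struct dim_sample sample;

	dim_update_sample(ring->irq_events, ring->rx_pkts, ring->rx_bytes,
			  &sample);
	net_dim(&ring->rx_dim, sample);
}

/* Setup (e.g. at ring init):
 *	INIT_WORK(&ring->rx_dim.work, example_rx_dim_work);
 *	ring->rx_dim.mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE;
 */
```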
-
- 15 Jul 2020, 1 commit
Committed by Jakub Kicinski

The NFP conversion is pretty straightforward. We want to be able to sleep, and only get callbacks when the device is open. NFP did not ask for port replay when ports were removed; the new infrastructure now provides this feature for free.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 20 Dec 2019, 1 commit
Committed by Jakub Kicinski

The simple RX resync strategy controlled by the kernel does not guarantee results as good as when the device helps by detecting the potential record boundaries and keeping track of them. We have called this strategy "stream scan" in the tls-offload documentation. Implement this strategy for the NFP. The device sends a request for record boundary confirmation, which is recorded in per-TLS-socket state and responded to once the record is reached. Because the device keeps track of records passing after the request was sent, the response is not as latency sensitive as when the kernel just tries to tell the device the information about the next record.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 31 Aug 2019, 1 commit
Committed by Jakub Kicinski

If the control channel MTU is too low to support map operations, a warning will be printed. This is not enough; we want to make sure probe fails in such a scenario, as this would clearly be a faulty configuration.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
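The shape of such a check, as a sketch with placeholder names (the actual minimum depends on the BPF map control-message format, which is not shown here):

```c
#include <linux/device.h>
#include <linux/errno.h>

/* Placeholder for the largest BPF map request/reply the control channel
 * must be able to carry in one message. */
#define EXAMPLE_MIN_MAP_MSG_MTU	1024

static int example_bpf_check_mtu(struct device *dev, unsigned int ctrl_mtu)
{
	if (ctrl_mtu < EXAMPLE_MIN_MAP_MSG_MTU) {
		dev_err(dev, "control channel MTU %u too small for map operations (need at least %u)\n",
			ctrl_mtu, EXAMPLE_MIN_MAP_MSG_MTU);
		return -EINVAL;	/* fail probe instead of just warning */
	}
	return 0;
}
```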
-
- 09 Jul 2019, 1 commit
Committed by Jakub Kicinski

Connection 4-tuple reuse is slightly problematic: the TLS socket and context do not get destroyed until all the associated skbs have left the system and all references are released. This leads to a stale connection entry in the device, preventing addition of a new one if the 4-tuple is reused quickly enough. Instead of using the real 4-tuple as the key, use a unique ID. Set the protocol to TCP and the port to 0 to ensure no collisions with real connections.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 06 Jul 2019, 1 commit
For spinlocks the type spinlock_t should be used instead of "struct spinlock". Use spinlock_t for the spinlock's definition.

Cc: Jakub Kicinski <jakub.kicinski@netronome.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: oss-drivers@netronome.com
Cc: netdev@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
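For clarity, the change boils down to the following (the field name is illustrative):

```c
#include <linux/spinlock.h>

/* before: compiles, but bypasses the intended typedef */
struct example_before {
	struct spinlock reconfig_lock;
};

/* after: spinlock_t is the type the spinlock API is defined around */
struct example_after {
	spinlock_t reconfig_lock;
};
```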
-
- 12 Jun 2019, 3 commits
Committed by Jakub Kicinski

Set the ethtool TLS RX feature based on NIC capabilities, and enable TLS RX when connections are added for decryption.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
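A minimal sketch of gating the netdev feature on a firmware capability; the capability test is a stand-in, only NETIF_F_HW_TLS_RX is a real kernel flag here.

```c
#include <linux/netdevice.h>

static void example_init_tls_features(struct net_device *netdev,
				      bool fw_supports_tls_rx_decrypt)
{
	if (!fw_supports_tls_rx_decrypt)
		return;

	/* advertise the feature; it gets used once the first RX
	 * connection is installed for decryption */
	netdev->hw_features |= NETIF_F_HW_TLS_RX;
}
```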
-
Committed by Jakub Kicinski

Some control messages must be sent from atomic context. The mailbox takes sleeping locks and uses a waitqueue, so add a "posted" version of the communication. Trylock the semaphore and, if that is successful, kick off the device communication. The device communication will be completed from a workqueue, which will also release the semaphore. If the lock is already taken, queue the message and return; a different workqueue is scheduled to take the semaphore and run the communication. Note that there are currently no atomic users which would actually need the return value, so all replies to posted messages are just freed.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
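A sketch of the "posted" path with invented names: callers in atomic context must never sleep on the mailbox semaphore, so they try-lock and defer to a workqueue either way.

```c
#include <linux/semaphore.h>
#include <linux/skbuff.h>
#include <linux/workqueue.h>

struct example_mbox {
	struct semaphore lock;		/* serialises mailbox access */
	struct sk_buff_head queue;	/* posted messages awaiting TX */
	struct workqueue_struct *wq;
	struct work_struct runq_work;	/* runs the exchange, then up()s */
	struct work_struct wait_work;	/* may sleep on down() first */
};

/* Callable from atomic context: never sleeps on the semaphore. */
static void example_mbox_post(struct example_mbox *mbox, struct sk_buff *skb)
{
	skb_queue_tail(&mbox->queue, skb);

	if (!down_trylock(&mbox->lock)) {
		/* got the lock; the work item drains the queue, talks to
		 * the device and releases the semaphore when done */
		queue_work(mbox->wq, &mbox->runq_work);
	} else {
		/* lock contended; this work item sleeps on the semaphore
		 * in process context before running the communication */
		queue_work(mbox->wq, &mbox->wait_work);
	}
}
```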
-
Committed by Dirk van der Merwe

Firmware indicates when a packet has been decrypted by reusing the currently unused BPF flag. Transfer this information into the skb and provide a statistic of all decrypted segments.

Signed-off-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 07 Jun 2019, 6 commits
Committed by Jakub Kicinski

Count TX TLS packets: successes, out of order, and dropped due to missing record info. Make sure the RX and TX completion statistics don't share cache lines with the TX ones as much as possible; with the TLS stats they are no longer reasonably aligned.

Signed-off-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dirk van der Merwe

This patch adds the functionality to add and delete TLS connections on the NFP, received from the kernel TLS callbacks. Make use of the common control message (CCM) infrastructure to propagate the kernel state to the firmware.

Signed-off-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Dirk van der Merwe

Prepend the connection handle to each transmitted TLS packet. For each connection, the driver tracks the next sequence number expected. If an out-of-order packet is observed, the driver calls into the TLS kernel code to re-encrypt that particular skb.

Signed-off-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
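Roughly, the per-skb TX decision looks like the sketch below. The connection-state structure is invented; tls_encrypt_skb() is assumed to be the TLS device-offload software-fallback helper of that kernel era, exposed via <net/tls.h>.

```c
#include <linux/tcp.h>
#include <net/tls.h>

struct example_tls_conn {
	u32 fw_handle;		/* handle the firmware knows this flow by */
	u32 next_seq;		/* next TCP sequence the device expects */
};

static struct sk_buff *example_tls_tx(struct example_tls_conn *conn,
				      struct sk_buff *skb)
{
	u32 seq = ntohl(tcp_hdr(skb)->seq);
	u32 payload_len = skb->len - skb_transport_offset(skb) -
			  tcp_hdrlen(skb);

	if (unlikely(seq != conn->next_seq)) {
		/* out of order: let the TLS core re-encrypt in software */
		return tls_encrypt_skb(skb);
	}

	conn->next_seq = seq + payload_len;
	/* ...prepend conn->fw_handle as packet metadata for the firmware... */
	return skb;
}
```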
-
Committed by Jakub Kicinski

Add FW ABI defines and code for basic init of TLS offload.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jakub Kicinski

FW may prefer to handle some communication via a mailbox, or the vNIC may simply not have a control queue (VFs). Add a way of exchanging ccm-compatible messages via a mailbox.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jakub Kicinski

We will need to release the bar lock from a workqueue, so move from a mutex to a semaphore. This lock should not be too hot. Unfortunately, semaphores don't have lockdep support.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
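The key property being relied on, shown as a sketch with invented names: a semaphore may be released by a context other than the one that took it, which the mutex API forbids.

```c
#include <linux/semaphore.h>
#include <linux/workqueue.h>

struct example_ccm {
	struct semaphore bar_lock;
	struct work_struct done_work;
};

static void example_done_work(struct work_struct *work)
{
	struct example_ccm *ccm = container_of(work, struct example_ccm,
					       done_work);

	/* ...finish the exchange..., then release from the workqueue.
	 * Legal for a semaphore; a mutex must be unlocked by its owner. */
	up(&ccm->bar_lock);
}

static void example_ccm_init(struct example_ccm *ccm)
{
	sema_init(&ccm->bar_lock, 1);	/* binary semaphore */
	INIT_WORK(&ccm->done_work, example_done_work);
}
```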
-
- 13 Apr 2019, 1 commit
Committed by Jakub Kicinski

Soon we will try to write to the vNIC mailbox without RTNL held. Add a new mutex to protect access to specific parts of the PCI control BAR. Move the mailbox size checking to the mailbox lock() helper, where it can be more effective (i.e. happen prior to a potential overwrite of other data).

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 01 Dec 2018, 2 commits
Committed by Jakub Kicinski

FW reconfiguration timeouts are a common indicator of FW trouble. To make debugging easier, print the requested update and the control word when reconfiguration fails.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
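Something along these lines; the register offset and helper name are placeholders, not the driver's actual layout.

```c
#include <linux/device.h>
#include <linux/io.h>

#define EXAMPLE_CFG_CTRL	0x0000	/* placeholder control BAR offset */

static void example_reconfig_timeout(struct device *dev,
				     void __iomem *ctrl_bar, u32 update)
{
	/* dump both the requested update mask and the live control word */
	dev_err(dev, "reconfig timeout, update=0x%08x ctrl=0x%08x\n",
		update, readl(ctrl_bar + EXAMPLE_CFG_CTRL));
}
```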
-
Committed by Jakub Kicinski

Chained descriptors for fragments need to duplicate all the descriptor fields of the skb head, so we copy the descriptor and then modify the relevant fields. This is wasteful, because the top half of the descriptor will get overwritten entirely while the bottom half is not modified at all. Copy only the bottom half. This saves us 0.3% of CPU in a GSO test.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 20 Nov 2018, 1 commit
Committed by Jakub Kicinski

Learn how to set the DSCP map. FW uses a packed array whose geometry depends on the number of supported priorities and virtual queues. Write code to assemble this map and to communicate the setting to the FW via the mailbox.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 Nov 2018, 1 commit
Committed by Jakub Kicinski

Move setting the ctrl_bar pointer to the nfp_net_alloc function, to make sure we can parse capabilities early in the following patch.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: John Hurley <john.hurley@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 Oct 2018, 1 commit
Committed by Jakub Kicinski

Replace the repeated license text with SPDX identifiers. While at it, bump the copyright dates for files we touched this year.

Signed-off-by: Edwin Peer <edwin.peer@netronome.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Nic Viljoen <nick.viljoen@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
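For reference, this kind of change collapses a multi-line license boilerplate into a single machine-readable tag at the top of each file, for example (the exact license expression and copyright line vary per file and are shown here only as an illustration):

```c
// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
/* Copyright (C) 2015-2018 Netronome Systems, Inc. */
```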
-
- 26 Jul 2018, 2 commits
Committed by Jakub Kicinski

Use array_size() and store the size as a full size_t to protect from theoretical size overflow when handling HW descriptor rings.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
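Sketch of the pattern (ring and descriptor types are placeholders): array_size() saturates to SIZE_MAX on overflow, so an overflowing ring size makes the allocation fail instead of silently allocating too little.

```c
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/overflow.h>

struct example_tx_desc {
	__le64 fields[2];
};

struct example_tx_ring {
	struct example_tx_desc *txds;
	dma_addr_t dma;
	size_t size;		/* full size_t, not u32 */
	u32 cnt;
};

static int example_tx_ring_alloc(struct device *dev,
				 struct example_tx_ring *ring)
{
	ring->size = array_size(ring->cnt, sizeof(*ring->txds));
	ring->txds = dma_alloc_coherent(dev, ring->size, &ring->dma,
					GFP_KERNEL);
	return ring->txds ? 0 : -ENOMEM;
}
```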
-
Committed by Jakub Kicinski

Commit 7f1c684a ("nfp: setup xdp_rxq_info") mixed the cache-cold and cache-hot data in the nfp_net_rx_ring structure (ignoring the feedback), to try to fit the structure into 2 cache lines after struct xdp_rxq_info was added. Now that we are about to add a new field, the structure will grow back to 3 cache lines, so order the members correctly.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 14 Jul 2018, 2 commits
Committed by Jakub Kicinski

Split the handling of offloaded and driver programs completely. Since offloaded programs always come with XDP_FLAGS_HW_MODE set, in reality there could be no sharing anyway; programs would only be installed either in the driver or in hardware. Splitting the handling allows us to install programs in HW and in the driver at the same time.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Committed by Jakub Kicinski

Basic operations drivers perform during xdp setup and query can be moved to helpers in the core. Encapsulate the program and flags into a structure and add helpers. Note that the structure is intended as the "main" program information source in the driver. Most drivers will additionally place the program pointer in their fast path or ring structures. The helpers don't have a huge impact now, but they will decrease the code duplication when programs can be installed in HW and driver at the same time. Encapsulating the basic operations in helpers will hopefully also reduce the number of changes to drivers which adopt them. The helpers could really be static inline, but they depend on the definition of struct netdev_bpf, which means they'd have to be placed in netdevice.h, an already 4500-line header.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
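A sketch of how a driver-side setup path can use the structure and helper described above; struct xdp_attachment_info and xdp_attachment_setup() are the core additions referred to here, while the surrounding driver code and names are invented.

```c
#include <linux/netdevice.h>
#include <net/xdp.h>

struct example_priv {
	struct xdp_attachment_info xdp;		/* driver-attached program */
	struct xdp_attachment_info xdp_hw;	/* HW-offloaded program */
};

static int example_xdp_setup_drv(struct example_priv *priv,
				 struct netdev_bpf *bpf)
{
	/* ...reconfigure rings/buffers for the new program if needed... */

	/* record program + flags; the helper drops the reference to the
	 * previously attached program, if any */
	xdp_attachment_setup(&priv->xdp, bpf);
	return 0;
}
```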
-
- 13 Jun 2018, 1 commit
Committed by Jakub Kicinski

.ndo_get_phys_port_name was recently extended to support multi-vNIC FWs. These are firmwares which can have more than one vNIC per PF without an associated port (e.g. the Adaptive Buffer Management FW), therefore we need a way of distinguishing the vNICs. Unfortunately, it's too late to make flower use the same naming. Flower users may depend on .ndo_get_phys_port_name returning -EOPNOTSUPP; for example, the name udev gave the PF vNIC was just the bare PCI device-based name before the change, and would have 'nn0' appended after. To ensure flower's vNIC doesn't have a phys_port_name attribute, add a flag to the vNIC struct and set it in the flower code. New projects will not set the flag and will adhere to the naming scheme from the start.

Fixes: 51c1df83 ("nfp: assign vNIC id as phys_port_name of vNICs which are not ports")
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-