- 22 February 2019, 1 commit
-
-
Submitted by Jeff Kirsher
Enabling L3/L4 filtering for transmit switched packets for all devices caused an unforeseen issue on older devices when trying to send UDP traffic in an ordered sequence. This bit was originally intended for X550 devices, which support this feature, so limit the scope of this bit to only X550 devices. Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
-
- 21 December 2018, 2 commits
-
-
Submitted by Steve Douthit
Use the mii_bus callbacks to address the entire clause 22/45 address space. This enables userspace to poke switch registers instead of a single PHY address. The ixgbe firmware may be polling PHYs in a way that is not protected by the mii_bus lock. This isn't new behavior, but as Andrew Lunn pointed out, there are more addresses available for conflicts. Signed-off-by: Stephen Douthit <stephend@silicom-usa.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
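As a rough illustration of the plumbing this adds (a sketch only — the low-level register accessors and init hook named example_* here are placeholders, not the functions the patch actually introduces):

    #include <linux/phy.h>
    #include <linux/pci.h>
    /* struct ixgbe_adapter comes from the driver's private ixgbe.h */

    static int example_mii_read(struct mii_bus *bus, int addr, int regnum)
    {
        struct ixgbe_adapter *adapter = bus->priv;

        /* regnum may carry a clause 45 device address encoding */
        return example_hw_mdio_read(adapter, addr, regnum); /* placeholder */
    }

    static int example_mii_write(struct mii_bus *bus, int addr, int regnum, u16 val)
    {
        struct ixgbe_adapter *adapter = bus->priv;

        return example_hw_mdio_write(adapter, addr, regnum, val); /* placeholder */
    }

    static int example_mii_bus_init(struct ixgbe_adapter *adapter)
    {
        struct mii_bus *bus = devm_mdiobus_alloc(&adapter->pdev->dev);

        if (!bus)
            return -ENOMEM;

        bus->name = "ixgbe-mdio";
        bus->read = example_mii_read;
        bus->write = example_mii_write;
        bus->priv = adapter;
        snprintf(bus->id, MII_BUS_ID_SIZE, "%s-mii", pci_name(adapter->pdev));

        return mdiobus_register(bus);
    }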
-
Submitted by Steve Douthit
Most dsa devices expect a 'struct mii_bus' pointer to talk to switches via the MII interface. While this works for dsa devices, it will not work safely with Linux PHYs in all configurations, since the firmware of the ixgbe device may be polling some PHY addresses in the background. Signed-off-by: Stephen Douthit <stephend@silicom-usa.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 20 December 2018, 1 commit
-
-
Submitted by Florian Westphal
Use the skb_sec_path and secpath_exists helpers where possible. This reduces noise in a follow-up patch that removes the skb->sp pointer. v2: no changes, preserve acks from v1. Acked-by: Shannon Nelson <shannon.lee.nelson@gmail.com> Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: David S. Miller <davem@davemloft.net>
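The conversion pattern, sketched for illustration (not one of the actual hunks):

    #include <net/xfrm.h>

    static bool example_skb_has_ipsec_state(struct sk_buff *skb)
    {
        struct sec_path *sp = skb_sec_path(skb);

        /* Before: if (skb->sp && skb->sp->len) ... skb->sp->xvec[0] ... */
        if (!secpath_exists(skb))
            return false;

        /* After: go through the helpers so skb->sp itself can be removed */
        return sp->len > 0 && sp->xvec[0] != NULL;
    }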
-
- 13 December 2018, 1 commit
-
-
Submitted by Petr Machata
Drivers may not be able to implement a VLAN addition or reconfiguration. In those cases it's desirable to explain to the user that it was rejected (and why). To that end, add an extack argument to ndo_bridge_setlink and adapt all users to that change. Following patches will use the new argument in the bridge driver. Signed-off-by: Petr Machata <petrm@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Reviewed-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
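For reference, a sketch of what an implementation of the updated hook can now do (the callback body is hypothetical):

    #include <linux/netdevice.h>
    #include <linux/netlink.h>

    static int example_ndo_bridge_setlink(struct net_device *dev,
                                          struct nlmsghdr *nlh, u16 flags,
                                          struct netlink_ext_ack *extack)
    {
        /* A driver that cannot honour the requested bridge configuration
         * can now tell the user why, instead of returning a bare errno. */
        NL_SET_ERR_MSG_MOD(extack, "bridge mode not supported in this configuration");
        return -EOPNOTSUPP;
    }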
-
- 22 November 2018, 1 commit
-
-
Submitted by Paul E. McKenney
Now that synchronize_rcu() waits for preempt-disable regions of code as well as RCU read-side critical sections, synchronize_sched() can be replaced by synchronize_rcu(). This commit therefore makes this change. Signed-off-by: "Paul E. McKenney" <paulmck@linux.ibm.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
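The substitution itself is mechanical; a minimal sketch:

    #include <linux/rcupdate.h>

    static void example_quiesce(void)
    {
        /* Before: synchronize_sched(); waited only for preempt-disable
         * regions.  After the RCU flavor consolidation, one call covers
         * both those regions and RCU read-side critical sections. */
        synchronize_rcu();
    }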
-
- 08 November 2018, 2 commits
-
-
Submitted by Jacob Keller
Many of the Intel Ethernet drivers call skb_tx_timestamp() earlier than necessary. Move the calls to this function to the latest point possible, just prior to notifying hardware of the new Tx packet when we bump the tail register. This affects i40e, iavf, igb, igc, and ixgbe. The e100, e1000, e1000e, fm10k, and ice drivers already call the skb_tx_timestamp() function just prior to indicating the Tx packet to hardware, so they do not need to be changed. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
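The resulting ordering at the end of a transmit routine looks roughly like this (the ring type below is a placeholder for the sketch):

    #include <linux/skbuff.h>
    #include <linux/io.h>

    struct example_tx_ring {            /* placeholder ring type */
        void __iomem *tail;
        u16 next_to_use;
    };

    static void example_xmit_notify_hw(struct example_tx_ring *ring,
                                       struct sk_buff *skb)
    {
        /* Descriptors for the skb have already been written here. */

        /* Take the software timestamp as late as possible, immediately
         * before the tail write that hands the packet to hardware. */
        skb_tx_timestamp(skb);

        /* Make descriptor writes visible before bumping the tail. */
        wmb();
        writel(ring->next_to_use, ring->tail);
    }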
-
Submitted by Colin Ian King
There is an earlier check to see if xdp_ring is null when configuring the Tx ring, so assuming that it can still be null, the clearing of xdp_ring->state could end up as a null pointer dereference. Fix this by only clearing the bit if xdp_ring is not null. Detected by CoverityScan, CID#1473795 ("Dereference after null check") Fixes: 024aa580 ("ixgbe: added Rx/Tx ring disable/enable functions") Signed-off-by: Colin Ian King <colin.king@canonical.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
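The shape of the fix, sketched (the ring-state bit name is assumed from the surrounding driver code, not quoted from the patch):

    #include <linux/bitops.h>

    /* xdp_ring may legitimately be NULL when the queue pair has no XDP
     * ring attached, so guard the clear_bit() on it. */
    static void example_clear_tx_disabled(struct ixgbe_ring *xdp_ring)
    {
        if (xdp_ring)
            clear_bit(__IXGBE_TX_DISABLED, &xdp_ring->state);
    }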
-
- 01 November 2018, 1 commit
-
-
Submitted by Jeff Kirsher
Based on the original work from Arnd Bergmann. When XFRM_ALGO is not enabled, the new ixgbe IPsec code produces a link error: drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.o: In function `ixgbe_ipsec_vf_add_sa': ixgbe_ipsec.c:(.text+0x1266): undefined reference to `xfrm_aead_get_byname' Simply selecting XFRM_ALGO from here causes circular dependencies, so to fix it, we probably want this slightly more complex solution that is similar to what other drivers with XFRM offload do: A separate Kconfig symbol now controls whether we include the IPsec offload code. To keep the old behavior, this is left as 'default y'. The dependency in XFRM_OFFLOAD still causes a circular dependency but is not actually needed because this symbol is not user visible, so removing that dependency on top makes it all work. CC: Arnd Bergmann <arnd@arndb.de> CC: Shannon Nelson <shannon.nelson@oracle.com> Fixes: eda0333a ("ixgbe: add VF IPsec management") Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
-
- 04 October 2018, 8 commits
-
-
Submitted by Song Liu
The NIC driver should only enable interrupts when napi_complete_done() returns true. This patch adds the check for ixgbe. Cc: stable@vger.kernel.org # 4.10+ Suggested-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Song Liu <songliubraving@fb.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
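The pattern being enforced at the end of a NAPI poll routine, sketched with placeholder helpers:

    #include <linux/netdevice.h>

    static int example_poll(struct napi_struct *napi, int budget)
    {
        int work_done = example_clean_rings(napi, budget);  /* placeholder */

        if (work_done >= budget)
            return budget;          /* stay in polling mode */

        /* Re-enable the queue interrupt only if NAPI really completed;
         * if napi_complete_done() returns false (e.g. busy polling still
         * owns this context), the interrupt must stay masked. */
        if (likely(napi_complete_done(napi, work_done)))
            example_enable_queue_irq(napi);             /* placeholder */

        return min(work_done, budget - 1);
    }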
-
Submitted by Björn Töpel
This patch adds zero-copy Tx support for AF_XDP sockets. It implements the ndo_xsk_async_xmit netdev ndo and performs all the Tx logic from a NAPI context. This means pulling egress packets from the Tx ring, placing the frames on the NIC HW descriptor ring and completing sent frames back to the application via the completion ring. The regular XDP Tx ring is used for AF_XDP as well. The rationale for this is as follows: XDP_REDIRECT guarantees mutual exclusion between different NAPI contexts based on CPU id. In other words, a netdev can XDP_REDIRECT to another netdev with a different NAPI context, since the operation is bound to a specific core and each core has its own hardware ring. As the AF_XDP Tx action is running in the same NAPI context and using the same ring, it will also be protected from XDP_REDIRECT actions with the exact same mechanism. As with AF_XDP Rx, all AF_XDP Tx specific functions are added to ixgbe_xsk.c. Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Tested-by: William Tu <u9012063@gmail.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Björn Töpel
This patch prepares for the upcoming zero-copy Tx functionality by moving common functions used both by the regular path and the zero-copy path. Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Tested-by: William Tu <u9012063@gmail.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Björn Töpel
This patch adds zero-copy Rx support for AF_XDP sockets. Instead of allocating buffers of type MEM_TYPE_PAGE_SHARED, the Rx frames are allocated as MEM_TYPE_ZERO_COPY when AF_XDP is enabled for a certain queue. All AF_XDP specific functions are added to a new file, ixgbe_xsk.c. Note that when AF_XDP zero-copy is enabled, the XDP action XDP_PASS will allocate a new buffer and copy the zero-copy frame prior to passing it to the kernel stack. Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Tested-by: William Tu <u9012063@gmail.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Björn Töpel
This patch prepares for the upcoming zero-copy Rx functionality by moving and changing the linkage of common functions used both by the regular path and the zero-copy path. Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Tested-by: William Tu <u9012063@gmail.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Björn Töpel
Add functions for Rx/Tx ring enable/disable. Instead of resetting the whole device, only the affected ring is disabled or enabled. This plumbing is used in later commits, when zero-copy AF_XDP support is introduced. Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Tested-by: William Tu <u9012063@gmail.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Radoslaw Tyl
This patch fixes a crash when restoring flow director filters after an adapter reset. In ixgbe_fdir_filter_restore(), filter->action can index outside of the rx_ring array, as it carries a VF identifier in its upper 32 bits. Signed-off-by: Radoslaw Tyl <radoslawx.tyl@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
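A hedged sketch of the idea: only the low bits of filter->action are a queue index; the upper 32 bits carry the destination VF, so they must be stripped before indexing the PF's rx_ring array (the decode below is illustrative, not the driver's exact expression):

    #include <linux/kernel.h>

    static struct ixgbe_ring *example_fdir_rx_ring(struct ixgbe_adapter *adapter,
                                                   u64 action)
    {
        u32 vf = upper_32_bits(action);     /* destination VF, if any */
        u32 queue = lower_32_bits(action);  /* ring index on that function */

        /* VF-directed filters and out-of-range indices must not be used
         * to index the PF's adapter->rx_ring[] array. */
        if (vf || queue >= adapter->num_rx_queues)
            return NULL;

        return adapter->rx_ring[queue];
    }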
-
Submitted by Radoslaw Tyl
A Tx hang occurs when the combined number of Tx and XDP queues is more than 64. With XDP, MTQC is always 0x0 (64 Tx queues), so we need more space for Tx queues. Signed-off-by: Radoslaw Tyl <radoslawx.tyl@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 03 October 2018, 1 commit
-
-
Submitted by Oza Pawandeep
After bfcb79fc ("PCI/ERR: Run error recovery callbacks for all affected devices"), AER errors are always cleared by the PCI core and drivers don't need to do it themselves. Remove calls to pci_cleanup_aer_uncorrect_error_status() from device driver error recovery functions. Signed-off-by: Oza Pawandeep <poza@codeaurora.org> [bhelgaas: changelog, remove PCI core changes, remove unused variables] Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
-
- 24 September 2018, 1 commit
-
-
Submitted by Eric Dumazet
As diagnosed by Song Liu, ndo_poll_controller() can be very dangerous on loaded hosts, since the cpu calling ndo_poll_controller() might steal all NAPI contexts (for all RX/TX queues of the NIC). This capture can last for an unlimited amount of time, since one cpu is generally not able to drain all the queues under load. ixgbe uses NAPI for TX completions, so we better let the core networking stack call napi->poll() to avoid the capture. Reported-by: Song Liu <songliubraving@fb.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Tested-by: Song Liu <songliubraving@fb.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 19 September 2018, 1 commit
-
-
Submitted by Jesse Brandeburg
We recently updated all our SPDX identifiers to correctly indicate that our net/ethernet/intel/* drivers were always released, and intended to be released, under GPL v2, but the MODULE_LICENSE declaration was never updated. Fix the MODULE_LICENSE to be GPL v2 for all our drivers. Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
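The change per driver is a single line; for example:

    #include <linux/module.h>

    /* "GPL" declares GPL v2 or later; "GPL v2" matches the GPL-2.0 SPDX
     * identifiers these drivers carry. */
    MODULE_LICENSE("GPL v2");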
-
- 29 August 2018, 1 commit
-
-
Submitted by Sebastian Basierski
Add a check for FW NVM recovery mode during driver initialization and in the service task. If the device is in recovery mode, log a message and unregister the device. Signed-off-by: Sebastian Basierski <sebastianx.basierski@intel.com> Tested-by: Don Buchholz <donald.buchholz@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 24 August 2018, 2 commits
-
-
Submitted by Tony Nguyen
These changes address comments by Jakub Kicinski on commit 38b7e7f8 ("ixgbe: Do not allow LRO or MTU change with XDP"). Change the MTU check with XDP to allow any supported value and only reject those outside of the range, as opposed to rejecting any change when XDP is active. In situations where the MTU size is not supported, return -EINVAL instead of -EPERM. Add checks when enabling SRIOV, DCB, or adding an L2FW offloaded device, as they are not supported with XDP. CC: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
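A sketch of what the revised MTU check amounts to (the frame-size limit shown is assumed to be the driver's build_skb buffer bound; treat the constant as an assumption):

    #include <linux/netdevice.h>
    #include <linux/if_vlan.h>

    static int example_change_mtu(struct net_device *netdev, int new_mtu)
    {
        struct ixgbe_adapter *adapter = netdev_priv(netdev);
        int max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;

        /* With XDP loaded, allow any MTU whose frame still fits one XDP
         * buffer; reject the rest as an invalid value, not a permission
         * problem. */
        if (adapter->xdp_prog && max_frame > IXGBE_MAX_FRAME_BUILD_SKB) {
            netdev_warn(netdev, "Requested MTU size is not supported with XDP\n");
            return -EINVAL;
        }

        netdev->mtu = new_mtu;
        return 0;
    }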
-
Submitted by Jia-Ju Bai
ixgbe_fcoe_ddp_setup(), ixgbe_setup_fcoe_ddp_resources() and ixgbe_sw_init() are never called in atomic context. They call kmalloc(), dma_pool_alloc() and kzalloc() with GFP_ATOMIC, which is not necessary. GFP_ATOMIC can be replaced with GFP_KERNEL. This was found by a static analysis tool named DCNS written by myself. Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com> Acked-by: Sebastian Basierski <sebastianx.basierski@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
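The conversion is a direct flag swap in code that runs in process context; an illustrative example:

    #include <linux/slab.h>

    static void *example_setup_alloc(size_t size)
    {
        /* was: kzalloc(size, GFP_ATOMIC) -- but this path can sleep,
         * so the stricter atomic allocation is unnecessary. */
        return kzalloc(size, GFP_KERNEL);
    }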
-
- 22 August 2018, 1 commit
-
-
Submitted by Cong Wang
After commit 90b73b77, the list_head is no longer needed. Now we just need to convert the list iteration to array iteration for drivers. Fixes: 90b73b77 ("net: sched: change action API to use array of pointers to actions") Cc: Jiri Pirko <jiri@mellanox.com> Cc: Vlad Buslov <vladbu@mellanox.com> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
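The driver-side conversion looks roughly like this (a sketch; the gact check is just one example of an action a classifier offload might inspect):

    #include <net/pkt_cls.h>
    #include <net/tc_act/tc_gact.h>

    static int example_parse_tc_actions(struct tcf_exts *exts)
    {
        int i;

        /* Before: list_for_each_entry(a, &exts->actions, list) { ... } */
        for (i = 0; i < exts->nr_actions; i++) {
            struct tc_action *a = exts->actions[i];

            if (is_tcf_gact_shot(a))    /* "drop" action */
                return 0;
        }
        return -EINVAL;
    }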
-
- 27 July 2018, 3 commits
-
-
Submitted by Alexander Duyck
This change is meant to allow us to take completion time into account when disabling queues. Previously we were just working with hard-coded values for how long we should wait. This worked fine for the standard case, where the completion timeout was operating in the 50us to 50ms range; however, on platforms that have higher completion timeout times this was resulting in Rx queue disable messages being displayed because we weren't waiting long enough for outstanding Rx DMA completions. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Don Buchholz <donald.buchholz@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck
This change is meant to help reduce the time needed to shut down the transmit and receive paths for the device. Specifically, what we now do after this patch is disable the transmit path first at the netdev level, and then work on disabling the Rx. This way, while we are waiting on the Rx queues to be disabled, the Tx queues have an opportunity to drain out. In addition, I have dropped the 10ms timeout that was left in the ixgbe_down function, which seems to have been carried through from back in e1000 as far as I can tell. We shouldn't need it since we don't actually disable the Tx until much later and we have additional logic in place for verifying the Tx queues have been disabled. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Don Buchholz <donald.buchholz@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Tony Nguyen
XDP does not support jumbo frames or LRO. These checks are being made outside the driver when an XDP program is loaded; however, there is nothing preventing these from changing after an XDP program is loaded. Add the checks so that while an XDP program is loaded, we do not allow the MTU to be changed or LRO to be enabled. Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 14 July 2018, 1 commit
-
-
Submitted by Jakub Kicinski
prog_attached of struct netdev_bpf should have been superseded by simply setting prog_id a long time ago, but we kept it around to allow offloading drivers to communicate attachment mode (drv vs hw). Subsequently drivers were also allowed to report back attachment flags (prog_flags), and since nowadays only programs attached with XDP_FLAGS_HW_MODE can get offloaded, we can tell the attachment mode from the flags the driver reports. Remove the prog_attached member. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
- 10 July 2018, 4 commits
-
-
Submitted by Alexander Duyck
For most of these calls we can just pass NULL through to the fallback function as the sb_dev. The only cases where we cannot are the cases where we might be dealing with either an upper device or a driver that would have configured things to support an sb_dev itself. The only driver that has any significant change in this patch set should be ixgbe, as we can drop the redundant functionality that existed in both the ndo_select_queue function and the fallback function that was passed through to us. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck
This patch makes it so that instead of passing a void pointer as the accel_priv, we instead pass a net_device pointer as sb_dev. Making this change allows us to pass the subordinate device through to the fallback function eventually, so that we can keep the actual code in the ndo_select_queue call as focused as possible on the exception cases. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck
This change makes it so that we can support the concept of subordinate device traffic classes in the core networking code. In doing this we can start pulling out the driver-specific bits needed to support selecting a queue based on an upper device. The solution as it currently stands is only partially implemented. I have the start of some XPS bits in here, but I would still need to allow for configuration of the XPS maps on the queues reserved for the subordinate devices. For now I am using the reference to the sb_dev XPS map as just a way to skip the lookup of the lower device XPS map, as that would result in the wrong queue being picked. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck
This patch makes it so that we use the tc_to_txq mapping in the macvlan device in order to select the Tx queue for outgoing packets. The idea here is to try and move away from using ixgbe_select_queue and to come up with a generic way to make this work for devices going forward. By encoding this information in the netdev, this can become something that can be used generically as a solution for similar setups going forward. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 28 June 2018, 1 commit
-
-
Submitted by Jesper Dangaard Brouer
The driver was combining the XDP_TX tail flush and the XDP_REDIRECT map flushing (xdp_do_flush_map). This is suboptimal; these two flush operations should be kept separate. Fixes: 11393cc9 ("xdp: Add batching support to redirect map") Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
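Conceptually, the end of the Rx poll loop keeps the two flushes independent, roughly as below (the flag names and tail-bump helper are restated here as placeholders, not quoted from the driver):

    #include <linux/bitops.h>
    #include <linux/filter.h>

    #define EXAMPLE_XDP_TX    BIT(1)    /* at least one XDP_TX this poll */
    #define EXAMPLE_XDP_REDIR BIT(2)    /* at least one XDP_REDIRECT */

    static void example_finish_xdp(struct ixgbe_ring *xdp_ring, u32 xdp_xmit)
    {
        /* XDP_TX frames sit on our own XDP Tx ring: flush them with a
         * single tail bump for the whole NAPI budget. */
        if (xdp_xmit & EXAMPLE_XDP_TX)
            example_xdp_ring_update_tail(xdp_ring);     /* placeholder */

        /* XDP_REDIRECT frames were handed to other devices and are
         * flushed separately through the redirect map API. */
        if (xdp_xmit & EXAMPLE_XDP_REDIR)
            xdp_do_flush_map();
    }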
-
- 26 June 2018, 1 commit
-
-
Submitted by John Hurley
Pass the extack struct from a tc qdisc add to the block bind function and, in turn, to the setup_tc ndo of the binding device via the tc_block_offload struct. Pass this back to any block callback registrations to allow netlink logging of failures in the bind process. Signed-off-by: John Hurley <john.hurley@netronome.com> Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 June 2018, 1 commit
-
-
Submitted by Kees Cook
The kzalloc() function has a 2-factor argument form, kcalloc(). This patch replaces cases of: kzalloc(a * b, gfp) with: kcalloc(a * b, gfp) as well as handling cases of: kzalloc(a * b * c, gfp) with: kzalloc(array3_size(a, b, c), gfp) as it's slightly less ugly than: kzalloc_array(array_size(a, b), c, gfp) This does, however, attempt to ignore constant size factors like: kzalloc(4 * 1024, gfp) though any constants defined via macros get caught up in the conversion. Any factors with a sizeof() of "unsigned char", "char", and "u8" were dropped, since they're redundant. The Coccinelle script used for this was: // Fix redundant parens around sizeof(). @@ type TYPE; expression THING, E; @@ ( kzalloc( - (sizeof(TYPE)) * E + sizeof(TYPE) * E , ...) | kzalloc( - (sizeof(THING)) * E + sizeof(THING) * E , ...) ) // Drop single-byte sizes and redundant parens. @@ expression COUNT; typedef u8; typedef __u8; @@ ( kzalloc( - sizeof(u8) * (COUNT) + COUNT , ...) | kzalloc( - sizeof(__u8) * (COUNT) + COUNT , ...) | kzalloc( - sizeof(char) * (COUNT) + COUNT , ...) | kzalloc( - sizeof(unsigned char) * (COUNT) + COUNT , ...) | kzalloc( - sizeof(u8) * COUNT + COUNT , ...) | kzalloc( - sizeof(__u8) * COUNT + COUNT , ...) | kzalloc( - sizeof(char) * COUNT + COUNT , ...) | kzalloc( - sizeof(unsigned char) * COUNT + COUNT , ...) ) // 2-factor product with sizeof(type/expression) and identifier or constant. @@ type TYPE; expression THING; identifier COUNT_ID; constant COUNT_CONST; @@ ( - kzalloc + kcalloc ( - sizeof(TYPE) * (COUNT_ID) + COUNT_ID, sizeof(TYPE) , ...) | - kzalloc + kcalloc ( - sizeof(TYPE) * COUNT_ID + COUNT_ID, sizeof(TYPE) , ...) | - kzalloc + kcalloc ( - sizeof(TYPE) * (COUNT_CONST) + COUNT_CONST, sizeof(TYPE) , ...) | - kzalloc + kcalloc ( - sizeof(TYPE) * COUNT_CONST + COUNT_CONST, sizeof(TYPE) , ...) | - kzalloc + kcalloc ( - sizeof(THING) * (COUNT_ID) + COUNT_ID, sizeof(THING) , ...) | - kzalloc + kcalloc ( - sizeof(THING) * COUNT_ID + COUNT_ID, sizeof(THING) , ...) | - kzalloc + kcalloc ( - sizeof(THING) * (COUNT_CONST) + COUNT_CONST, sizeof(THING) , ...) | - kzalloc + kcalloc ( - sizeof(THING) * COUNT_CONST + COUNT_CONST, sizeof(THING) , ...) ) // 2-factor product, only identifiers. @@ identifier SIZE, COUNT; @@ - kzalloc + kcalloc ( - SIZE * COUNT + COUNT, SIZE , ...) // 3-factor product with 1 sizeof(type) or sizeof(expression), with // redundant parens removed. @@ expression THING; identifier STRIDE, COUNT; type TYPE; @@ ( kzalloc( - sizeof(TYPE) * (COUNT) * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kzalloc( - sizeof(TYPE) * (COUNT) * STRIDE + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kzalloc( - sizeof(TYPE) * COUNT * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kzalloc( - sizeof(TYPE) * COUNT * STRIDE + array3_size(COUNT, STRIDE, sizeof(TYPE)) , ...) | kzalloc( - sizeof(THING) * (COUNT) * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | kzalloc( - sizeof(THING) * (COUNT) * STRIDE + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | kzalloc( - sizeof(THING) * COUNT * (STRIDE) + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) | kzalloc( - sizeof(THING) * COUNT * STRIDE + array3_size(COUNT, STRIDE, sizeof(THING)) , ...) ) // 3-factor product with 2 sizeof(variable), with redundant parens removed. @@ expression THING1, THING2; identifier COUNT; type TYPE1, TYPE2; @@ ( kzalloc( - sizeof(TYPE1) * sizeof(TYPE2) * COUNT + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2)) , ...) 
| kzalloc( - sizeof(TYPE1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2)) , ...) | kzalloc( - sizeof(THING1) * sizeof(THING2) * COUNT + array3_size(COUNT, sizeof(THING1), sizeof(THING2)) , ...) | kzalloc( - sizeof(THING1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(THING1), sizeof(THING2)) , ...) | kzalloc( - sizeof(TYPE1) * sizeof(THING2) * COUNT + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2)) , ...) | kzalloc( - sizeof(TYPE1) * sizeof(THING2) * (COUNT) + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2)) , ...) ) // 3-factor product, only identifiers, with redundant parens removed. @@ identifier STRIDE, SIZE, COUNT; @@ ( kzalloc( - (COUNT) * STRIDE * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - COUNT * (STRIDE) * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - COUNT * STRIDE * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - (COUNT) * (STRIDE) * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - COUNT * (STRIDE) * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - (COUNT) * STRIDE * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - (COUNT) * (STRIDE) * (SIZE) + array3_size(COUNT, STRIDE, SIZE) , ...) | kzalloc( - COUNT * STRIDE * SIZE + array3_size(COUNT, STRIDE, SIZE) , ...) ) // Any remaining multi-factor products, first at least 3-factor products, // when they're not all constants... @@ expression E1, E2, E3; constant C1, C2, C3; @@ ( kzalloc(C1 * C2 * C3, ...) | kzalloc( - (E1) * E2 * E3 + array3_size(E1, E2, E3) , ...) | kzalloc( - (E1) * (E2) * E3 + array3_size(E1, E2, E3) , ...) | kzalloc( - (E1) * (E2) * (E3) + array3_size(E1, E2, E3) , ...) | kzalloc( - E1 * E2 * E3 + array3_size(E1, E2, E3) , ...) ) // And then all remaining 2 factors products when they're not all constants, // keeping sizeof() as the second factor argument. @@ expression THING, E1, E2; type TYPE; constant C1, C2, C3; @@ ( kzalloc(sizeof(THING) * C2, ...) | kzalloc(sizeof(TYPE) * C2, ...) | kzalloc(C1 * C2 * C3, ...) | kzalloc(C1 * C2, ...) | - kzalloc + kcalloc ( - sizeof(TYPE) * (E2) + E2, sizeof(TYPE) , ...) | - kzalloc + kcalloc ( - sizeof(TYPE) * E2 + E2, sizeof(TYPE) , ...) | - kzalloc + kcalloc ( - sizeof(THING) * (E2) + E2, sizeof(THING) , ...) | - kzalloc + kcalloc ( - sizeof(THING) * E2 + E2, sizeof(THING) , ...) | - kzalloc + kcalloc ( - (E1) * E2 + E1, E2 , ...) | - kzalloc + kcalloc ( - (E1) * (E2) + E1, E2 , ...) | - kzalloc + kcalloc ( - E1 * E2 + E1, E2 , ...) ) Signed-off-by: NKees Cook <keescook@chromium.org>
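For a typical call site the net effect of the conversion is simply (sketch):

    #include <linux/slab.h>

    static u32 *example_alloc_table(size_t count)
    {
        /* was: kzalloc(count * sizeof(u32), GFP_KERNEL);
         * kcalloc() performs the same zeroed allocation but refuses
         * multiplications that would overflow. */
        return kcalloc(count, sizeof(u32), GFP_KERNEL);
    }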
-
- 11 June 2018, 3 commits
-
-
Submitted by Alexander Duyck
This patch moves the IPsec init function into ixgbe_sw_init. This way it is a bit more consistent with the placement of similar initialization functions, and it is placed before the reset_hw call, which should allow us to clean up any link issues that may be introduced by the fact that we force the link up if somehow the device had IPsec still enabled before the driver was loaded. In addition to the function move, it is necessary to change the assignment of netdev->features. The easiest way to do this is to just test for the existence of adapter->ipsec and, if it is present, set the feature bits. Fixes: 49a94d74 ("ixgbe: add ipsec engine start and stop routines") Reported-by: Andre Tomt <andre@tomt.net> Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Shannon Nelson <shannon.nelson@oracle.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
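A sketch of the second half of the change, keying the feature flags off whether the IPsec context was actually created earlier in ixgbe_sw_init (the exact feature grouping here is an assumption):

    #include <linux/netdevice.h>

    static void example_set_ipsec_features(struct ixgbe_adapter *adapter,
                                           struct net_device *netdev)
    {
        /* adapter->ipsec is only non-NULL if the IPsec init that now runs
         * from ixgbe_sw_init() succeeded, so it is authoritative here. */
        if (adapter->ipsec)
            netdev->features |= NETIF_F_HW_ESP | NETIF_F_HW_ESP_TX_CSUM |
                                NETIF_F_GSO_ESP;
    }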
-
Submitted by Alexander Duyck
There is no point in adding code under CONFIG_XFRM that we won't use unless CONFIG_XFRM_OFFLOAD is defined. So instead of leaving this code floating around, I am replacing the ifdef with what I believe is the correct one, so that we only include the code and variables if they will actually be used. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Acked-by: Shannon Nelson <shannon.nelson@oracle.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck
When we were enabling macvlan interfaces, we weren't correctly configuring things until ixgbe_setup_tc was called a second time, either by tweaking the number of queues or by increasing the macvlan count past 15. The issue came down to the fact that num_rx_pools is not populated until after the queues and interrupts are reinitialized. Instead of trying to set it sooner, we can just move the call to set up at least 1 traffic class to the SR-IOV/VMDq setup function, so that we just set it for this one case. We already had a spot that was configuring the queues for TC 0 in the code here anyway, so it makes sense to also set the number of TCs here as well. Fixes: 49cfbeb7 ("ixgbe: Fix handling of macvlan Tx offload") Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 05 June 2018, 2 commits
-
-
Submitted by Jesper Dangaard Brouer
Remove the ndo_xdp_flush call implementation ixgbe_xdp_flush, as no callers of ndo_xdp_flush are left. Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Submitted by Tony Nguyen
Similar to ixgbevf, the same possibility for a race exists here. Extend the RTNL lock in ixgbe_reset_subtask() to protect the state bits; this is to make sure that we get the most up-to-date values for the bits and avoid a possible race when going down. Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
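The resulting shape of the subtask, sketched (the state-bit names follow the driver's existing convention and are restated here, not taken from the patch itself):

    #include <linux/rtnetlink.h>

    static void example_reset_subtask(struct ixgbe_adapter *adapter)
    {
        if (!test_and_clear_bit(__IXGBE_RESET_REQUESTED, &adapter->state))
            return;

        rtnl_lock();
        /* Sample the state bits only while holding RTNL so a concurrent
         * ixgbe_close()/going-down path cannot change them underneath us. */
        if (test_bit(__IXGBE_DOWN, &adapter->state) ||
            test_bit(__IXGBE_REMOVING, &adapter->state) ||
            test_bit(__IXGBE_RESETTING, &adapter->state)) {
            rtnl_unlock();
            return;
        }

        ixgbe_reinit_locked(adapter);   /* assumed reset entry point */
        rtnl_unlock();
    }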
-