- 08 February 2017, 8 commits
-
Committed by Michael Chan
To support XDP_TX, we need to add a set of dedicated TX rings, each associated with the NAPI of an RX ring. To assign XDP rings and regular rings in a flexible way, we add a bp->tx_ring_map[] array to do the remapping. The netdev txq index is stored in the new field txq_index so that we can retrieve the netdev txq when handling TX completions. In this patch, before we introduce XDP_TX, the mapping is 1:1.
v2: Fixed a bug in bnxt_tx_int().
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
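For illustration, a minimal sketch of the 1:1 remapping and of how txq_index recovers the netdev txq on completion; all names except tx_ring_map and txq_index are simplified stand-ins, not the driver's actual structures:

```c
#include <linux/types.h>
#include <linux/netdevice.h>

/* Simplified stand-ins for the real bnxt structures. */
struct sketch_tx_ring {
	u16 txq_index;			/* netdev txq this ring completes to */
};

struct sketch_bp {
	struct net_device *dev;
	int tx_nr_rings;
	struct sketch_tx_ring *tx_ring;
	u16 *tx_ring_map;		/* logical txq -> hardware ring index */
};

/* Before XDP_TX exists, the remap is simply the identity. */
static void sketch_init_tx_ring_map(struct sketch_bp *bp)
{
	int i;

	for (i = 0; i < bp->tx_nr_rings; i++) {
		bp->tx_ring_map[i] = i;
		bp->tx_ring[i].txq_index = i;
	}
}

/* On TX completion, txq_index recovers the netdev queue to wake. */
static struct netdev_queue *sketch_txq(struct sketch_bp *bp,
				       struct sketch_tx_ring *txr)
{
	return netdev_get_tx_queue(bp->dev, txr->txq_index);
}
```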
-
Committed by Michael Chan
Currently, bnxt_setup_tc() and bnxt_set_channels() have similar, duplicated code to check and reserve rx and tx rings. Add a new function bnxt_reserve_rings() to centralize the logic. This will make it easier to add XDP_TX support, which requires allocating a new set of TX rings. The tx ring checking logic in bnxt_setup_msix() can also be removed, since the rings have been reserved beforehand.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
In the current code, we have separate rx_event and agg_event parameters to keep track of rx and aggregation events. Combine these events into a u8 event mask with different bits defined for different events. This way, it is easier to expand the logic to include XDP tx events.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
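A rough sketch of the bit-mask approach; the bit names below are hypothetical, not necessarily the macros the driver defines:

```c
#include <linux/types.h>

/* Hypothetical event bits -- the driver's real macro names may differ. */
#define SKETCH_RX_EVENT		0x01	/* RX completion seen                */
#define SKETCH_AGG_EVENT	0x02	/* aggregation completion seen       */
#define SKETCH_TX_EVENT		0x04	/* spare bit for XDP TX, added later */

static void sketch_note_rx(u8 *event, bool is_agg)
{
	*event |= is_agg ? SKETCH_AGG_EVENT : SKETCH_RX_EVENT;
}

static void sketch_ring_doorbells(u8 event)
{
	if (event & (SKETCH_RX_EVENT | SKETCH_AGG_EVENT)) {
		/* ring the RX/aggregation doorbells once per NAPI poll */
	}
	if (event & SKETCH_TX_EVENT) {
		/* ring the XDP TX doorbell */
	}
}
```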
-
Committed by Michael Chan
This mode is to support XDP. In this mode, each rx ring is configured with page-sized buffers for linear placement of each packet. The MTU will be restricted to what the page-sized buffers can support.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
Convert the global constants BNXT_RX_OFFSET and BNXT_RX_DMA_OFFSET to device parameters. This will make it easier to support XDP with headroom support, which requires different RX buffer offsets.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
When the driver is running in XDP mode, rx buffers are DMA mapped as DMA_BIDIRECTIONAL. Add a field so the code will map/unmap rx buffers according to this field.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
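A minimal sketch of mapping an rx buffer through a per-device direction field, assuming a simplified container struct rather than the real struct bnxt:

```c
#include <linux/dma-mapping.h>

/* Hypothetical container -- the real driver keeps the field in struct bnxt. */
struct sketch_rx_cfg {
	enum dma_data_direction rx_dir;	/* DMA_FROM_DEVICE or DMA_BIDIRECTIONAL */
};

static dma_addr_t sketch_map_rx_buf(struct device *dev,
				    struct sketch_rx_cfg *cfg,
				    void *data, size_t len)
{
	/* XDP_TX needs the device to both write RX data and read it back
	 * for transmit, hence DMA_BIDIRECTIONAL in that mode.
	 */
	return dma_map_single(dev, data, len, cfg->rx_dir);
}
```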
-
Committed by Michael Chan
To support XDP_TX, we need the RX buffer's DMA address to transmit the packet. Convert the DMA address field to a permanent field in bnxt_sw_rx_bd.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
Minor refactoring of bnxt_rx_skb() so that it can easily be replaced by a new function that handles packets in a single page. Also, use a function pointer bp->rx_skb_func() to switch to a new function when we add the new mode in the next patch. Add a new field data_ptr that points to the packet data in the bnxt_sw_rx_bd structure. The original data field is changed to a void pointer so that it can hold either the kmalloc'ed data or a page pointer. The last parameter of bnxt_rx_skb(), which was the length parameter, is changed to include the payload offset of the packet in the upper 16 bits. The offset is needed to support the rx page mode and is not used in this existing function.
v3: Added a new data_ptr parameter to bp->rx_skb_func(). The caller has the option to modify the starting address of the packet. This will be needed when XDP with headroom support is added.
v2: Changed the name of the last parameter to offset_and_len to make the code more clear.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
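The offset-and-length packing can be sketched as follows; the macro and helper names are illustrative, not the driver's:

```c
#include <linux/types.h>

/* Sketch of the "payload offset in the upper 16 bits, length in the lower
 * 16 bits" convention described above.
 */
#define SKETCH_RX_OFFSET_SHIFT	16
#define SKETCH_RX_LEN_MASK	0xffff

static inline u32 sketch_pack_offset_and_len(u16 payload_off, u16 len)
{
	return ((u32)payload_off << SKETCH_RX_OFFSET_SHIFT) | len;
}

static inline u16 sketch_unpack_len(u32 offset_and_len)
{
	return offset_and_len & SKETCH_RX_LEN_MASK;
}

static inline u16 sketch_unpack_offset(u32 offset_and_len)
{
	return offset_and_len >> SKETCH_RX_OFFSET_SHIFT;
}
```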
-
- 07 February 2017, 1 commit
-
Committed by Parav Pandit
This patch makes use of the is_vlan_dev() helper instead of open-coding the flag comparison that is_vlan_dev() already performs.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Daniel Jurgens <danielj@mellanox.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Jon Maxwell <jmaxwell37@gmail.com>
Acked-by: Johannes Thumshirn <jth@kernel.org>
Acked-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
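A minimal before/after sketch of the conversion (the open-coded test is shown only as a comment):

```c
#include <linux/if_vlan.h>
#include <linux/netdevice.h>

static bool sketch_dev_is_vlan(struct net_device *dev)
{
	/* Open-coded flag test that the patch replaces:
	 *	return dev->priv_flags & IFF_802_1Q_VLAN;
	 * Preferred helper, same check under the hood:
	 */
	return is_vlan_dev(dev);
}
```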
-
- 01 February 2017, 3 commits
-
Committed by Rafał Miłecki
This adds support for using bgmac with PHYs supported by standalone PHY drivers. Having any PHY initialization in bgmac is hacky and shouldn't be extended, but rather removed if anyone has hardware to test it.
Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Rafał Miłecki
Adding struct bcma_mdio was a workaround for the bcma code not having access to the struct bgmac used in the core code. Now that we don't duplicate this struct, we can just use it internally in the bcma code. This simplifies the code and allows access to all bgmac driver details from all places in the bcma code.
Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Rafał Miłecki
So far we were allocating struct bgmac in 3 places: platform code, bcma code and the shared bgmac_enet_probe function. The reason for this was bgmac_enet_probe: 1) requiring an early-filled struct bgmac, and 2) calling alloc_etherdev on its own in order to use netdev_priv later. This solution had a few drawbacks: 1) it duplicated allocation code, 2) it required copying the early-filled struct, and 3) it left the platform/bcma code with access only to an unused struct. Solve this situation by simply extracting some probe code into the new bgmac_alloc function.
Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
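A sketch of what such an extracted allocation helper can look like, assuming a heavily simplified struct bgmac; this is not the driver's actual bgmac_alloc():

```c
#include <linux/etherdevice.h>
#include <linux/netdevice.h>

/* Heavily simplified stand-in for struct bgmac. */
struct sketch_bgmac {
	struct net_device *net_dev;
	struct device *dma_dev;
	/* ... bus-specific fields filled in by bcma/platform code ... */
};

/* One place that allocates the netdev and hands back the private struct,
 * so bus code can fill it in before the shared probe runs.
 */
static struct sketch_bgmac *sketch_bgmac_alloc(struct device *dma_dev)
{
	struct net_device *net_dev;
	struct sketch_bgmac *bgmac;

	net_dev = alloc_etherdev(sizeof(*bgmac));
	if (!net_dev)
		return NULL;

	bgmac = netdev_priv(net_dev);
	bgmac->net_dev = net_dev;
	bgmac->dma_dev = dma_dev;
	return bgmac;
}
```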
-
- 31 January 2017, 1 commit
-
Committed by Eric Dumazet
napi_complete_done() allows opting in to gro_flush_timeout, added back in linux-3.19 by commit 3b47d303 ("net: gro: add a per device gro flush timer"). This allows for more efficient GRO aggregation without sacrificing latency.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
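A minimal sketch of the converted NAPI poll handler; sketch_clean_rx() is a stand-in for the driver's real RX work function:

```c
#include <linux/netdevice.h>

/* Stand-in: process up to @budget RX completions, return the count. */
static int sketch_clean_rx(struct napi_struct *napi, int budget)
{
	return 0;
}

static int sketch_poll(struct napi_struct *napi, int budget)
{
	int work_done = sketch_clean_rx(napi, budget);

	if (work_done < budget) {
		/* Reporting how much work was done lets the stack honour
		 * gro_flush_timeout instead of flushing GRO immediately.
		 */
		napi_complete_done(napi, work_done);
		/* re-enable device interrupts here */
	}
	return work_done;
}
```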
-
- 26 January 2017, 3 commits
-
Committed by Michael Chan
bnxt_get_port_module_status() calls bnxt_update_link(), which expects RTNL to be held. In bnxt_sp_task(), which does not hold RTNL, we need to call it with a prior call to bnxt_rtnl_lock_sp(), and the call needs to be moved to the end of bnxt_sp_task().
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
bnxt_update_link() is called from multiple code paths. Most callers, such as open and ethtool, already hold RTNL. Only the caller bnxt_sp_task() does not, so it is a bug to take RTNL inside bnxt_update_link(). Fix it by removing the RTNL inside bnxt_update_link(); the function now expects the caller to always hold RTNL. In bnxt_sp_task(), call bnxt_rtnl_lock_sp() before calling bnxt_update_link(). We also need to move the call to the end of bnxt_sp_task() since it will be clearing the BNXT_STATE_IN_SP_TASK bit.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
In bnxt_sp_task(), we set the bit BNXT_STATE_IN_SP_TASK so that bnxt_close() will synchronize and wait for bnxt_sp_task() to finish. Some functions in bnxt_sp_task() require us to clear BNXT_STATE_IN_SP_TASK and then acquire rtnl_lock() to prevent race conditions, and there are some bugs related to this logic. This patch refactors the code to have common bnxt_rtnl_lock_sp() and bnxt_rtnl_unlock_sp() helpers to handle RTNL and the clearing/setting of the bit; multiple functions will need the same logic. We also need to move bnxt_reset() to the end of bnxt_sp_task(). Functions that clear BNXT_STATE_IN_SP_TASK must be the last functions called in bnxt_sp_task(); the common scheme handles this condition properly.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
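The locking scheme described above can be sketched as follows; the names stand in for bnxt_rtnl_lock_sp()/bnxt_rtnl_unlock_sp() and BNXT_STATE_IN_SP_TASK, and this is not the driver's actual code:

```c
#include <linux/bitops.h>
#include <linux/rtnetlink.h>

#define SKETCH_STATE_IN_SP_TASK	0	/* stand-in for BNXT_STATE_IN_SP_TASK */

static void sketch_rtnl_lock_sp(unsigned long *state)
{
	/* bnxt_close() may hold RTNL while waiting for this bit to clear,
	 * so drop the bit before blocking on rtnl_lock() to avoid deadlock.
	 */
	clear_bit(SKETCH_STATE_IN_SP_TASK, state);
	rtnl_lock();
}

static void sketch_rtnl_unlock_sp(unsigned long *state)
{
	/* Restore the bit while still under RTNL, then release RTNL. */
	set_bit(SKETCH_STATE_IN_SP_TASK, state);
	rtnl_unlock();
}
```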
-
- 25 January 2017, 1 commit
-
Committed by Philippe Reynes
The ethtool api {get|set}_settings is deprecated. We move this driver to the new api {get|set}_link_ksettings. As I don't have the hardware, I'd be very pleased if someone could test this patch.
Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Acked-by: Yuval Mintz <Yuval.Mintz@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
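The general shape of such a conversion, with empty stand-in handlers rather than the driver's real implementations:

```c
#include <linux/errno.h>
#include <linux/ethtool.h>
#include <linux/netdevice.h>

static int sketch_get_link_ksettings(struct net_device *dev,
				     struct ethtool_link_ksettings *cmd)
{
	/* Placeholder values; a real driver reports its link state here. */
	ethtool_link_ksettings_zero_link_mode(cmd, supported);
	cmd->base.speed = SPEED_UNKNOWN;
	cmd->base.duplex = DUPLEX_UNKNOWN;
	return 0;
}

static int sketch_set_link_ksettings(struct net_device *dev,
				     const struct ethtool_link_ksettings *cmd)
{
	return -EOPNOTSUPP;	/* stand-in */
}

static const struct ethtool_ops sketch_ethtool_ops = {
	.get_link_ksettings	= sketch_get_link_ksettings,
	.set_link_ksettings	= sketch_set_link_ksettings,
};
```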
-
- 24 January 2017, 1 commit
-
Committed by Eric Dumazet
Commit 4cace675 ("bnx2x: Alloc 4k fragment for each rx ring buffer element") added extra put_page() and get_page() calls on arches where PAGE_SIZE=4K, like x86. Reorder things to avoid this overhead.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Gabriel Krisman Bertazi <krisman@linux.vnet.ibm.com>
Cc: Yuval Mintz <Yuval.Mintz@cavium.com>
Cc: Ariel Elior <ariel.elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 23 January 2017, 2 commits
-
Committed by Florian Fainelli
Add support for the SYSTEMPORT Lite Ethernet controller. This piece of hardware is largely based on the full-blown SYSTEMPORT and differs in the following:
- no full-blown UniMAC; instead we have the MagicPacket matching from UniMAC at the same offset, and a GMII Interface Block (GIB) for the MAC-level stuff. Since we always interface to an Ethernet switch, which is fully Ethernet compliant, shortcuts could be made
- 16 transmit queues, whose interrupts are moved into the first Level-2 interrupt controller bank
- slight TDMA offset change (a register was inserted after TDMA_STATUS, *sigh*)
- 256 RX descriptors (512 words) and 256 TX descriptors (not visible)
As a consequence of these differences, update the code paths accordingly to differentiate the full-blown from the light version.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Florian Fainelli
In preparation for adding SYSTEMPORT Lite, which has half as many transmit queues as SYSTEMPORT, make sure we allocate TX rings based on the systemport,txq property to get an appropriate memory footprint.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
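A minimal sketch of sizing the rings from that device-tree property; the default value is an assumed placeholder, not the driver's:

```c
#include <linux/of.h>

#define SKETCH_DEFAULT_TXQS	16	/* assumed fallback */

static unsigned int sketch_num_txqs(struct device_node *dn)
{
	u32 txq = SKETCH_DEFAULT_TXQS;

	/* Leaves txq at the default if the property is absent. */
	of_property_read_u32(dn, "systemport,txq", &txq);
	return txq;
}
```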
-
- 21 January 2017, 1 commit
-
Committed by Arnd Bergmann
gcc-7, and probably earlier versions, get confused by this function and print a harmless warning:
drivers/net/ethernet/broadcom/bcm63xx_enet.c: In function 'bcm_enet_open':
drivers/net/ethernet/broadcom/bcm63xx_enet.c:1130:3: error: 'phydev' may be used uninitialized in this function [-Werror=maybe-uninitialized]
This adds an initialization for the 'phydev' variable when it is unused and changes the check to test for that NULL pointer, to make it clear that we always pass a valid pointer here.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
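The shape of the fix, with a hypothetical condition standing in for the driver's logic:

```c
#include <linux/netdevice.h>
#include <linux/phy.h>

/* Give 'phydev' an explicit NULL initialization and test the pointer
 * itself, so the compiler can prove it is never used uninitialized.
 * 'has_external_phy' is an illustrative stand-in for the real condition.
 */
static void sketch_open(struct net_device *dev, bool has_external_phy)
{
	struct phy_device *phydev = NULL;

	if (has_external_phy)
		phydev = dev->phydev;	/* attached earlier by the driver */

	if (phydev)
		phy_start(phydev);
}
```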
-
- 19 January 2017, 1 commit
-
Committed by Michael Chan
In the TPA GRO code path, initialize the tcp_opt_len variable to 0 so that it will be correct for packets without TCP timestamps. The bug caused the SKB fields to be incorrectly set up for packets without TCP timestamps, leading to these packets being rejected by the stack.
Reported-by: Andy Gospodarek <andrew.gospodarek@broadocm.com>
Acked-by: Andy Gospodarek <andrew.gospodarek@broadocm.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 14 January 2017, 6 commits
-
Committed by Michael Chan
Add the ulp_sriov_cfg callbacks when the number of VFs is changing. This allows the RDMA driver to provision RDMA resources for the VFs.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
Add LED blinking code to support ethtool -p on the PF.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
Commit bdbd1eb5 ("bnxt_en: Handle no aggregation ring gracefully.") introduced the BNXT_FLAG_NO_AGG_RINGS flag. For consistency, bnxt_set_tpa_flags() should also clear the TPA flags when there are no aggregation rings.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
Fix the following build warnings:
  CC [M]  drivers/net/ethernet/broadcom/bnxt/bnxt.o
drivers/net/ethernet/broadcom/bnxt/bnxt.c:4947:21: warning: ‘bnxt_get_max_func_rss_ctxs’ defined but not used [-Wunused-function]
 static unsigned int bnxt_get_max_func_rss_ctxs(struct bnxt *bp)
drivers/net/ethernet/broadcom/bnxt/bnxt.c:4956:21: warning: ‘bnxt_get_max_func_vnics’ defined but not used [-Wunused-function]
 static unsigned int bnxt_get_max_func_vnics(struct bnxt *bp)
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Florian Fainelli
The __bcm_sysport_tx_reclaim() function is used to reclaim transmit resources in different places within the driver, and most of them should not affect the state of the transmit flow control. Introduce bcm_sysport_tx_clean(), which cleans the ring but does not re-enable flow control towards the networking stack, and make bcm_sysport_tx_reclaim() do the actual transmit queue flow control.
Fixes: 80105bef ("net: systemport: add Broadcom SYSTEMPORT Ethernet MAC driver")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 January 2017, 2 commits
-
Committed by stephen hemminger
In dev_get_stats() the statistics structure storage has already been zeroed. Therefore network drivers do not need to call memset() again.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by stephen hemminger
The network device operation for reading statistics is only called in one place, and it ignores the return value. Having a structure return value is potentially confusing because some future driver could incorrectly assume that the return value was used. Fix all drivers with ndo_get_stats64 to have a void function.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
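The post-conversion shape of the hook, with placeholder counters; it also relies on dev_get_stats() pre-zeroing the structure, per the previous entry:

```c
#include <linux/netdevice.h>

static void sketch_get_stats64(struct net_device *dev,
			       struct rtnl_link_stats64 *stats)
{
	/* No memset() needed: dev_get_stats() hands us zeroed storage,
	 * and the hook now returns nothing.
	 */
	stats->rx_packets = 0;	/* accumulate per-ring counters here */
	stats->tx_packets = 0;
}

static const struct net_device_ops sketch_netdev_ops = {
	.ndo_get_stats64	= sketch_get_stats64,
};
```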
-
- 08 January 2017, 1 commit
-
Committed by Michael Chan
The driver's ndo_get_stats64() method is not always called under RTNL, so it can race with driver close or ethtool reconfigurations. Fix the race condition by taking the tp->lock spinlock in tg3_free_consistent() when freeing the tp->hw_stats memory block. tg3_get_stats64() already takes tp->lock.
Reported-by: Wang Yufen <wangyufen@huawei.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
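A sketch of the locking pattern the fix describes, using simplified stand-ins for struct tg3 and its functions:

```c
#include <linux/spinlock.h>

/* Simplified stand-in for struct tg3; only the fields needed here. */
struct sketch_tg3 {
	spinlock_t lock;
	void *hw_stats;
};

/* Teardown takes the same lock the stats reader holds, so a concurrent
 * stats read never dereferences a block that is being freed.
 */
static void sketch_free_stats(struct sketch_tg3 *tp)
{
	spin_lock_bh(&tp->lock);
	tp->hw_stats = NULL;	/* readers check the pointer under the lock */
	spin_unlock_bh(&tp->lock);
	/* the DMA block itself is released after the pointer is cleared */
}

static void sketch_get_stats64(struct sketch_tg3 *tp)
{
	spin_lock_bh(&tp->lock);
	if (tp->hw_stats) {
		/* copy hardware counters out */
	}
	spin_unlock_bh(&tp->lock);
}
```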
-
- 05 January 2017, 2 commits
-
Committed by Florian Fainelli
Inserting the TSB means adding an extra 8 bytes in front of the packet that will be used as metadata information by the TDMA engine, but stripped off, so it does not really help with the packet padding. For some odd packet sizes that fall below the 60-byte payload (e.g. ARP) we can end up padding them after the TSB insertion, thus making them 64 bytes, but with the TDMA stripping off the first 8 bytes, they could still be smaller than the 64 bytes required to ingress the switch. Fix this by swapping the padding and TSB insertion, guaranteeing that the packets have the right sizes.
Fixes: 80105bef ("net: systemport: add Broadcom SYSTEMPORT Ethernet MAC driver")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
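A sketch of the corrected ordering, which also uses the skb_put_padto() helper adopted in the next entry; sketch_insert_tsb() only reserves headroom and stands in for the real TSB insertion:

```c
#include <linux/if_ether.h>
#include <linux/skbuff.h>

#define SKETCH_TSB_LEN	8	/* metadata bytes the TDMA engine strips */

/* Stand-in: the real driver pushes the 8-byte status block here; this
 * sketch only guarantees the headroom for it.
 */
static int sketch_insert_tsb(struct sk_buff *skb)
{
	return skb_cow_head(skb, SKETCH_TSB_LEN);
}

/* Pad the frame to the 60-byte minimum first, then prepend the TSB, so
 * the bytes the hardware strips never come out of the padding.
 */
static int sketch_xmit_prepare(struct sk_buff *skb)
{
	int ret = skb_put_padto(skb, ETH_ZLEN);

	if (ret)
		return ret;	/* skb has already been freed on error */

	return sketch_insert_tsb(skb);
}
```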
-
Committed by Florian Fainelli
Since we need to pad our packets, utilize skb_put_padto(), which increases skb->len by how much we need to pad, allowing us to eliminate the test on skb->len right below.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 30 December 2016, 7 commits
-
Committed by Michael Chan
The current code assumes that we will always have at least 2 rx rings, 1 of which will be used as an aggregation ring for TPA and jumbo page placements. However, it is possible, especially on a VF, that there is only 1 rx ring available. In this scenario the current code will fail to initialize. To handle it, we need to properly set up only 1 ring without aggregation. Set a new flag BNXT_FLAG_NO_AGG_RINGS for this condition and add logic to set up the chip to place RX data linearly into a single buffer per packet.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
With the added support for the bnxt_re RDMA driver, both drivers can be allocating completion rings in any order, and the firmware does not know which completion ring should be receiving async events. Add an extra step to tell the firmware the completion ring number for receiving async events after bnxt_en allocates the completion rings.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
In order to properly support TX rate limiting in SRIOV VF functions or NPAR functions, the firmware needs better control over tx ring allocations. The new scheme requires the driver to reserve the number of tx rings and to query whether the requested number of tx rings is reserved. The driver will use the new scheme when the firmware interface spec is 1.6.1 or newer.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
Accept ipv6 flows in .ndo_rx_flow_steer() and support ETHTOOL_GRXCLSRULE ipv6 flows.
Signed-off-by: Michael Chan <michael.chan@broadocm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
Assign additional vnics to VFs whenever possible so that NTUPLE can be supported on the VFs.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
The existing hardware RFS mode uses one hardware RSS context block per ring just to calculate the RSS hash. This is very wasteful and prevents VF functions from using it. The new hardware mode shares the same hardware RSS context for RSS placement and RFS steering, which allows VFs to enable RFS.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
Add a function bnxt_rfs_supported() that determines whether the chip supports RFS. Refactor the existing function bnxt_rfs_capable(), which determines whether run-time conditions support RFS.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-