- 02 May 2019, 31 commits
-
-
Submitted by Brett Creeley
Currently the link event flow works, but can be much better. Refactor the link event flow to make it cleaner and more clear on what is going on.
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Tony Nguyen
The PHY type ICE_PHY_TYPE_LOW_25G_AUI_C2C is missing from ice_get_settings_link_up(), which is causing a warning message for an unrecognized PHY. Add the PHY type to correctly set the settings and avoid the warning message.
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Brett Creeley
Every time we want to re-enable interrupts and/or write to a register that requires an interrupt vector's hardware index, we do the following:
vsi->hw_base_vector + q_vector->v_idx
This is a wasteful operation, especially in the hot path. Fix this by adding a u16 reg_idx member to the ice_q_vector structure and make the necessary changes to make this work.
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
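A minimal sketch of the idea (the field placement and the helper shown are assumptions, not the driver's exact code): compute the hardware vector index once at setup time instead of re-deriving it on every interrupt-path register access.

struct ice_q_vector {
	struct ice_vsi *vsi;
	u16 v_idx;	/* index into the VSI's q_vectors array */
	u16 reg_idx;	/* cached hardware vector index for register access */
	/* ... */
};

/* Set once when the vector is mapped to hardware. */
static void ice_q_vector_set_reg_idx(struct ice_vsi *vsi,
				     struct ice_q_vector *q_vector)
{
	q_vector->reg_idx = vsi->hw_base_vector + q_vector->v_idx;
}

/* Hot-path users can then index registers with q_vector->reg_idx directly,
 * e.g. wr32(hw, GLINT_DYN_CTL(q_vector->reg_idx), ...), instead of
 * re-adding vsi->hw_base_vector each time. */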
-
Submitted by Md Fahad Iqbal Polash
Runtime changes of the PFINT_OICR_ENA register are unnecessary. The handlers should always clear the atomic bit for each task as they start, because this makes sure that any late interrupt will either 1) re-set the bit, or 2) be handled directly by the "already running" task handler.
Signed-off-by: Md Fahad Iqbal Polash <md.fahad.iqbal.polash@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
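As an illustration of the pattern described above (a sketch; the bit name and handler shown are assumptions rather than the exact code):

static void ice_handle_mdd_event_sketch(struct ice_pf *pf)
{
	/* Clear the pending bit up front: a late interrupt will either
	 * re-set the bit (re-scheduling this work) or be absorbed by the
	 * already-running pass. */
	if (!test_and_clear_bit(__ICE_MDD_EVENT_PENDING, pf->state))
		return;

	/* ... process the event ... */
}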
-
Submitted by Akeem G Abodunrin
This patch fixes an issue with non-trusted VFs being able to add more than the permitted number of VLANs, by adding a check in ice_vc_process_vlan_msg. Also, don't return an error in this case, as the VF does not need to know that it is not trusted. Also rework ice_vsi_kill_vlan to use the right types.
Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Brett Creeley
In ice_vsi_ctrl_rx_rings() we are unnecessarily waiting for QRX_CTRL_QENA_REQ and QRX_CTRL_QENA_STAT to be the same value prior to disabling each Rx queue. There is no reason to do this, so remove this wait loop; we already have a wait loop after disabling/enabling the Rx queue through the QRX_CTRL register to make sure it gets successfully disabled/enabled.
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Brett Creeley
Currently the driver allows rx-usecs-high values to be set, but when querying the device for rx-usecs-high the value does not stick. This is because it was not yet implemented. Add code to allow the user to change rx-usecs-high and use this to set the q_vector's intrl value.
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Paul Greenwalt
Add support for setting a 52-byte RSS hash key.
Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Brett Creeley
There are many places in the code where we do the following:
for (i = 0; i < vsi->num_q_vectors; i++)
Instead, use the ice_for_each_q_vector macro:
ice_for_each_q_vector(vsi, i)
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
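The macro is presumably just a thin wrapper around that loop bound; a sketch (not copied verbatim from the driver):

#define ice_for_each_q_vector(vsi, i) \
	for ((i) = 0; (i) < (vsi)->num_q_vectors; (i)++)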
-
Submitted by Maciej Fijalkowski
When stopping Tx rings, we use 'i' as a ring array index for looking up whether the ice_ring exists and has a q_vector assigned. This checks rings only within a given TC, but we need to go through every ring in the VSI. Use 'q_idx' instead.
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Brett Creeley
Reduce the scope of the variable 'err' to inside the for loop instead of using it as a second looping conditional. Also, while here, improve the debug message if we fail to configure a Rx queue.
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
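A hedged sketch of the shape of the change (the function name, fields and message wording are assumptions):

static int ice_vsi_cfg_rxqs_sketch(struct ice_vsi *vsi)
{
	u16 i;

	for (i = 0; i < vsi->num_rxq; i++) {
		/* err no longer doubles as a loop condition */
		int err = ice_setup_rx_ctx(vsi->rx_rings[i]);

		if (err) {
			dev_err(&vsi->back->pdev->dev,
				"ice_setup_rx_ctx failed for RxQ %d, err %d\n",
				i, err);
			return err;
		}
	}

	return 0;
}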
-
Submitted by Bruce Allan
Static analysis points out that the default case in the switch statement in ice_get_itr_intrl_gran() is an infeasible condition, making the default case statement unreachable. Remove it and, since the function no longer returns anything but success, change it to return void and update the only call to it accordingly.
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Akeem G Abodunrin
If there is no queue to disable, return the appropriate configuration error earlier, without acquiring the lock.
Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Anirudh Venkataramanan
This patch introduces a framework to store queue-specific information in VSI queue contexts. Currently the VSI queue context (represented by struct ice_q_ctx) only has q_handle as a member. In future patches, this structure will be updated to hold queue-specific information.
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
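Based on that description, the initial structure is presumably as small as this (sketch):

struct ice_q_ctx {
	u16 q_handle;
	/* queue-specific fields to be added by later patches */
};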
-
Submitted by Maxime Chevallier
This commit introduces support for the "Drop" action in classification offload. This corresponds to the "-1" action with ethtool -N. It is achieved using the color marking actions available in the C2 engine, which associate a color with a packet. These colors can be Green, Yellow or Red, with Red meaning that the packet should be dropped. Green and Yellow colors are interpreted by the Policer, which isn't supported yet. This method of dropping via the Classifier is different from the already existing early-drop features, such as VLAN filtering and MAC UC/MC filtering, which are performed during the Parsing step and therefore take precedence over classification actions.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Maxime Chevallier
This commit introduces basic classification offloading support for the PPv2 controller. The PPv2 classifier has many classification engines; for now we only use the C2 TCAM match engine. This engine performs ternary lookups on 64-bit keys (called Header Extracted Keys, HEK) that are built by extracting fields from the packet header and concatenating them. At most 4 fields can be extracted for a single lookup. This basic implementation can build the HEK from the L4 source and destination ports (for UDP and TCP); more fields are to be added in the future. Classification flows are added through the ethtool interface, using the newly introduced flow_rule infrastructure as an internal rule representation, making it easier to implement tc flower rules later if need be. The internal design for now allocates one range of 4 rules per port due to the internal design of the flow table, which uses 22 sub-flows. When inserting a classification rule, the rule is created in every relevant sub-flow. This low rule count is a very simple design which quickly reaches the limitations of flow table ordering, but guarantees that the rule ordering will always be respected. This commit only introduces support for the "steer to rxq" action.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Maxime Chevallier
As of today, the classification code is used only for RSS. We split the incoming traffic into multiple flows that correspond to the ethtool flow_type parameter. We don't want to use the ethtool flow definitions such as TCP_V4_FLOW, for several reasons: we want to decouple the driver code from ethtool as much as possible, so that we can easily use other interfaces such as tc flower; and we want the flow_type to be a bitfield, so that we can match flows embedded into each other, such as TCP4, which is a subset of IP4. This commit does the conversion to the newer type.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
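An illustrative sketch of the bitfield approach (the macro names are assumptions, not the driver's actual defines): an IPv4 TCP flow sets both the IP4 and TCP bits, so a rule matching on the IP4 bit also covers the more specific TCP4 flows.

/* BIT() comes from <linux/bits.h> */
#define MVPP2_FL_IP4	BIT(0)
#define MVPP2_FL_IP6	BIT(1)
#define MVPP2_FL_TCP	BIT(2)
#define MVPP2_FL_UDP	BIT(3)

#define MVPP2_FL_TCP4	(MVPP2_FL_IP4 | MVPP2_FL_TCP)
#define MVPP2_FL_UDP4	(MVPP2_FL_IP4 | MVPP2_FL_UDP)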
-
Submitted by Maxime Chevallier
Cosmetic patch removing extra whitespace when writing the flow_table entries.
Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Esben Haabendal
As soon as TAILDESCR_PTR is written, DMA transfers might start. Let's ensure we are ready to receive DMA IRQs before doing that.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Esben Haabendal
This allows custom setup of IRQ coalescing for platforms using a legacy platform_device. The IRQ timeout and count parameters can be used for tuning CPU load vs. latency. I have maintained the 0x00000400 bit in TX_CHNL_CTRL. It is specified as unused in the documentation I have available. It does not make any difference on the hardware I have available, so it is left in so as not to risk breaking other platforms where it might be used.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Esben Haabendal
Use usleep_range() to avoid problems with msleep() actually sleeping much longer than expected.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
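A sketch of the resulting pattern (the register and mask names are illustrative, borrowed loosely from the ll_temac driver, and the exact sleep range may differ):

static int temac_wait_ready_sketch(struct temac_local *lp)
{
	int tries = 100;

	while (!(temac_ior(lp, XTE_RDY0_OFFSET) & XTE_RDY0_HARD_ACS_RDY_MASK)) {
		if (--tries == 0)
			return -ETIMEDOUT;
		/* msleep(1) may sleep far longer than 1 ms on some systems;
		 * keep the delay bounded instead */
		usleep_range(500, 1000);
	}

	return 0;
}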
-
Submitted by Esben Haabendal
As we actually use a BD for both the skb and each frag contained in it, the oldest TX BD would be overwritten when there was exactly one BD fewer than needed.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Esben Haabendal
Unmap the actual buffer length, not the amount of data received, avoiding resource exhaustion of swiotlb (seen on an x86_64 platform).
Signed-off-by: Esben Haabendal <esben@geanix.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
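In sketch form, the RX completion path then unmaps the size that was originally mapped rather than the received frame length (the constant and variable names here are assumptions based on the commit text):

/* dma_addr is the address recorded when the full receive buffer was
 * mapped; unmap that full buffer size, not the received frame length */
dma_unmap_single(ndev->dev.parent, dma_addr,
		 XTE_MAX_JUMBO_FRAME_SIZE, DMA_FROM_DEVICE);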
-
Submitted by Esben Haabendal
Indirect register access goes through a DCR bus bridge, which allows only one outstanding transaction. And to make matters worse, each TEMAC IP block contains two Ethernet interfaces, and although they seem to have separate registers for indirect access, they actually share them. To be more specific, the MSW, LSW and CTL registers are physically shared between Ethernet interfaces in the same TEMAC IP, with the RDY register being (almost) specific to each Ethernet interface. The 0x10000 bit in RDY reflects the combined bus ready state, though. So we need to take care to synchronize not only within a single device, but also between devices in the same TEMAC IP. This commit allows doing that for legacy platform devices. For OF devices, the xlnx,compound parent of the temac node should be used to find siblings and set up a shared indirect_mutex between them. I will leave this work to somebody else, as I don't have hardware to test it. No regression is introduced by this, as before this commit, using two Ethernet interfaces in the same TEMAC block was simply broken.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Esben Haabendal
With little-endian and 64-bit support in place, the ll_temac driver can now be used on x86 and x86_64 platforms. And while at it, enable COMPILE_TEST also.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Esben Haabendal
Both TEMAC and SDMA are big-endian, so make sure that all values in the SDMA buffer descriptors (cdmac_bd) are handled as big-endian, independent of the host endianness. With all currently supported platforms being big-endian, this change makes no difference for any of them. Note, when using app3 and app4 for piggybacking skb pointers there is no need to care about endianness, as neither TEMAC nor SDMA accesses app3 and app4 in TX buffer descriptors.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
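In sketch form (the descriptor field names follow the commit text; treat the surrounding details as assumptions), every descriptor access goes through the byte-order helpers:

/* writing a TX descriptor */
cur_p->phys = cpu_to_be32(dma_addr);
cur_p->len = cpu_to_be32(skb_headlen(skb));

/* reading it back on completion */
length = be32_to_cpu(cur_p->len);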
-
Submitted by Esben Haabendal
Replace the powerpc-specific MMIO register access functions with the generic big-endian MMIO access functions, and add support for little-endian access depending on configuration. Big-endian access is maintained as the default, but little-endian can be configured in the device-tree binding or in platform data. The temac_ior()/temac_iow() functions are replaced with macro wrappers to avoid modifying existing code more than necessary.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
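A sketch of how the wrappers could look (close in spirit to, but not copied from, the driver):

static u32 _temac_ior_be(struct temac_local *lp, int offset)
{
	return ioread32be(lp->regs + offset);
}

static u32 _temac_ior_le(struct temac_local *lp, int offset)
{
	return ioread32(lp->regs + offset);
}

/* existing call sites keep using the familiar names via thin wrappers;
 * lp->temac_ior is set to the _be or _le variant at probe time */
#define temac_ior(lp, o)	((lp)->temac_ior((lp), (o)))
#define temac_iow(lp, o, v)	((lp)->temac_iow((lp), (o), (v)))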
-
Submitted by Esben Haabendal
The use of the buffer descriptor APP4 field (32-bit) for storing the skb pointer obviously does not work on 64-bit platforms. As APP3 is also unused, we can use that to store the other half of 64-bit pointer values. Contrary to what is hinted at in the commit message of commit 15bfe05c ("net: ethernet: xilinx: Mark XILINX_LL_TEMAC broken on 64-bit"), there are no other pointers stored in cdmac_bd.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
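A sketch of the split (the helper names are assumptions): the pointer's upper and lower 32 bits go into app3 and app4 respectively, and on 32-bit hosts the upper half is simply zero.

static void ptr_to_txbd(void *p, struct cdmac_bd *bd)
{
	bd->app3 = (u32)(((u64)(uintptr_t)p) >> 32);
	bd->app4 = (u32)((uintptr_t)p & 0xFFFFFFFF);
}

static void *ptr_from_txbd(struct cdmac_bd *bd)
{
	return (void *)(uintptr_t)(((u64)bd->app3 << 32) | bd->app4);
}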
-
Submitted by Esben Haabendal
Support initialization with platdata, so the driver can be used on non-device-tree platforms. For currently supported device-tree platforms, the driver should behave as before.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Esben Haabendal
As a side effect, a few error cases are fixed. If of_iomap() of sdma_regs failed, no error code was returned; this is fixed to return -ENOMEM, similar to the of_iomap() failure for regs. If sysfs_create_group() or register_netdev() failed, lp->phy_node was not released. Finally, the order in the remove function is corrected to be the reverse of what is done in probe, i.e. calling temac_mdio_teardown() last, so that we unregister the netdev that is most likely using the mdio_bus first.
Signed-off-by: Esben Haabendal <esben@geanix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by YueHaibing
Fix inconsistent IS_ERR and PTR_ERR usage in cpsw_probe: the proper pointer to use is clk instead of mode. This issue was detected with the help of Coccinelle.
Fixes: 83a8471b ("net: ethernet: ti: cpsw: refactor probe to group common hw initialization")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
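The corrected pattern, in sketch form (the clock name and message are shown for illustration only):

clk = devm_clk_get(dev, "fck");
if (IS_ERR(clk)) {
	ret = PTR_ERR(clk);	/* previously PTR_ERR(mode), i.e. the wrong pointer */
	dev_err(dev, "fck is not found %d\n", ret);
	return ret;
}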
-
- 01 May 2019, 9 commits
-
-
Submitted by Gustavo A. R. Silva
In preparation to enabling -Wimplicit-fallthrough, mark switch cases where we are expecting to fall through. This patch fixes the following warning:
drivers/net/ethernet/sfc/mcdi_port.c: In function ‘efx_mcdi_phy_decode_link’:
./include/linux/compiler.h:77:22: warning: this statement may fall through [-Wimplicit-fallthrough=]
 # define unlikely(x) __builtin_expect(!!(x), 0)
./include/asm-generic/bug.h:125:2: note: in expansion of macro ‘unlikely’
  unlikely(__ret_warn_on); \
drivers/net/ethernet/sfc/mcdi_port.c:344:3: note: in expansion of macro ‘WARN_ON’
  WARN_ON(1);
drivers/net/ethernet/sfc/mcdi_port.c:345:2: note: here
 case MC_CMD_FCNTL_OFF:
Warning level 3 was used: -Wimplicit-fallthrough=3
This patch is part of the ongoing efforts to enable -Wimplicit-fallthrough.
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Acked-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
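A sketch of the kind of annotation added (at the time, a comment that the compiler recognizes at warning level 3; the surrounding cases are illustrative, not copied from mcdi_port.c):

switch (fcntl) {
default:
	WARN_ON(1);
	/* fall through */
case MC_CMD_FCNTL_OFF:
	*fc = 0;
	break;
case MC_CMD_FCNTL_RESPOND:
	*fc = EFX_FC_RX;
	break;
}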
-
Submitted by Nikita Danilov
Some device IDs were never released and do not exist. Clean these up.
Signed-off-by: Nikita Danilov <ndanilov@aquantia.com>
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Dmitry Bogdanov
DMA counters are 64-bit, and we can fetch them to reduce counter overflow, especially on byte counters.
Tested-by: Nikita Danilov <ndanilov@aquantia.com>
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: Dmitry Bogdanov <dmitry.bogdanov@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Dmitry Bogdanov
aq_nic_update_ndev_stats pushes statistics to ndev->stats from the system interface. This is not always good, because it counts packets/bytes before any of the RX filters (including the MAC filter). It's better to report the packet/byte statistics from the DMA counters, which give the actual values of data transferred over PCI. System-level stats are still available via ethtool.
Signed-off-by: Nikita Danilov <ndanilov@aquantia.com>
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: Dmitry Bogdanov <dmitry.bogdanov@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Dmitry Bogdanov
This improves ethtool -S usage: stats are now up to date on each request. Before this, stats were only updated at the service timer period.
Tested-by: Nikita Danilov <ndanilov@aquantia.com>
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: Dmitry Bogdanov <dmitry.bogdanov@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Igor Russkikh
The service timer callback fetches statistics from FW, and that may cause a long delay in error cases. We also now need to use the FW mutex to prevent concurrent access to FW, so extract that logic from the timer callback into a job on a separate work queue.
Signed-off-by: Nikita Danilov <ndanilov@aquantia.com>
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Nikita Danilov
Some FW operations could be invoked simultaneously, e.g. from ethtool context and from the service activity work. Here we introduce an FW mutex to secure and serialize access to the FW logic.
Signed-off-by: Nikita Danilov <ndanilov@aquantia.com>
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
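A generic sketch of the serialization pattern (the mutex placement and the wrapped firmware call are assumptions, not the driver's actual code):

struct aq_hw_s {
	/* ... existing members ... */
	struct mutex fwreq_mutex;	/* serializes firmware requests */
};

static int aq_fw_set_state_serialized(struct aq_hw_s *self, u32 state)
{
	int err;

	mutex_lock(&self->fwreq_mutex);
	err = hw_atl_utils_fw_set_state(self, state);	/* assumed FW call */
	mutex_unlock(&self->fwreq_mutex);

	return err;
}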
-
Submitted by Igor Russkikh
Fix a typo in the MSI code. Not much impact, though.
Signed-off-by: Nikita Danilov <ndanilov@aquantia.com>
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Igor Russkikh
Improve the code for better readability.
Signed-off-by: Nikita Danilov <ndanilov@aquantia.com>
Signed-off-by: Igor Russkikh <igor.russkikh@aquantia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-