- 20 Aug, 2021 28 commits
-
-
By Vladimir Oltean

We need to reject some more configurations in future patches, so convert the existing one to netlink extack.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
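A minimal sketch of what such a conversion looks like in driver code; the condition and message are invented for illustration, only NL_SET_ERR_MSG_MOD is the real kernel macro:

    /* reject the unsupported configuration and tell userspace why */
    if (!supported_config) {
        NL_SET_ERR_MSG_MOD(extack, "Configuration not supported");
        return -EOPNOTSUPP;
    }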
-
By Vladimir Oltean

The existing ocelot device trees, like ocelot_pcb123.dts for example, have SERDES ports (ports 4 and higher) that do not have status = "disabled"; but on the other hand do not have a phy-handle or a fixed-link either. So from the perspective of phylink, they have broken DT bindings.

Since the blamed commit, probing for the entire switch will fail when such a device tree binding is encountered on a port. There used to be this piece of code which skipped ports without a phy-handle:

    phy_node = of_parse_phandle(portnp, "phy-handle", 0);
    if (!phy_node)
        continue;

but now it is gone. Anyway, fixed-link setups are a thing which should work out of the box with phylink, so it would not be in the best interest of the driver to add that check back.

Instead, let's look at what other drivers do. Since commit 86f8b1c0 ("net: dsa: Do not make user port errors fatal"), DSA continues after a switch port fails to register, and works only with the ports that succeeded.

We can achieve the same behavior in ocelot by unregistering the devlink port for ports where ocelot_port_phylink_create() failed (called via ocelot_probe_port), and clearing the bit in devlink_ports_registered for that port. This will make the next iteration reconsider the port that failed to probe as an unused port, and re-register a devlink port of type UNUSED for it. No other cleanup should need to be performed, since ocelot_probe_port() should be self-contained when it fails.

Fixes: e6e12df6 ("net: mscc: ocelot: convert to phylink")
Reported-and-tested-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
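A sketch of the resulting probe loop, assuming the devlink_ports_registered bitmask named in the text:

    err = ocelot_probe_port(ocelot, port, target, phy_node);
    if (err) {
        /* let the next iteration re-register this port as UNUSED
         * instead of failing the whole switch probe */
        devlink_port_unregister(&ocelot->devlink_ports[port]);
        devlink_ports_registered &= ~BIT(port);
        continue;
    }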
-
By Horatiu Vultur

There are cases where we would like to continue probing the switch even if one port has failed to probe. When that happens, we need to unregister a devlink_port of type DEVLINK_PORT_FLAVOUR_PHYSICAL and re-register it with type DEVLINK_PORT_FLAVOUR_UNUSED.

This is fine, except that when devlink_port_attrs_set is called on a structure on which devlink_port_register has previously been called, there is a WARN_ON in devlink_port_attrs_set that devlink_port->devlink must be NULL. So don't assume that the memory behind dlp is clean when calling ocelot_port_devlink_init; just zero-initialize it.

Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
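A minimal sketch of the fix, assuming dlp points into an ocelot-owned array of devlink ports:

    struct devlink_port *dlp = &ocelot->devlink_ports[port];

    /* dlp may still carry state from a previous devlink_port_register();
     * devlink_port_attrs_set() WARNs unless dlp->devlink is NULL */
    memset(dlp, 0, sizeof(*dlp));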
-
By Vladimir Oltean

Currently, when probing returns an error, the netdev is freed but phylink_disconnect is not called. Create a common function between the unbind path and the error path, call it the opposite of dpaa2_switch_probe_port, namely dpaa2_switch_remove_port, and call it from both the unbind and the error path.

Fixes: 84cba729 ("dpaa2-switch: integrate the MAC endpoint support")
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
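From that description, the shared helper would look roughly like this; the body is assumed from the text (disconnect the MAC/phylink, then free the netdev), not copied from the patch:

    static void dpaa2_switch_remove_port(struct ethsw_core *ethsw,
                                         u16 port_idx)
    {
        struct ethsw_port_priv *port_priv = ethsw->ports[port_idx];

        /* the opposite of dpaa2_switch_probe_port() */
        dpaa2_switch_port_disconnect_mac(port_priv);
        free_netdev(port_priv->netdev);
        ethsw->ports[port_idx] = NULL;
    }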
-
By Vladimir Oltean

There is an ASSERT_RTNL in phylink_disconnect_phy which triggers whenever dpaa2_switch_port_disconnect_mac is called. To follow the pattern established by dpaa2_eth_disconnect_mac, take the rtnl_mutex every time we call dpaa2_switch_port_disconnect_mac.

Fixes: 84cba729 ("dpaa2-switch: integrate the MAC endpoint support")
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
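The call sites then become, in sketch form:

    rtnl_lock();
    dpaa2_switch_port_disconnect_mac(port_priv);
    rtnl_unlock();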
-
By Gerhard Engleder

Configure the speed if loopback is used, since read_status is not called for loopback.

Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Gerhard Engleder

struct phy_device contains a pointer to the PHY driver, and nearly everywhere this pointer is used to access the PHY driver. Only mdio_bus_phy_may_suspend() is still using to_phy_driver() instead of the PHY driver pointer. Make PHY driver access uniform by eliminating the to_phy_driver() use in mdio_bus_phy_may_suspend(). Only phy_bus_match() and phy_probe() still use to_phy_driver(), because the PHY driver pointer is not available there.

Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
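In sketch form, the change inside mdio_bus_phy_may_suspend() amounts to the following; the exact expression passed to to_phy_driver() is assumed:

    /* before: derive the driver from the device model */
    phydrv = to_phy_driver(phydev->mdio.dev.driver);

    /* after: use the pointer struct phy_device already carries */
    phydrv = phydev->drv;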
-
By Gerhard Engleder

phy_read_status and various other PHY functions support PHY-specific overriding of driver functions by using a PHY-specific pointer to the PHY driver. Add support for a PHY-specific override to phy_loopback too.

Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
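A sketch of the dispatch pattern, assuming the phy_driver callback is named set_loopback:

    if (phydev->drv->set_loopback)
        ret = phydev->drv->set_loopback(phydev, enable);
    else
        ret = genphy_loopback(phydev, enable);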
-
By Steen Hegelund

This adds frame DMA functionality to the Sparx5 platform.

Ethernet frames can be extracted or injected autonomously to or from the device's DDR3/DDR3L memory and/or PCIe memory space. Linked list data structures in memory are used for injecting or extracting Ethernet frames. The FDMA generates interrupts when frame extraction or injection is done and when the linked lists need updating.

The FDMA implements two extraction channels, one per switch core port towards the VCore CPU system, and a total of six injection channels. Extraction channels are mapped one-to-one to the CPU ports, while injection channels can be individually assigned to any CPU port.

- FDMA channels 0 through 5 correspond to the CPU port 0 injection direction when FDMA_CH_CFG[channel].CH_INJ_PORT is set to 0.
- FDMA channels 0 through 5 correspond to the CPU port 1 injection direction when FDMA_CH_CFG[channel].CH_INJ_PORT is set to 1.
- FDMA channel 6 corresponds to the CPU port 0 extraction direction.
- FDMA channel 7 corresponds to the CPU port 1 extraction direction.

The FDMA implements a strict priority scheme among channels. Extraction channels are prioritized over injection channels, and secondarily channels with higher channel numbers are prioritized over channels with lower numbers. On the other hand, ports are served on an equal-bandwidth principle in both the injection and extraction directions. The equal-bandwidth principle will not force an equal bandwidth; instead, it ensures that the ports perform at their best considering the operating conditions.

When more than one injection channel is enabled for injection on the same CPU port, priority determines which channel can inject data. Ownership is re-arbitrated on frame boundaries.

The FDMA processes linked lists of DMA Control Block Structures (DCBs). The DCBs have the same basic structure for both injection and extraction. A DCB must be placed on a 64-bit word-aligned address in memory. Each DCB has a per-channel configurable amount of associated data blocks in memory, where the frame data is stored. The data blocks used by extraction channels must be placed on 64-bit word-aligned addresses in memory, and their length must be a multiple of 128 bytes. A DCB carries the pointer to the next DCB of the linked list, the INFO word which holds information for the DCB, and a pair of status word and memory pointer for every data block that it is associated with.

Signed-off-by: Steen Hegelund <steen.hegelund@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
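From that description, a DCB could be sketched roughly like this; field and constant names are illustrative, not the driver's:

    struct sparx5_fdma_dcb {
        u64 nextptr;                /* next DCB in the linked list */
        u64 info;                   /* INFO word for this DCB */
        struct {
            u64 dataptr;            /* 64-bit aligned data block pointer */
            u64 status;             /* status word for this data block */
        } db[FDMA_DCB_MAX_DB];      /* per-channel configurable count */
    };                              /* must sit on a 64-bit aligned address */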
-
By Dmytro Linkin
Add tracepoints to log QoS enabling/disabling/configuration for vports and rate groups.

Signed-off-by: Dmytro Linkin <dlinkin@nvidia.com>
Reviewed-by: Huy Nguyen <huyn@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Dmytro Linkin

Implement an eswitch API that allows updating rate groups. If the group pointer is NULL, the vport is moved to the internal unlimited group zero. Implement the devlink_ops->rate_parent_node_set() callback in terms of the new eswitch group update API. Enable QoS for all of a group's elements if the group has an allocated BW share.

Co-developed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Dmytro Linkin <dlinkin@nvidia.com>
Reviewed-by: Huy Nguyen <huyn@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Dmytro Linkin

Provide an eswitch API to allow controlling group rate limits. Use it to implement devlink_ops->mlx5_devlink_rate_node_tx_{share|max}_set(). The share rate creates a relative bandwidth share on the group level, while within the group the user can set a shared rate on the member vports of that group, and this rate will be relative to the group's share rate. The group with the highest shared rate will get a BW share of 100, and the rest of the groups will get a value that reflects the ratio between their share rate and the maximum share rate.

Example: create four rate groups with tx_share limits:

    $ devlink port function rate add \
        pci/0000:06:00.0/group_1 tx_share 30gbit
    $ devlink port function rate add \
        pci/0000:06:00.0/group_2 tx_share 20gbit
    $ devlink port function rate add \
        pci/0000:06:00.0/group_3 tx_share 20gbit
    $ devlink port function rate add \
        pci/0000:06:00.0/group_4 tx_share 10gbit

Assuming the link speed is 50 Gbit/sec, the ratio divider will be 50 / (30+20+20+10) = 0.625. Normalized rate values for the groups:

    <group_1> 30 * 0.625 = 18.75 Gbit/sec
    <group_2> 20 * 0.625 = 12.5 Gbit/sec
    <group_3> 20 * 0.625 = 12.5 Gbit/sec
    <group_4> 10 * 0.625 = 6.25 Gbit/sec

A rate group with an unlimited tx_share rate will receive the minimum BW value (1 Mbit/sec) if any group with a tx_share rate limit is present. This allows not dropping all of its packets in case of heavy traffic.

Co-developed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Dmytro Linkin <dlinkin@nvidia.com>
Reviewed-by: Huy Nguyen <huyn@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Dmytro Linkin

Extend the eswitch API with rate limiting groups:

- Define a new struct mlx5_esw_rate_group that is used to hold all internal group data.
- Implement functions that allow creation, destruction and cleanup of groups.
- Assign all vports to the internal unlimited zero group by default.

This commit lays the groundwork for group rate limiting by implementing devlink_ops->rate_node_{new|del}() callbacks to support creating and deleting groups through devlink rate node objects. APIs that allow setting rates and adding/removing members are implemented in the following patches.

Co-developed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Dmytro Linkin <dlinkin@nvidia.com>
Reviewed-by: Huy Nguyen <huyn@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Dmytro Linkin

Register a devlink rate leaf object for every eswitch vport. Implement devlink ops that enable setting shared and max tx rates through the devlink API. Extract common eswitch code from the existing tx rate set function that is accessed through the NDO, to be reused for devlink. Values configured with the NDO API are not visible to the devlink API, so the two shouldn't be used simultaneously.

When normalizing the BW share value, dividing the desired minimum rate by the common divider results in losing information, since the quotient is rounded down. This has a significant effect on configurations of low rates, where the round-down eliminates a large percentage of the total rate. To improve the formula, round up the division result to make sure that the BW share is at least the value it was supposed to be, and doesn't lose a significant amount of the expected value.

Co-developed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Dmytro Linkin <dlinkin@nvidia.com>
Reviewed-by: Huy Nguyen <huyn@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
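In kernel C the rounding change is the usual one-liner; a sketch, with the variable names assumed:

    /* round up rather than truncating, so low rates keep
     * at least the BW share they were supposed to get */
    bw_share = DIV_ROUND_UP(min_rate, divider);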
-
By Dmytro Linkin

Move the eswitch QoS related code into a dedicated file. Provide an eswitch API to access this code, meaning it is isolated and restricted to be used only by eswitch.c. The exception is the legacy NDO vf set rate, which moved to esw/legacy.c.

Signed-off-by: Dmytro Linkin <dlinkin@nvidia.com>
Reviewed-by: Huy Nguyen <huyn@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Parav Pandit <parav@nvidia.com>
Reviewed-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Chris Mi

Currently the sample offload actions send the encapsulated packet to software. This commit decapsulates the packet before performing the sampling and sets the tunnel properties on the skb metadata fields, to make the behavior consistent with OVS sFlow. Since we decapsulate first, we can't use the same match as before in the default table, so instantiate a post action instance to continue processing the action list. If HW can preserve reg_c, also use the post action instance.

Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Chris Mi
Currently the sample offload actions send the encapsulated packet to software. sFlow expects tunneled packets to be decapsulated while having the tunnel properties on the skb metadata fields. Reuse the functions used by connection tracking to map the outer header properties to a unique id. The next patch will use that id to restore the tunnel information of decapsulated packets onto the skb.

Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Chris Mi
CONFIG_NET_TC_SKB_EXT controls the SKB extension support for restoring chain ids. SKB extension is not required for tunnel restoration. Remove the CONFIG_NET_TC_SKB_EXT dependency as a pre-step for using the tunnel restore methods for sample offload use cases.

Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Chris Mi

Move the post action table management to a common library providing an add/del/get API. Refactor the ct action offload to use the common API.

Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Chris Mi

Some tc actions are modeled in hardware using multiple tables, causing a tc action list split. For example, the CT action is modeled by jumping to a ct table which is controlled by nf flowtable. sFlow jumps in hardware to a sample table, which continues to a "default table" where it should continue processing the action list.

Multi-table actions are modeled in hardware using a unique fte_id. The fte_id is set before jumping to a table. Split actions continue to a post-action table where the matched fte_id value continues the execution of the tc action list.

Currently the post-action design is implemented only by the ct action. Introduce post action infrastructure as a pre-step for reusing it with the sFlow offload feature. Init and destroy the common post action table. Refactor the ct offload to use the common post table infrastructure in the next patch.

Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Chris Mi
IDR is deprecated. Use xarray instead.

Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
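In sketch form, such a conversion swaps idr_alloc() for xa_alloc(); the surrounding names here are illustrative:

    u32 id;
    int err;

    /* before: IDR returns the new id or a negative errno;
     * the end argument is exclusive */
    id = idr_alloc(&idr, entry, 1, MAX_ID, GFP_KERNEL);

    /* after: xarray returns 0/-errno and writes the id through a
     * pointer; XA_LIMIT bounds are inclusive */
    err = xa_alloc(&xa, &id, entry, XA_LIMIT(1, MAX_ID - 1), GFP_KERNEL);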
-
By Chris Mi

Currently it is in the eswitch attribute. Move it to the flow attribute to reflect the change in the previous patch.

Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Chris Mi

The sample module belongs to en/tc instead of esw. Move it and rename it accordingly.

Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
By Saeed Mahameed

mlx5/esw/sample.c doesn't really need the mlx5e_priv object; we can remove this redundant dependency by passing the eswitch object directly to the sample object constructor.

Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
-
By Colin Ian King

A recent change added error-checking messages but failed to remove one of the previous error checks. There are now two checks on the variable err, so the second one is redundant dead code and can be removed.

Addresses-Coverity: ("Logically dead code")
Fixes: a83bdada ("octeontx2-af: Add debug messages for failures")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Link: https://lore.kernel.org/r/20210818130927.33895-1-colin.king@canonical.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
By Vladimir Oltean

Currently dpaa2_switch_takedown has a funny name and does not do the opposite of dpaa2_switch_init, which makes probing fail when we need to handle an -EPROBE_DEFER.

A sketch of what dpaa2_switch_init does:

    dpsw_open
    dpaa2_switch_detect_features
    dpsw_reset
    for (i = 0; i < ethsw->sw_attr.num_ifs; i++) {
        dpsw_if_disable
        dpsw_if_set_stp
        dpsw_vlan_remove_if_untagged
        dpsw_if_set_tci
        dpsw_vlan_remove_if
    }
    dpsw_vlan_remove
    alloc_ordered_workqueue
    dpsw_fdb_remove
    dpaa2_switch_ctrl_if_setup

When dpaa2_switch_takedown is called from the error path of dpaa2_switch_probe(), the control interface, enabled by dpaa2_switch_ctrl_if_setup from dpaa2_switch_init, remains enabled, because dpaa2_switch_takedown does not call dpaa2_switch_ctrl_if_teardown.

Since dpaa2_switch_probe might fail due to an EPROBE_DEFER of a PHY, this means that a second probe of the driver will happen with the control interface directly enabled. This will trigger a second error:

    [   93.273528] fsl_dpaa2_switch dpsw.0: dpsw_ctrl_if_set_pools() failed
    [   93.281966] fsl_dpaa2_switch dpsw.0: fsl_mc_driver_probe failed: -13
    [   93.288323] fsl_dpaa2_switch: probe of dpsw.0 failed with error -13

which, if we investigate the /dev/dpaa2_mc_console log, we find out is caused by:

    [E, ctrl_if_set_pools:2211, DPMNG] ctrl_if must be disabled

So make dpaa2_switch_takedown do the opposite of dpaa2_switch_init (within reasonable limits; no reason to change the STP state, re-add VLANs, etc.), and rename it to something more conventional, like dpaa2_switch_teardown.

Fixes: 613c0a58 ("staging: dpaa2-switch: enable the control interface")
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Ioana Ciornei <ioana.ciornei@nxp.com>
Link: https://lore.kernel.org/r/20210819141755.1931423-1-vladimir.oltean@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
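Mirroring the init sketch above, the renamed teardown would then undo those steps, roughly as follows; a sketch assuming the helper names from the text:

    static void dpaa2_switch_teardown(struct fsl_mc_device *sw_dev)
    {
        struct ethsw_core *ethsw = dev_get_drvdata(&sw_dev->dev);

        /* the step the old dpaa2_switch_takedown() was missing */
        dpaa2_switch_ctrl_if_teardown(ethsw);
        destroy_workqueue(ethsw->workqueue);
        dpsw_close(ethsw->mc_io, 0, ethsw->dpsw_handle);
    }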
-
By Sylwester Dziedziuch

Make changes to the MAC address dependent on the response of the PF. Disallow changes to the HW MAC address and MAC filter from an untrusted VF; thanks to that, ping is not lost if the VF tries to change its MAC. Add a new field in iavf_mac_filter to indicate whether there was a response from the PF for the given filter; based on this field, keep or discard the filter. Previously, if an untrusted VF tried to change its address, the address was not changed, but the filter still was, which is why ping couldn't go through.

Fixes: c5c922b3 ("iavf: fix MAC address setting for VFs when filter is rejected")
Signed-off-by: Przemyslaw Patynowski <przemyslawx.patynowski@intel.com>
Signed-off-by: Sylwester Dziedziuch <sylwesterx.dziedziuch@intel.com>
Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com>
Tested-by: Gurucharan G <Gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
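The new field could be sketched like this; the flag name is assumed, since the text doesn't give it:

    struct iavf_mac_filter {
        struct list_head list;
        u8 macaddr[ETH_ALEN];
        bool is_new_mac;  /* assumed name: no PF response yet for this
                           * filter; decides whether to keep or discard it */
        bool add;
        bool remove;
    };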
-
By Arkadiusz Kubalewski

Without this patch, ATR does not work: receive/transmit uses queue selection based on the SW DCB hashing method. If traffic classes are not configured for the PF, then use the netdev_pick_tx function for selecting the queue for packet transmission. Instead of calling i40e_swdcb_skb_tx_hash, call netdev_pick_tx, which ensures that the packet is transmitted/received on the CPU that is running the application.

Reproduction steps:
1. Load the i40e driver
2. Map each MSI interrupt of the i40e port to a separate CPU
3. Disable ntuple, enable ATR, i.e.:
   ethtool -K $interface ntuple off
   ethtool --set-priv-flags $interface flow-director-atr
4. Run an application that generates traffic and is bound to a single CPU, i.e.:
   taskset -c 9 netperf -H 1.1.1.1 -t TCP_RR -l 10
5. Observe the behavior: the application's traffic should be restricted to the CPU provided via taskset.

Fixes: 89ec1f08 ("i40e: Fix queue-to-TC mapping on Tx")
Signed-off-by: Przemyslaw Patynowski <przemyslawx.patynowski@intel.com>
Signed-off-by: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com>
Tested-by: Dave Switzer <david.switzer@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 19 Aug, 2021 12 commits
-
-
By Max Chou

Since commit 9e45524a, wakeup has always been disabled for Realtek devices. However, Realtek devices do have the capability for USB wakeup. This commit removes the WAKEUP_DISABLE feature for Realtek devices; if users want to switch wakeup, they should access "/sys/bus/usb/.../power/wakeup". It also adds the WAKEUP_AUTOSUSPEND feature for Realtek devices, because do_remote_wakeup should be set on autosuspend.

Signed-off-by: Max Chou <max.chou@realtek.com>
Tested-by: Hilda Wu <hildawu@realtek.com>
Reviewed-by: Archie Pusaka <apusaka@chromium.org>
Reviewed-by: Abhishek Pandit-Subedi <abhishekpandit@chromium.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
By Dario Binacchi

As reported by a comment in c_can_start_xmit(), this was not a FIFO; the C/D_CAN controller sends out the buffers prioritized so that the lowest buffer number wins.

What did c_can_start_xmit() do if head was less than tail in the tx ring? It waited until all the frames queued in the FIFO were actually transmitted by the controller before accepting a new CAN frame to transmit, even if the FIFO was not full, to ensure that the messages were transmitted in the order in which they were loaded.

By storing the frames in the FIFO without requiring their immediate transmission, we will be able to use the full size of the FIFO even in cases such as the one described above. The transmission interrupt will trigger their transmission only when all the messages previously loaded but stored in lower-priority positions of the buffers have been transmitted.

Link: https://lore.kernel.org/r/20210807130800.5246-5-dariobin@libero.it
Suggested-by: Gianluca Falavigna <gianluca.falavigna@inwind.it>
Signed-off-by: Dario Binacchi <dariobin@libero.it>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
By Dario Binacchi

The algorithm is already used successfully by other CAN drivers (e.g. mcp251xfd). Its implementation was kindly suggested to me by Marc Kleine-Budde following a patch I had previously submitted; you can find all the details at https://lore.kernel.org/patchwork/patch/1422929/. The idea is that after this patch, it will be easier to patch the driver to use the message object memory as a true FIFO.

Link: https://lore.kernel.org/r/20210807130800.5246-4-dariobin@libero.it
Suggested-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Dario Binacchi <dariobin@libero.it>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
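The ring algorithm in question keeps free-running head and tail indices and derives occupancy from their difference; a sketch, with names borrowed from the mcp251xfd style rather than from this patch:

    struct c_can_tx_ring {
        unsigned int head;     /* free-running, masked only on access */
        unsigned int tail;
        unsigned int obj_num;  /* number of message objects in the ring */
    };

    static unsigned int c_can_get_tx_free(const struct c_can_tx_ring *ring)
    {
        return ring->obj_num - (ring->head - ring->tail);
    }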
-
By Dario Binacchi

c_can_poll() handles RX/TX events unconditionally. It may therefore happen that c_can_do_tx() is called unnecessarily because the interrupt was triggered by the reception of a frame. In these cases, avoid executing unnecessary statements and exit immediately.

Link: https://lore.kernel.org/r/20210807130800.5246-3-dariobin@libero.it
Signed-off-by: Dario Binacchi <dariobin@libero.it>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
By Dario Binacchi

It references the clock, but the clock is never used. So let's remove it.

Link: https://lore.kernel.org/r/20210807130800.5246-2-dariobin@libero.it
Signed-off-by: Dario Binacchi <dariobin@libero.it>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
By Marc Kleine-Budde

The C_CAN/D_CAN cores implement 2 interfaces to manage the message objects. To avoid concurrency and the need for locking, one interface is used in the TX path (IF_TX), while the other one, named IF_RX, is used from NAPI context only. As this interface is not only used to manage RX but also TX message objects, this patch renames IF_RX to IF_NAPI.

Link: https://lore.kernel.org/r/20210809080608.171545-1-mkl@pengutronix.de
Cc: Dario Binacchi <dariobin@libero.it>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
By Marc Kleine-Budde
This patch fixes a typo in the comment in c_can_do_tx().

Fixes: eddf6711 ("can: c_can: add a comment about IF_RX interface's use")
Link: https://lore.kernel.org/r/20210806105127.103302-1-mkl@pengutronix.de
Cc: Dario Binacchi <dariobin@libero.it>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
By Matt Kline
Give FIFO writes the same treatment as reads to avoid fixed costs of individual transfers on a slow bus (e.g., tcan4x5x).

Link: https://lore.kernel.org/r/20210817050853.14875-4-matt@bitbashing.io
Signed-off-by: Matt Kline <matt@bitbashing.io>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
By Matt Kline
On peripherals communicating over a relatively slow SPI line (e.g. tcan4x5x), individual transfers have high fixed costs. This causes the driver to spend most of its time waiting between transfers and severely limits throughput. Reduce these overheads by reading more than one word at a time. Writing could get a similar treatment in follow-on commits.

Link: https://lore.kernel.org/r/20210817050853.14875-3-matt@bitbashing.io
Signed-off-by: Matt Kline <matt@bitbashing.io>
[mkl: remove __packed from struct id_and_dlc]
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
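For a regmap-backed peripheral such as tcan4x5x, batching boils down to one bulk call instead of a loop of single reads; a sketch:

    /* one SPI transaction covering val_count consecutive registers,
     * instead of val_count transactions of one register each */
    err = regmap_bulk_read(map, addr, buf, val_count);
    if (err)
        return err;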
-
By Matt Kline
If FIFO reads or writes fail due to the underlying regmap (e.g., SPI) I/O, propagate that up to the m_can driver, log an error, and disable interrupts, similar to the mcp251xfd driver. While reworking the FIFO functions to add this error handling, add support for bulk reads and writes of multiple registers.

Link: https://lore.kernel.org/r/20210817050853.14875-2-matt@bitbashing.io
Signed-off-by: Matt Kline <matt@bitbashing.io>
[mkl: re-wrap long lines, remove WARN_ON, convert to netdev block comments]
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
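After this change a FIFO access follows the usual kernel error pattern; a sketch, with the helper name and signature approximated:

    err = m_can_fifo_read(cdev, fgi, M_CAN_FIFO_ID, &id, 1);
    if (err)
        goto out_fail;  /* caller logs the failure and disables interrupts */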
-
By Marc Kleine-Budde
This patch fixes the commenting style in the m_can driver.

Fixes: 1be37d3b ("can: m_can: fix periph RX path: use rx-offload to ensure skbs are sent from softirq context")
Fixes: df06fd67 ("can: m_can: m_can_chip_config(): enable and configure internal timestamps")
Link: https://lore.kernel.org/r/20210819111703.599686-2-mkl@pengutronix.de
Cc: Chandrasekar Ramakrishnan <rcsekar@samsung.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
By Marc Kleine-Budde
This patch removes a stray empty line in the cdev_to_priv() function.

Fixes: ac33ffd3 ("can: m_can: let m_can_class_allocate_dev() allocate driver specific private data")
Link: https://lore.kernel.org/r/20210819111703.599686-1-mkl@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-