- 12 June 2021, 1 commit
-
-
Submitted by Huazhong Tan
Add PTP support for the HNS3 ethernet driver. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 11 June 2021, 31 commits
-
-
Submitted by Alex Elder
Finally, the code handles the IPA memory region array in the configuration data without assuming it is indexed by region ID. Get rid of the array index designators where these arrays are initialized. As a result, there is no more need to define an explicitly undefined memory region ID, so get rid of that. Change ipa_mem_find() so it no longer assumes the ipa->mem[] array is indexed by memory region ID. Instead, have it search the array for the entry having the requested memory ID, and return the address of the descriptor if found; otherwise return NULL. Stop allowing memory regions to be defined with zero size and zero canary value. Check for this condition in ipa_mem_valid_one(). As a result, it is not necessary to check for this case in ipa_mem_config(). Finally, there is no need for IPA_MEM_UNDEFINED to be defined any more, so get rid of it. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
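The lookup described above can be sketched roughly as follows; this is a hedged illustration with simplified stand-in types, not the driver's actual struct ipa / struct ipa_mem definitions:

#include <linux/types.h>

/* Simplified stand-ins for the driver's real structures. */
struct ipa_mem {
    u32 id;             /* memory region ID */
    u32 offset;
    u16 size;
    u16 canary_count;
};

struct ipa {
    u32 mem_count;      /* number of entries in mem[] */
    const struct ipa_mem *mem;
};

/* Search the array for the requested region ID instead of indexing by it. */
static const struct ipa_mem *ipa_mem_find(struct ipa *ipa, u32 mem_id)
{
    u32 i;

    for (i = 0; i < ipa->mem_count; i++) {
        const struct ipa_mem *mem = &ipa->mem[i];

        if (mem->id == mem_id)
            return mem;
    }

    return NULL;        /* region not defined for this platform */
}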
-
Submitted by Alex Elder
Introduce a new function that abstracts finding information about a region in IPA-local memory, given its memory region ID. For now it simply uses the region ID as an index into the IPA memory array. If the region is not defined, ipa_mem_find() returns a null pointer. Update all code that accesses the ipa->mem[] array directly to use ipa_mem_find() instead. The return value must be checked for null when optional memory regions are sought. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Alex Elder
Stop passing most of the Boolean flags to ipa_table_valid_one(), and just pass a memory region ID to it instead. We still need to indicate whether we're operating on a routing or filter table. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Alex Elder
Pass a memory region ID rather than the address of a memory region descriptor to ipa_table_reset_add() to simplify callers. Similarly, pass memory region IDs to ipa_table_init_add(). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Alex Elder
Pass a memory region ID rather than the address of a memory region descriptor to ipa_mem_zero_region_add() to simplify callers. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
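As a hedged sketch of the new calling convention (reusing the stand-in types from the earlier sketch; the real function builds a zeroing command on a transaction, which is elided here), a caller now passes the region ID and lets the helper resolve the descriptor:

static int example_mem_zero_region_add(struct ipa *ipa, u32 mem_id)
{
    const struct ipa_mem *mem = ipa_mem_find(ipa, mem_id);

    if (!mem || !mem->size)
        return 0;   /* nothing to zero for this region */

    /* ... add a command zeroing [offset, offset + size) to the transaction ... */
    return 0;
}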
-
Submitted by Alex Elder
Pass a memory region ID rather than the address of a memory region descriptor to ipa_filter_reset_table(), to simplify callers. We can eliminate the check for a zero region size in this function because ipa_table_reset_add() checks that before adding anything to the transaction. Note that here and in subsequent commits there is no need to check whether a memory region exists, because we will have already verified that during initialization. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Alex Elder
Do some general cleanup in ipa_cmd_header_valid():
- Delay assigning the mem variable until just before it's used.
- Assign the maximum offset and size values together.
- Improve the comments explaining the single range of memory being made up of a modem portion and an AP portion.
- Record the offset of the combined range in a local variable.
- Do the initial size assignment right after assigning the offset.
Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Alex Elder
Change ipa_mem_valid() to iterate over the entries using a u32 index variable rather than using a memory region ID. Use the ID found inside the memory descriptor rather than the loop index. Change ipa_mem_size_valid() to iterate over the entries, but without assuming the array index is the memory region ID. "Empty" entries will have zero size, and we'll temporarily assume such entries have zero offset as well (they all do, currently). Similarly, don't assume the mem[] array is indexed by ID in ipa_mem_config(). There, "empty" entries will have a zero canary count, so no special assumptions are needed to handle them correctly. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Cristobal Forno
Allow the device to be initialized at a later time if it is not available at boot. The device will be allowed to probe but will be given a "down" state. After completing device probe and registering the net device, the driver will await an interrupt signal from its partner device, indicating that it is ready for boot. The driver will schedule a work event to perform the necessary procedure and begin operation. Co-developed-by: Thomas Falcon <tlfalcon@linux.ibm.com> Signed-off-by: Thomas Falcon <tlfalcon@linux.ibm.com> Signed-off-by: Cristobal Forno <cforno12@linux.ibm.com> Acked-by: Lijun Pan <lijunp213@gmail.com> Reviewed-by: Dany Madden <drt@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
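A hedged sketch of the general deferred-initialization pattern described here (illustrative names, not the ibmvnic code): the partner's interrupt schedules a work item that completes bring-up outside interrupt context.

#include <linux/interrupt.h>
#include <linux/workqueue.h>

struct example_adapter {
    struct work_struct init_work;   /* completes the deferred initialization */
};

static void example_init_work(struct work_struct *work)
{
    /* ... perform the boot procedure and bring the device up ... */
}

static irqreturn_t example_partner_irq(int irq, void *dev_id)
{
    struct example_adapter *adapter = dev_id;

    /* Partner signalled readiness: defer the heavy work to process context. */
    schedule_work(&adapter->init_work);
    return IRQ_HANDLED;
}

/* During probe: INIT_WORK(&adapter->init_work, example_init_work); */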
-
Submitted by Serhiy Boiko
The following features are supported:
- LAG basic operations: create/delete a LAG, add/remove a member to/from a LAG, enable/disable a member in a LAG
- LAG bridge support
- LAG VLAN support
- LAG FDB support
Limitations:
- Only the HASH LAG Tx type is supported.
- The hash parameters are not configurable; they are applied during the LAG creation stage.
- Enslaving a port to a LAG device that already has an upper device is not supported.
Co-developed-by: Andrii Savka <andrii.savka@plvision.eu> Signed-off-by: Andrii Savka <andrii.savka@plvision.eu> Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu> Co-developed-by: Vadym Kochan <vkochan@marvell.com> Signed-off-by: Vadym Kochan <vkochan@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Vadym Kochan
Replace prestera_bridge_port_event(...) with prestera_bridge_port_join(...) and prestera_bridge_port_leave(). This simplifies the code by keeping the netdev-event-specific handling in one place, prestera_main.c. Signed-off-by: Vadym Kochan <vkochan@marvell.com> CC: Vladimir Oltean <olteanv@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Vadym Kochan
Move handling of the PRECHANGEUPPER event from prestera_switchdev to prestera_main, which is responsible for basic netdev event handling and for routing events to the related module. Signed-off-by: Vadym Kochan <vkochan@marvell.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Tan Zhongjun
platform_get_irq() already prints an error message telling that the interrupt is missing, so there is no need to duplicate that message in the driver. Signed-off-by: Tan Zhongjun <tanzhongjun@yulong.com> Signed-off-by: David S. Miller <davem@davemloft.net>
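A hedged sketch of the resulting pattern, with an illustrative helper name:

#include <linux/platform_device.h>

static int example_get_irq(struct platform_device *pdev)
{
    int irq = platform_get_irq(pdev, 0);

    /* platform_get_irq() already printed an error, so don't repeat it. */
    if (irq < 0)
        return irq;

    return irq;
}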
-
Submitted by Wang Hai
Convert list_for_each() to list_for_each_entry() where applicable. This simplifies the code. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Wang Hai <wanghai38@huawei.com> Acked-by: Lijun Pan <lijunp213@gmail.com> Reviewed-by: Dany Madden <drt@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
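A hedged before/after sketch of this conversion, using a made-up structure rather than the ibmvnic lists:

#include <linux/list.h>

struct example_node {
    struct list_head list;
    int value;
};

/* Before: iterate over raw list heads and convert each one manually. */
static int sum_before(struct list_head *head)
{
    struct list_head *pos;
    int sum = 0;

    list_for_each(pos, head) {
        struct example_node *node = list_entry(pos, struct example_node, list);

        sum += node->value;
    }
    return sum;
}

/* After: let the iterator hand back the containing structure directly. */
static int sum_after(struct list_head *head)
{
    struct example_node *node;
    int sum = 0;

    list_for_each_entry(node, head, list)
        sum += node->value;
    return sum;
}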
-
Submitted by Yang Yingliang
Use devm_platform_get_and_ioremap_resource() to simplify the code. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
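A hedged sketch of the simplification; the function and variable names are illustrative, not taken from the patched driver:

#include <linux/platform_device.h>

static void __iomem *example_map_regs(struct platform_device *pdev,
                                      struct resource **res)
{
    /* Before:
     *   *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
     *   base = devm_ioremap_resource(&pdev->dev, *res);
     * After: one helper fetches the resource and ioremaps it.
     */
    return devm_platform_get_and_ioremap_resource(pdev, 0, res);
}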
-
Submitted by Yang Yingliang
Use devm_platform_get_and_ioremap_resource() to simplify the code and avoid a null-ptr-deref, since 'res' is checked inside the helper. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Wong Vee Khee
The commit 5a558611 ("net: stmmac: support FPE link partner hand-shaking procedure") introduced the following coverity warning: "Parse warning (PW.MIXED_ENUM_TYPE)" "1. mixed_enum_type: enumerated type mixed with another type". This is because both "lo_state" and "lp_state", whose data type is enum stmmac_fpe_state, were being assigned "FPE_EVENT_UNKNOWN", which is a macro defined as 0. Fix this by assigning both variables the correct enum value. Fixes: 5a558611 ("net: stmmac: support FPE link partner hand-shaking procedure") Signed-off-by: Wong Vee Khee <vee.khee.wong@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
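Roughly, the issue and fix look like the following hedged illustration, using a stand-in enum rather than the real stmmac definitions:

enum example_fpe_state {
    EXAMPLE_FPE_STATE_OFF = 0,
    EXAMPLE_FPE_STATE_CAPABLE,
    EXAMPLE_FPE_STATE_ON,
};

#define EXAMPLE_EVENT_UNKNOWN 0

static void example_reset(enum example_fpe_state *lo_state,
                          enum example_fpe_state *lp_state)
{
    /* Mixing types triggers the coverity warning:
     *   *lo_state = EXAMPLE_EVENT_UNKNOWN;
     * Assign a value of the enum type instead:
     */
    *lo_state = EXAMPLE_FPE_STATE_OFF;
    *lp_state = EXAMPLE_FPE_STATE_OFF;
}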
-
Submitted by Yang Yingliang
A null-ptr-deref will occur if platform_get_resource() returns NULL, so we need to check its return value. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
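A hedged sketch of the missing check; the mapping call and names are illustrative:

#include <linux/io.h>
#include <linux/platform_device.h>

static void __iomem *example_map(struct platform_device *pdev)
{
    struct resource *res;

    res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
    if (!res)
        return NULL;    /* without this check, res->start below would oops */

    return devm_ioremap(&pdev->dev, res->start, resource_size(res));
}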
-
Submitted by Yang Yingliang
Use devm_platform_get_and_ioremap_resource() to simplify the code. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yang Yingliang
Use devm_platform_get_and_ioremap_resource() to simplify the code. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Peng Li
Braces {} should be used on all arms of this statement. Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Peng Li
Networking block comments don't use an empty /* line; use /* Comment... Block comments use * on subsequent lines, and a trailing */ on a separate line. This patch fixes these comment style issues. Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
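As a hedged illustration, the preferred networking block comment style looks like this:

/* Start the text on the opening line like this,
 * continue each following line with an aligned '*',
 * and put the closing marker on its own line.
 */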
-
Submitted by Peng Li
According to checkpatch.pl, a space is prohibited after an open parenthesis '('. Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Peng Li
Add the space required before the open parenthesis '('. Add the space required after the close brace '}'. Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Peng Li
Assignments should not be used inside an if condition. Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
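A hedged before/after illustration with a stand-in helper:

#include <linux/bits.h>
#include <linux/types.h>

static int example_read_status(void)
{
    return 1;   /* stand-in for a real status read */
}

static bool example_link_up(void)
{
    int status;

    /* Discouraged: if ((status = example_read_status()) & BIT(0)) ... */
    status = example_read_status();
    if (status & BIT(0))
        return true;

    return false;
}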
-
Submitted by Peng Li
Fix the checkpatch errors: "foo* bar" should be "foo *bar", and "(foo*)" should be "(foo *)". Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Peng Li
This patch fixes the checkpatch error about a missing blank line after declarations. Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Peng Li
This patch removes some redundant blank lines. Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yang Yingliang
Use devm_platform_get_and_ioremap_resource() to simplify the code and avoid a null-ptr-deref, since 'res' is checked inside the helper. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yang Yingliang
Use devm_platform_get_and_ioremap_resource() to simplify the code. Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Reviewed-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Wong Vee Khee
PHY devices such as the Marvell Alaska 88E2110 do not return a valid PHY ID when probed using Clause-22. The current implementation treats a PHY ID of zero as a non-error, valid PHY ID, which causes the PHY device to fail to bind to the Marvell driver. For such devices, do an additional probe in the Clause-45 space; if a valid PHY ID is returned, proceed to attach the PHY device to the driver matching that PHY ID. Signed-off-by: Wong Vee Khee <vee.khee.wong@linux.intel.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
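A hedged sketch of the probing logic described above; the helper name and the register encoding used for the Clause-45 access are assumptions, not the patch's actual code:

#include <linux/mdio.h>
#include <linux/mii.h>
#include <linux/phy.h>

/* Read the PHY ID via Clause-22, and fall back to the Clause-45 PMA/PMD
 * device identifier registers when the Clause-22 registers read as zero.
 */
static int example_read_phy_id(struct mii_bus *bus, int addr, u32 *phy_id)
{
    int id1, id2;

    id1 = mdiobus_read(bus, addr, MII_PHYSID1);
    id2 = mdiobus_read(bus, addr, MII_PHYSID2);
    if (id1 < 0 || id2 < 0)
        return -EIO;

    *phy_id = ((u32)id1 << 16) | (u32)id2;
    if (*phy_id)
        return 0;       /* valid Clause-22 ID */

    /* ID of zero: retry in the Clause-45 address space (assumed encoding). */
    id1 = mdiobus_read(bus, addr,
                       MII_ADDR_C45 | (MDIO_MMD_PMAPMD << 16) | MDIO_DEVID1);
    id2 = mdiobus_read(bus, addr,
                       MII_ADDR_C45 | (MDIO_MMD_PMAPMD << 16) | MDIO_DEVID2);
    if (id1 < 0 || id2 < 0)
        return -EIO;

    *phy_id = ((u32)id1 << 16) | (u32)id2;
    return *phy_id ? 0 : -ENODEV;
}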
-
- 10 June 2021, 8 commits
-
-
Submitted by Vlad Buslov
Move the private bridge structures to dedicated headers that are accessible to the bridge tracepoint header. Implement the following tracepoints:
- Initialize FDB entry.
- Refresh FDB entry.
- Cleanup FDB entry.
- Create VLAN.
- Cleanup VLAN.
- Attach port to bridge.
- Detach port from bridge.
Usage example:
># cd /sys/kernel/debug/tracing
># echo mlx5:mlx5_esw_bridge_fdb_entry_init >> set_event
># cat trace
... kworker/u20:1-96 [001] .... 231.892503: mlx5_esw_bridge_fdb_entry_init: net_device=enp8s0f0_0 addr=e4:fd:05:08:00:02 vid=3 flags=0 lastuse=4294895695
Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Reviewed-by: Jianbo Liu <jianbol@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Vlad Buslov
With support for pvid vlans in the mlx5 bridge, it is possible to have rules in the untagged flow group when vlan filtering is enabled. However, such rules can also match tagged packets that didn't match anything in the tagged flow group. Filter such packets by introducing an additional flow group between the tagged and untagged groups. When filtering is enabled on the bridge, create an additional flow in the vlan filtering flow group that matches tagged packets with the specified source MAC address and redirects them to the new "skip" table. The skip table is a new lowest-level empty table that is used to skip all further processing of the packet in the bridge priority. Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Reviewed-by: Jianbo Liu <jianbol@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Vlad Buslov
Implement support for pushing a vlan header into an untagged packet on ingress of a port that has a pvid configured, and support for popping the vlan on egress of a port that has the matching vlan configured as untagged. To support such configurations, packet reformat contexts of the {INSERT|REMOVE}_HEADER types are created per such vlan and saved to struct mlx5_esw_bridge_vlan, which allows all FDB entries on a particular vlan to share a single packet reformat instance. When initializing FDB entries with pvid or untagged vlan type, set their mlx5_flow_act->pkt_reformat action accordingly. Flush all flows when removing a vlan from a port. This is necessary because, even though the software bridge removes all FDB entries before removing their vlan, the mlx5 bridge implementation deletes their corresponding flow entries from hardware in an asynchronous workqueue task, which would cause a firmware error if the vlan packet reformat context were deleted before all the flows that point to it. Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Reviewed-by: Jianbo Liu <jianbol@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Vlad Buslov
Add support for FDB vlan-tagged entries. Extend the ingress and egress flow tables with flow groups to match the packet vlan tag. Modify the flow creation code to include the vlan tag, if a vlan is configured on the port and vlan configuration is supported for offload. Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Reviewed-by: Jianbo Liu <jianbol@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Vlad Buslov
Establish all the necessary infrastructure for implementing vlan matching and vlan push/pop in the following patches:
- Add a new per-vport struct mlx5_esw_bridge_port that is used to store metadata for all port vlans. Initialize and clean up the instance of the structure when a port representor is linked/unlinked to the bridge. Use an xarray to allow quick vport metadata lookup by vport number.
- Add a new per-port-vlan struct mlx5_esw_bridge_vlan that is used to store vlan-specific data (vid, flags). Handle the SWITCHDEV_PORT_OBJ_{ADD|DEL} switchdev blocking event for the SWITCHDEV_OBJ_ID_PORT_VLAN object by creating/deleting the vlan structure and saving it in the per-vport xarray for quick lookup.
- Implement support for the SWITCHDEV_ATTR_ID_BRIDGE_VLAN_FILTERING object attribute that is used to toggle vlan filtering. Remove all FDB entries from hardware when the vlan filtering state is changed.
Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Reviewed-by: Jianbo Liu <jianbol@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
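A hedged sketch of the xarray-based per-vport metadata pattern mentioned above, with simplified stand-in structures rather than the driver's own:

#include <linux/slab.h>
#include <linux/xarray.h>

struct example_bridge_port {
    u16 vport_num;
    u16 flags;
};

static int example_port_init(struct xarray *ports, u16 vport_num)
{
    struct example_bridge_port *port;
    int err;

    port = kzalloc(sizeof(*port), GFP_KERNEL);
    if (!port)
        return -ENOMEM;

    port->vport_num = vport_num;
    /* xa_insert() fails with -EBUSY if this vport is already linked. */
    err = xa_insert(ports, vport_num, port, GFP_KERNEL);
    if (err)
        kfree(port);
    return err;
}

static struct example_bridge_port *example_port_lookup(struct xarray *ports,
                                                       u16 vport_num)
{
    return xa_load(ports, vport_num);   /* quick lookup by vport number */
}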
-
Submitted by Vlad Buslov
Dynamic FDB entries require the capability to age out unused entries. Such entries are either aged out by the kernel software bridge implementation or by the hardware switch that offloaded them (and notified the kernel to mark them as SWITCHDEV_FDB_ADD_TO_BRIDGE). Leaving ageing to the kernel bridge would result in it deleting offloaded dynamic FDB entries every ageing_time period, because packets are processed by hardware and, consequently, the 'used' timestamp for the FDB entry is not updated. However, since the hardware doesn't support ageing, a software solution inside the driver is required. In order to emulate hardware ageing in the driver, extend bridge FDB ingress flows with a counter and create a delayed br_offloads->update_work task on the bridge offloads workqueue. Run the task every second and update the 'used' timestamp in the software bridge dynamic entry by sending SWITCHDEV_FDB_ADD_TO_BRIDGE for the entry, if its flow hardware counter's lastuse field has changed since the last update. If lastuse hasn't changed for an ageing_time period, then delete the FDB entry and notify the kernel bridge by sending a SWITCHDEV_FDB_DEL_TO_BRIDGE notification. Register a blocking switchdev notifier callback and handle the attribute set SWITCHDEV_ATTR_ID_BRIDGE_AGEING_TIME event to allow the user to dynamically configure the bridge FDB entry ageing timeout. Save the value per-bridge in struct mlx5_esw_bridge. Silently ignore SWITCHDEV_ATTR_ID_PORT_{PRE_}BRIDGE_FLAGS switchdev events, since the mlx5 bridge implementation relies on the software bridge for implementing the necessary behavior for all of these flags. Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Reviewed-by: Jianbo Liu <jianbol@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
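A hedged sketch of the polling arrangement described here; names are illustrative and the per-entry ageing logic is elided:

#include <linux/jiffies.h>
#include <linux/workqueue.h>

struct example_br_offloads {
    struct workqueue_struct *wq;
    struct delayed_work update_work;
};

static void example_update_work(struct work_struct *work)
{
    struct example_br_offloads *offloads =
        container_of(work, struct example_br_offloads, update_work.work);

    /* ... walk FDB entries, compare counter lastuse, notify or delete ... */

    /* Re-arm the poll one second from now. */
    queue_delayed_work(offloads->wq, &offloads->update_work,
                       msecs_to_jiffies(1000));
}

static void example_start_ageing(struct example_br_offloads *offloads)
{
    INIT_DELAYED_WORK(&offloads->update_work, example_update_work);
    queue_delayed_work(offloads->wq, &offloads->update_work,
                       msecs_to_jiffies(1000));
}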
-
Submitted by Vlad Buslov
Hardware supported by the mlx5 driver doesn't provide learning and requires the driver to emulate all switch-like behavior in software. As such, all packets by default go through the miss path, appear on the representor, and get to the software bridge, if it is the upper device of the representor. This causes the bridge to process the packet in software, learn the MAC address into its FDB, and send a SWITCHDEV_FDB_ADD_TO_DEVICE event to all subscribers. In order to offload FDB entries in mlx5, register a switchdev notifier callback and implement support for both 'added_by_user' and dynamic FDB entry SWITCHDEV_FDB_ADD_TO_DEVICE events asynchronously, using the new mlx5_esw_bridge_offloads->wq ordered workqueue. In the workqueue callback, offload the ingress rule (matching the FDB entry MAC as the packet source MAC) and the egress table rule (matching the FDB entry MAC as the destination MAC). For the ingress table rule, also match the source vport to ensure that only traffic coming from the expected bridge port is matched by the offloaded rule. Save all the relevant FDB entry data in a struct mlx5_esw_bridge_fdb_entry instance and insert the instance into the new mlx5_esw_bridge->fdb_list list (for traversing all entries by the software ageing implementation in the following patch) and into the new mlx5_esw_bridge->fdb_ht hash table for fast retrieval. Notify the bridge that the FDB entry has been offloaded by sending a SWITCHDEV_FDB_OFFLOADED notification. Delete the FDB entry on reception of a SWITCHDEV_FDB_DEL_TO_DEVICE event. Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Reviewed-by: Jianbo Liu <jianbol@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Submitted by Vlad Buslov
Create new files bridge.{c|h} in the en/rep directory that implement bridge interaction with representor netdevices and handle the required events/notifications, and bridge.{c|h} in the esw directory that implement all the necessary eswitch offloading infrastructure and work on the vport/eswitch level. Provide a new kconfig option MLX5_BRIDGE which is automatically selected when both the kernel bridge and mlx5 eswitch configs are enabled. Provide the basic infrastructure for bridge offloads:
- struct mlx5_esw_bridge_offloads - per-eswitch bridge offload structure that encapsulates generic bridge-offloads data (notifier blocks, ingress flow table/group, etc.) that is created/deleted on enable/disable of eswitch offloads.
- struct mlx5_esw_bridge - per-bridge structure that encapsulates per-bridge data (reference counter, FDB, egress flow table/group, etc.) that is created when the first eswitch representor is attached to a new bridge and deleted when the last representor is removed from the bridge as a result of a NETDEV_CHANGEUPPER event.
The bridge tables are created with a new priority, FDB_BR_OFFLOAD, in the FDB namespace. The new priority sits between the tc-miss and slow path priorities. The priority consists of two levels: the ingress table, which is global per eswitch, matches incoming packets by src_mac/vid and redirects them to the next level (the egress table), which is chosen according to the ingress port's bridge membership and matches on dst_mac/vid in order to redirect the packet to a vport, according to the following flow:
FDB_TC_OFFLOAD
  -> FDB_FT_OFFLOAD
  -> FDB_TC_MISS
  -> FDB_BR_OFFLOAD: INGRESS_TABLE -(match)-> EGRESS_TABLE -(match)-> vport
                     (a miss continues down to the next priority)
  -> FDB_SLOW_PATH
Signed-off-by: Vlad Buslov <vladbu@nvidia.com> Reviewed-by: Jianbo Liu <jianbol@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-