- 15 June 2021, 1 commit
-
-
Committed by Jian Shen

stable inclusion from stable-5.10.42
commit a663c1e418a3b5b8e8edfad4bc8e7278c312d6fc
bugzilla: 55093
CVE: NA

--------------------------------

[ Upstream commit a289a7e5 ]

Currently, the netdevice is registered before client initialization completes, so there is a time window between the netdevice becoming available and becoming usable. In this window, if the user tries to change the channel number or the ring parameters, hns3_set_rx_cpu_rmap() may be called twice, and the kernel reports a bug:

[47199.416502] hns3 0000:35:00.0 eth1: set channels: tqp_num=1, rxfh=0
[47199.430340] hns3 0000:35:00.0 eth1: already uninitialized
[47199.438554] hns3 0000:35:00.0: rss changes from 4 to 1
[47199.511854] hns3 0000:35:00.0: Channels changed, rss_size from 4 to 1, tqps from 4 to 1
[47200.163524] ------------[ cut here ]------------
[47200.171674] kernel BUG at lib/cpu_rmap.c:142!
[47200.177847] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[47200.185259] Modules linked in: hclge(+) hns3(-) hns3_cae(O) hns_roce_hw_v2 hnae3 vfio_iommu_type1 vfio_pci vfio_virqfd vfio pv680_mii(O) [last unloaded: hclge]
[47200.205912] CPU: 1 PID: 8260 Comm: ethtool Tainted: G O 5.11.0-rc3+ #1
[47200.215601] Hardware name: , xxxxxx 02/04/2021
[47200.223052] pstate: 60400009 (nZCv daif +PAN -UAO -TCO BTYPE=--)
[47200.230188] pc : cpu_rmap_add+0x38/0x40
[47200.237472] lr : irq_cpu_rmap_add+0x84/0x140
[47200.243291] sp : ffff800010e93a30
[47200.247295] x29: ffff800010e93a30 x28: ffff082100584880
[47200.254155] x27: 0000000000000000 x26: 0000000000000000
[47200.260712] x25: 0000000000000000 x24: 0000000000000004
[47200.267241] x23: ffff08209ba03000 x22: ffff08209ba038c0
[47200.273789] x21: 000000000000003f x20: ffff0820e2bc1680
[47200.280400] x19: ffff0820c970ec80 x18: 00000000000000c0
[47200.286944] x17: 0000000000000000 x16: ffffb43debe4a0d0
[47200.293456] x15: fffffc2082990600 x14: dead000000000122
[47200.300059] x13: ffffffffffffffff x12: 000000000000003e
[47200.306606] x11: ffff0820815b8080 x10: ffff53e411988000
[47200.313171] x9 : 0000000000000000 x8 : ffff0820e2bc1700
[47200.319682] x7 : 0000000000000000 x6 : 000000000000003f
[47200.326170] x5 : 0000000000000040 x4 : ffff800010e93a20
[47200.332656] x3 : 0000000000000004 x2 : ffff0820c970ec80
[47200.339168] x1 : ffff0820e2bc1680 x0 : 0000000000000004
[47200.346058] Call trace:
[47200.349324] cpu_rmap_add+0x38/0x40
[47200.354300] hns3_set_rx_cpu_rmap+0x6c/0xe0 [hns3]
[47200.362294] hns3_reset_notify_init_enet+0x1cc/0x340 [hns3]
[47200.370049] hns3_change_channels+0x40/0xb0 [hns3]
[47200.376770] hns3_set_channels+0x12c/0x2a0 [hns3]
[47200.383353] ethtool_set_channels+0x140/0x250
[47200.389772] dev_ethtool+0x714/0x23d0
[47200.394440] dev_ioctl+0x4cc/0x640
[47200.399277] sock_do_ioctl+0x100/0x2a0
[47200.404574] sock_ioctl+0x28c/0x470
[47200.409079] __arm64_sys_ioctl+0xb4/0x100
[47200.415217] el0_svc_common.constprop.0+0x84/0x210
[47200.422088] do_el0_svc+0x28/0x34
[47200.426387] el0_svc+0x28/0x70
[47200.431308] el0_sync_handler+0x1a4/0x1b0
[47200.436477] el0_sync+0x174/0x180
[47200.441562] Code: 11000405 79000c45 f8247861 d65f03c0 (d4210000)
[47200.448869] ---[ end trace a01efe4ce42e5f34 ]---

The race is as follows:

executing hns3_client_init
|
register_netdev()
|                          hns3_set_channels()
|                          |
hns3_set_rx_cpu_rmap()     hns3_reset_notify_uninit_enet()
|                          |
|                          quit without calling
|                          hns3_free_rx_cpu_rmap() because the flag
|                          HNS3_NIC_STATE_INITED is unset.
|                          |
|                          hns3_reset_notify_init_enet()
|                          |
set HNS3_NIC_STATE_INITED  call hns3_set_rx_cpu_rmap() -- crash

Fix it by calling register_netdev() at the end of hns3_client_init().

Fixes: 08a10068 ("net: hns3: re-organize vector handle")
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
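For illustration, a minimal sketch of the ordering change described above; the probe steps are condensed and everything except register_netdev() is a hypothetical placeholder, not the driver's actual code.

#include <linux/netdevice.h>

/* Illustrative client-init flow: publish the netdev only when fully ready. */
static int example_client_init(struct net_device *netdev)
{
	int ret;

	/* ... allocate rings, map vectors, set up the rx cpu rmap, ... */
	/* ... set the "fully initialized" state bit, ... */

	/*
	 * register_netdev() is now the very last step, so ethtool callbacks
	 * such as set_channels() cannot run against a device that has not
	 * finished initializing.
	 */
	ret = register_netdev(netdev);
	if (ret) {
		/* undo the earlier init steps here */
		return ret;
	}

	return 0;
}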
-
- 03 June 2021, 5 commits
-
-
Committed by Peng Li

stable inclusion from stable-5.10.38
commit 5aa957e2b5fce76c1e8c845cf5ea1022fe1fd178
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit b416e872 ]

Currently, netif_tx_stop_all_queues() is used to ensure that xmit is not running, but it does not work for the concurrent case, since netif_tx_stop_all_queues() only sets a flag, without locking, to indicate that the xmit queues should not be run. So replace netif_tx_stop_all_queues() with netif_tx_disable(), which takes the xmit queue lock while marking the queue stopped.

Fixes: 76ad4f0e ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC")
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
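As a rough sketch of the difference (the down-path helper is hypothetical; only the two kernel APIs are real): netif_tx_disable() disables bottom halves and takes each queue's xmit lock while marking it stopped, so it cannot return while ndo_start_xmit() is still running on another CPU, whereas netif_tx_stop_all_queues() only sets the stop flag.

#include <linux/netdevice.h>

/* Illustrative down path: make sure no xmit is running before teardown. */
static void example_nic_down(struct net_device *netdev)
{
	/*
	 * netif_tx_stop_all_queues(netdev) would only set the DRV_XOFF flag
	 * on each queue; an xmit already past the check could still run.
	 */

	/*
	 * netif_tx_disable() takes each queue's _xmit_lock with BHs disabled
	 * before marking the queue stopped, closing that race.
	 */
	netif_tx_disable(netdev);
}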
-
Committed by Hao Chen

stable inclusion from stable-5.10.38
commit 90120c475dd7267541db416ce9490257f0bb15f7
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit 905416f1 ]

When skb->ip_summed is CHECKSUM_PARTIAL for a non-tunnel UDP packet whose destination port is the IANA-assigned one, the hardware is expected to do the checksum offload, but hardware whose version is below V3 will not do the checksum offload when the UDP destination port is 4790. So fix it by doing the checksum in software for this case.

Fixes: 76ad4f0e ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC")
Signed-off-by: Hao Chen <chenhao288@hisilicon.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
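A minimal sketch of this kind of software fallback, assuming an IPv4 packet for brevity; the function, the port macro and the hardware-version flag are illustrative, while skb_checksum_help() is the standard kernel helper that computes the checksum in software and clears CHECKSUM_PARTIAL.

#include <linux/in.h>
#include <linux/ip.h>
#include <linux/skbuff.h>
#include <linux/udp.h>

#define EXAMPLE_UDP_GPE_PORT 4790	/* IANA-assigned VXLAN-GPE port */

/* Illustrative: fall back to software checksum when the HW cannot offload. */
static int example_tx_csum(struct sk_buff *skb, bool hw_below_v3)
{
	if (skb->ip_summed != CHECKSUM_PARTIAL)
		return 0;

	if (hw_below_v3 && !skb->encapsulation &&
	    ip_hdr(skb)->protocol == IPPROTO_UDP &&
	    udp_hdr(skb)->dest == htons(EXAMPLE_UDP_GPE_PORT))
		return skb_checksum_help(skb);

	return 0;
}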
-
Committed by Jian Shen

stable inclusion from stable-5.10.38
commit 7a476a8a9cb69096ea37c8f71ec3455a4be3c948
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit b4047aac ]

In some cases, the device is not initialized because reset failed. If another task calls hns3_reset_notify_up_enet() before the reset is retried, it will cause an error due to uninitialized pointer access. So add a check for HNS3_NIC_STATE_INITED before calling hns3_nic_net_open() in hns3_reset_notify_up_enet().

Fixes: bb6b94a8 ("net: hns3: Add reset interface implementation in client")
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
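A minimal sketch of the guard, with a hypothetical private struct and state flag standing in for the driver's (only test_bit() and the general pattern are taken from the description):

#include <linux/bitops.h>
#include <linux/netdevice.h>

enum example_nic_state {
	EXAMPLE_NIC_STATE_INITED,	/* stand-in for HNS3_NIC_STATE_INITED */
};

struct example_nic_priv {
	struct net_device *netdev;
	unsigned long state;
};

/* Illustrative "reset up" notifier: bail out if init never completed. */
static int example_reset_notify_up(struct example_nic_priv *priv,
				   int (*net_open)(struct net_device *))
{
	if (!test_bit(EXAMPLE_NIC_STATE_INITED, &priv->state))
		return 0;	/* reset failed earlier; nothing to bring up */

	return net_open(priv->netdev);	/* e.g. the driver's ndo_open */
}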
-
Committed by Yunsheng Lin

stable inclusion from stable-5.10.38
commit b502a6a440667da6b9854ca14bbdac0fca458c58
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit d5d5e019 ]

Currently the hns3 driver only handles xmit skbs with one level of fraglist skb. Add handling for multiple levels by calling hns3_tx_bd_num() recursively when calculating the BD number and calling hns3_fill_skb_to_desc() recursively when filling tx descriptors. When the skb has a fraglist level of 24, the skb is simply dropped and stats.max_recursion_level is incremented to record the error. Move the stat handling from hns3_nic_net_xmit() to hns3_nic_maybe_stop_tx() in order to handle the different error stats, and add the 'max_recursion_level' and 'hw_limitation' stats. Note that the maximum recursion level of 24 is chosen according to commit 48a1df65 ("skbuff: return -EMSGSIZE in skb_to_sgvec to prevent overflow"). Since we were not able to find a testcase to verify the recursive fraglist case, no Fixes tag is provided.

Reported-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
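A minimal sketch of the recursive BD counting (the per-frag math is simplified and the names are illustrative; the 24-level limit follows the commit text):

#include <linux/errno.h>
#include <linux/skbuff.h>

#define EXAMPLE_MAX_RECURSION_LEVEL 24

/* Illustrative: count tx descriptors for an skb, walking nested fraglists. */
static int example_tx_bd_num(struct sk_buff *skb, unsigned int *bd_num,
			     unsigned int recursion_level)
{
	struct sk_buff *frag_skb;

	/* fraglists nested deeper than 24 levels are dropped by the caller */
	if (recursion_level >= EXAMPLE_MAX_RECURSION_LEVEL)
		return -EMSGSIZE;

	/* one BD for the linear part plus one per page fragment (simplified) */
	*bd_num += 1 + skb_shinfo(skb)->nr_frags;

	skb_walk_frags(skb, frag_skb) {
		int ret = example_tx_bd_num(frag_skb, bd_num,
					    recursion_level + 1);

		if (ret)
			return ret;
	}

	return 0;
}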
-
Committed by Salil Mehta

stable inclusion from stable-5.10.37
commit 3cf9fac71b7903065719d4743772d6302367b6fe
bugzilla: 51868
CVE: NA

--------------------------------

[ Upstream commit d392ecd1 ]

Limit the scope of the variable vector_ring_chain to the block where it is used.

Fixes: 424eb834 ("net: hns3: Unified HNS3 {VF|PF} Ethernet Driver for hip08 SoC")
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 30 September 2020, 3 commits
-
-
Committed by Huazhong Tan
Add support for UDP segmentation offload to the HNS3 driver when the device can do it.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
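As a rough sketch of how such an offload is typically advertised (the capability flag and function are stand-ins; NETIF_F_GSO_UDP_L4 is the generic kernel feature bit for UDP segmentation offload):

#include <linux/netdevice.h>

/* Illustrative: advertise UDP GSO only when the hardware supports it. */
static void example_set_udp_gso(struct net_device *netdev, bool hw_has_udp_gso)
{
	if (!hw_has_udp_gso)
		return;

	netdev->hw_features |= NETIF_F_GSO_UDP_L4;
	netdev->features |= NETIF_F_GSO_UDP_L4;
}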
-
Committed by Huazhong Tan

Since the maximum BD number may not be 8 now, rename hns3_over_8bd() to hns3_over_max_bd().

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Huazhong Tan

Currently, the driver is able to query the device's specifications, which include the maximum BD number of a non-TSO packet, so replace the macro HNS3_MAX_NON_TSO_BD_NUM with the queried value, and rewrite the macro HNS3_MAX_NON_TSO_SIZE, whose value depends on the maximum BD number of a non-TSO packet. Also, add a new parameter max_non_tso_bd_num to hns3_tx_bd_num() and hns3_skb_need_linearized(), so that they can get the maximum BD number of a non-TSO packet from the caller instead of calculating it themselves. The comment of hns3_skb_need_linearized() is updated as well.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 28 September 2020, 2 commits
-
-
Committed by Guangbin Huang

In order to improve code maintainability and compatibility, add support for querying the device capability by expanding the existing version query command. The device capability refers to the features supported by the device.

Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Guangbin Huang

To better identify the device version, add a member dev_version to struct hnae3_handle to replace the PCI revision. The dev_version consists of the hardware version and the PCI revision. The hardware version is queried from the firmware by an existing firmware version query command.

Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 25 September 2020, 4 commits
-
-
Committed by Guangbin Huang

VF devices do not have a speed division; a VF's speed depends on its PF. So it is incorrect for the macro name of the VF PCI device ID to contain 100G info; rename it by removing the 100G info.

Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Guangbin Huang

The 200G device has a new device ID 0xA228, so add this device ID to the PCI table so that the driver can probe it. Since speed_ability queried from the firmware has only 8 bits and they are already used up, the firmware adds an extra speed_ability_ext field to indicate more speed abilities in order to support 200G, and the driver needs to parse it.

Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yufeng Mo

In hns3_process_hw_error(), add hardware error detection for the ROCEE AXI RESP error type. When this error occurs, the client needs to be notified of it and take the corresponding action.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yufeng Mo

If a variable is assigned a value before it is used, there is no need to assign it an initial value. So remove these redundant operations.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 18 September 2020, 6 commits
-
-
Committed by Yunsheng Lin

Use napi_consume_skb() to batch the consuming of skbs when cleaning tx descriptors in NAPI polling.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
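A minimal sketch of the pattern (the reclaim loop is illustrative): passing the NAPI budget to napi_consume_skb() lets freed skbs go to the per-CPU NAPI cache for bulk freeing, while a budget of 0 signals a non-NAPI context and falls back to the safe freeing path.

#include <linux/skbuff.h>

/* Illustrative tx-clean loop: free completed skbs in bulk from NAPI poll. */
static void example_clean_tx(struct sk_buff **done_skbs, int n, int budget)
{
	int i;

	for (i = 0; i < n; i++) {
		/* budget > 0 means NAPI context, so the skb may be bulk-freed */
		napi_consume_skb(done_skbs[i], budget);
		done_skbs[i] = NULL;
	}
}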
-
Committed by Yunsheng Lin

writel() can be used to order I/O vs. memory by default when writing portable drivers. Use writel() to replace wmb() + writel_relaxed(); since writel() is dma_wmb() + writel_relaxed() on ARM64, there is an optimization here because dma_wmb() is a lighter barrier than wmb().

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
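A minimal sketch of the change at a doorbell write (the ring structure and register are illustrative):

#include <linux/io.h>
#include <linux/types.h>

struct example_ring {
	void __iomem *tail_reg;	/* doorbell register of this queue */
};

/* Illustrative doorbell kick after the descriptors are written in memory. */
static void example_ring_doorbell(struct example_ring *ring, u32 tail)
{
	/*
	 * Was: wmb(); writel_relaxed(tail, ring->tail_reg);
	 * writel() already orders prior normal memory writes before the MMIO
	 * write (dma_wmb() + writel_relaxed() on arm64), and dma_wmb() is
	 * cheaper than a full wmb().
	 */
	writel(tail, ring->tail_reg);
}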
-
Committed by Yunsheng Lin

Currently the HNS3_RING_RX_RING_FBDNUM_REG register is read to determine how many rx descriptors can be cleaned. To avoid this register read in the critical data path, use the valid bit in the rx descriptor to determine whether a specific rx descriptor can be cleaned. The hns3 driver clears the valid bit in the rx descriptor before notifying the hardware of it, and the hardware only sets the valid bit of the rx descriptor after the corresponding buffer has been filled with packet data and the other fields in the rx descriptor have been set accordingly. Add the hns3_rx_ring_move_fw() function to clear the valid bit in the rx descriptor before moving the rx ring's next_to_clean forward, to avoid cleaning an rx descriptor twice, and add a dma_rmb() barrier in hns3_handle_rx_bd() to make sure the valid bit is set before the other fields in the rx descriptor are read.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
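A minimal sketch of the polling side (the descriptor layout, bit position and helper are illustrative; dma_rmb() is the real barrier being described):

#include <linux/bits.h>
#include <linux/io.h>

#define EXAMPLE_RXD_VLD_B BIT(0)	/* illustrative "valid" bit */

struct example_rx_desc {
	__le32 bd_base_info;	/* valid bit is set here by hardware */
	/* ... other fields written by hardware ... */
};

/* Illustrative: decide whether an rx descriptor is ready to be cleaned. */
static bool example_rx_desc_ready(const struct example_rx_desc *desc)
{
	u32 bd_base_info = le32_to_cpu(READ_ONCE(desc->bd_base_info));

	if (!(bd_base_info & EXAMPLE_RXD_VLD_B))
		return false;	/* hardware has not filled this BD yet */

	/*
	 * Make sure the valid bit is observed before any of the other
	 * descriptor fields are read.
	 */
	dma_rmb();

	return true;
}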
-
Committed by Yunsheng Lin

Currently the HNS3_RING_TX_RING_HEAD_REG register is read to determine how many tx descriptors can be cleaned. To avoid this register read in the critical data path, use the valid bit in the tx descriptor to determine whether a specific tx descriptor can be cleaned. The hns3 driver sets the valid bit in the tx descriptor before ringing a doorbell to the hardware, and the hardware only clears the valid bit of the tx descriptor after the corresponding packet has been sent out to the wire. Because next_to_use for the tx ring is a changing variable while the driver is filling tx descriptors, reuse the rx ring's pull_len field to record the tx descriptors that have been notified to the hardware, so that hns3_nic_reclaim_desc() can decide how many tx descriptors' valid bits need checking when reclaiming tx descriptors. The io_err_cnt stat is also removed since it is no longer used.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yunsheng Lin

Use netdev_xmit_more() to defer the tx doorbell operation when skbs are passed to the driver back to back. By doing this we can improve overall xmit performance by avoiding some doorbell operations. Also, the tx_err_cnt stat is not used, so rename it to the tx_more stat.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
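A minimal sketch of doorbell coalescing with netdev_xmit_more() at the end of ndo_start_xmit() (the ring bookkeeping is illustrative):

#include <linux/io.h>
#include <linux/netdevice.h>

struct example_ring {
	void __iomem *tail_reg;
	int next_to_use;
	int pending;		/* descriptors queued but not yet doorbelled */
};

/* Illustrative: only ring the doorbell on the last skb of a burst. */
static void example_tx_doorbell(struct example_ring *ring,
				struct netdev_queue *txq)
{
	/*
	 * If the stack says more skbs are coming and the queue is not
	 * stopped, skip the MMIO write; the final skb of the burst (or a
	 * stopped queue) flushes everything that is pending.
	 */
	if (netdev_xmit_more() && !netif_tx_queue_stopped(txq)) {
		ring->pending++;
		return;
	}

	writel(ring->next_to_use, ring->tail_reg);
	ring->pending = 0;
}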
-
Committed by Yunsheng Lin

Batch the page reference count updates instead of doing them one at a time. By doing this we can improve the overall receive performance by avoiding some atomic increment operations when the rx page is reused.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 September 2020, 2 commits
-
-
Committed by Guojia Liao

hns3_update_promisc_mode() is defined but not used, so remove it.

Signed-off-by: Guojia Liao <liaoguojia@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Huazhong Tan
'io_base' has been defined and initialized, but never used, so remove it.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 27 August 2020, 1 commit
-
-
Committed by Tariq Toukan

Many device drivers use the same prefetch code structure to deal with a small L1 cacheline size. Move this code into a helper function and call it from the drivers.

Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
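A sketch of the kind of helper this series introduces (shown here under an illustrative name; the real helper lives in the networking headers): on platforms whose L1 cacheline is smaller than 128 bytes it prefetches a second cacheline so a typical packet header read is fully covered.

#include <linux/cache.h>
#include <linux/prefetch.h>
#include <linux/types.h>

/* Approximation of the shared prefetch helper described above. */
static inline void example_net_prefetch(void *p)
{
	prefetch(p);
#if L1_CACHE_BYTES < 128
	prefetch((u8 *)p + L1_CACHE_BYTES);
#endif
}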
-
- 26 August 2020, 1 commit
-
-
Committed by Yi Li

When skb->encapsulation is 0, skb->ip_summed is CHECKSUM_PARTIAL and the packet is a UDP packet whose destination port is the IANA-assigned one, the hardware is expected to do the checksum offload, but the hardware will not do the checksum offload when the UDP destination port is 6081. This patch fixes it by doing the checksum in software.

Reported-by: Li Bing <libing@winhong.com>
Signed-off-by: Yi Li <yili@winhong.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 24 August 2020, 1 commit
-
-
Committed by Gustavo A. R. Silva

Replace the existing /* fall through */ comments and their variants with the new pseudo-keyword macro fallthrough [1]. Also, remove unnecessary fall-through markings where that is the case.

[1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through

Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
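A minimal example of the substitution (the switch body is illustrative):

#include <linux/compiler_attributes.h>

/* Illustrative: mark an intentional fall-through explicitly. */
static int example_classify(int type)
{
	int score = 0;

	switch (type) {
	case 0:
		score += 1;
		fallthrough;	/* previously a "fall through" comment */
	case 1:
		score += 1;
		break;
	default:
		score = -1;
		break;
	}

	return score;
}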
-
- 29 July 2020, 2 commits
-
-
Committed by Yonglong Liu

When the queue depth or queue parameters are modified, there is a low probability that a TX timeout occurs. The two operations cause the link to go down and up while the watchdog is still working. All queues are stopped when the link is down, and after the carrier is on, all queues are woken up. If the watchdog checks the link between carrier-on and the queue wakeup, a false TX timeout occurs. So fix this issue by modifying the sequence of carrier-on and queue wakeup, making it symmetrical to the link-down action.

Fixes: 76ad4f0e ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC")
Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yunsheng Lin

The linear and frag data parts may be changed when the skb is expanded or linearized in skb_cow_head() or skb_checksum_help(), which are called by hns3_fill_skb_desc(), so the linear length returned by skb_headlen() before calling hns3_fill_skb_desc() is unreliable. Move hns3_fill_skb_desc() before the call to skb_headlen() to fix this bug.

Fixes: 76ad4f0e ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC")
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 July 2020, 3 commits
-
-
Committed by Yunsheng Lin

The content of the TX descriptor is automatically cleared by the hardware after the hardware has sent the packet out to the wire. When descriptor filling fails in hns3_nic_net_xmit(), hns3_clear_desc() is called to do the error handling, but it misses zeroing the TX descriptor and checking whether an unmapping is needed. So add the zeroing and checking in hns3_clear_desc() to avoid the above problem. Also add DESC_TYPE_UNKNOWN to indicate that the info in desc_cb is not valid, because hns3_nic_reclaim_desc() may treat a desc_cb->type of zero as a packet and add it to the sent-packet statistics accordingly.

Fixes: 76ad4f0e ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC")
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Yunsheng Lin

With GRO and fraglist support, an SKB can be aggregated to a total size of 65535, and when that SKB is forwarded through a bridge, its size may be pushed beyond 65535 when br_dev_queue_push_xmit() is called. The maximum send size of a BD supported by the hardware is 65535, so when an SKB with a headlen of over 65535 is sent to the driver, the driver needs to use multiple BDs to send the linear data, and the send size of the last BD is calculated incorrectly because the driver uses an '&' operation, which causes a TX error. Use a '%' operation to fix this problem.

Fixes: 3fe13ed9 ("net: hns3: avoid mult + div op in critical data path")
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
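A small illustration of the arithmetic (the helper and macro names are hypothetical; the 65535-byte limit follows the commit text): masking only equals modulo for power-of-two limits, so once the linear length exceeds 65535 the '&' shortcut computes the wrong last-BD size.

#define EXAMPLE_MAX_BD_SIZE 65535U	/* max payload of a single BD */

/* Illustrative: size of the last BD when a linear buffer spans several BDs. */
static unsigned int example_last_bd_size(unsigned int len)
{
	unsigned int sizeoflast = len % EXAMPLE_MAX_BD_SIZE;

	/* an exact multiple still needs a full-sized last BD */
	return sizeoflast ? sizeoflast : EXAMPLE_MAX_BD_SIZE;
}

/*
 * For len = 65636 the BDs are 65535 + 101 bytes, and 65636 % 65535 == 101,
 * while 65636 & 0xffff == 100, one byte short of the real remainder.
 */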
-
Committed by Yunsheng Lin

When a big TX buffer is sent using multiple BDs, the driver maps the whole TX buffer and unmaps it using the info in the desc_cb corresponding to each BD, but only the info in the desc_cb of the first BD is correct; the info in the other desc_cbs is wrong, which causes a TX unmapping problem when SMMU is on. Only set the mapping and freeing info in the desc_cb of the first BD to fix this problem, because the TX buffer only needs to be unmapped and freed once.

Fixes: 1e8a7977 ("net: hns3: add handling for big TX fragment")
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 07 July 2020, 1 commit
-
-
Committed by Huazhong Tan

When unloading the driver, if the flag HNS3_NIC_STATE_INITED has already been cleared, the debugfs will not be uninitialized, so fix it.

Fixes: b2292360 ("net: hns3: Add debugfs framework registration")
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 19 June 2020, 5 commits
-
-
Committed by Barry Song

Right now these are empty functions for our SoC, since the hardware can keep the cache coherent, but it is still good to align with the streaming DMA APIs, as device drivers should not make assumptions about the SoC.

Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Barry Song

Calling disable_irq() after request_irq() is still risky, as there is a chance that an interrupt can arrive after request_irq() and before disable_irq(). This should be done with the IRQ_NOAUTOEN flag instead.

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
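A minimal sketch of the pattern (the handler and name are illustrative): marking the interrupt IRQ_NOAUTOEN before request_irq() keeps it disabled from the start, so there is no window in which it could fire before the driver is ready.

#include <linux/interrupt.h>
#include <linux/irq.h>

static irqreturn_t example_irq_handler(int irq, void *data)
{
	return IRQ_HANDLED;
}

/* Illustrative: register the vector without the enable/disable race. */
static int example_request_vector(int irq, void *priv)
{
	int ret;

	/* keep the irq disabled until the driver explicitly enables it */
	irq_set_status_flags(irq, IRQ_NOAUTOEN);

	ret = request_irq(irq, example_irq_handler, 0, "example-vector", priv);
	if (ret)
		return ret;

	/* ... finish initialization, then call enable_irq(irq) ... */
	return 0;
}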
-
Committed by Barry Song
This is for improving the readability.

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Barry Song

Change the type of the buffer address from unsigned char to void.

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Barry Song

Since we are using a device-managed function, it is unnecessary to free in probe.

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 31 May 2020, 1 commit
-
-
Committed by Huazhong Tan

NETIF_F_HW_VLAN_CTAG_FILTER is not set in netdev->hw_features for the HNS3 driver, so the handler for NETIF_F_HW_VLAN_CTAG_FILTER in hns3_nic_set_features() will never be called; remove it.

Reported-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 May 2020, 1 commit
-
-
Committed by Huazhong Tan

The skb_has_frag_list() check in hns3_nic_net_xmit() is redundant, since skb_walk_frags() includes this check implicitly.

Reported-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 26 April 2020, 1 commit
-
-
Committed by Jian Shen

Currently, the PF driver removes all (including its VFs') MAC/VLAN flow director table entries when resetting, and restores them after the reset completes. In fact, the hardware clears all table entries only in an IMP reset or a global reset, so the driver only needs to restore the table entries in these cases, and needs to do nothing for a PF reset, FLR or other function-level reset. This patch optimizes it by removing the unnecessary clearing and restoring of table entries in the reset flow, and doing the restoring after the reset completes.

Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-