- 29 Sep 2021, 2 commits
-
-
Submitted by Jian Shen

Currently, in hns3_nic_set_real_num_queue(), the driver doesn't report the queue count and offset for a disabled TC. If the user enables multiple TCs but maps user priorities to only some of them, the queue range of the unmapped TCs may be displayed abnormally. Fix it by removing the TC-enable check and ensuring the queue count is not zero. With this change, tc_en is now unused, so remove it.

Fixes: a75a8efa ("net: hns3: Fix tc setup when netdev is first up")
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jian Shen

hns3_nic_net_open() is not allowed to be called repeatedly, but there is no check for this. When doing a device reset and setting up tc concurrently, there is a small window in which hns3_nic_net_open() can be called repeatedly, causing a kernel BUG by calling napi_enable() twice. The call trace is as follows:

[ 3078.222780] ------------[ cut here ]------------
[ 3078.230255] kernel BUG at net/core/dev.c:6991!
[ 3078.236224] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[ 3078.243431] Modules linked in: hns3 hclgevf hclge hnae3 vfio_iommu_type1 vfio_pci vfio_virqfd vfio pv680_mii(O)
[ 3078.258880] CPU: 0 PID: 295 Comm: kworker/u8:5 Tainted: G O 5.14.0-rc4+ #1
[ 3078.269102] Hardware name: , BIOS KpxxxFPGA 1P B600 V181 08/12/2021
[ 3078.276801] Workqueue: hclge hclge_service_task [hclge]
[ 3078.288774] pstate: 60400009 (nZCv daif +PAN -UAO -TCO BTYPE=--)
[ 3078.296168] pc : napi_enable+0x80/0x84
[ 3078.302879] lr : hns3_nic_net_open+0x138/0x510 [hns3]
[ 3078.314771] sp : ffff8000108abb20
[ 3078.319099] x29: ffff8000108abb20 x28: 0000000000000000 x27: ffff0820a8490300
[ 3078.329121] x26: 0000000000000001 x25: ffff08209cfc6200 x24: 0000000000000000
[ 3078.339044] x23: ffff0820a8490300 x22: ffff08209cd76000 x21: ffff0820abfe3880
[ 3078.349018] x20: 0000000000000000 x19: ffff08209cd76900 x18: 0000000000000000
[ 3078.358620] x17: 0000000000000000 x16: ffffc816e1727a50 x15: 0000ffff8f4ff930
[ 3078.368895] x14: 0000000000000000 x13: 0000000000000000 x12: 0000259e9dbeb6b4
[ 3078.377987] x11: 0096a8f7e764eb40 x10: 634615ad28d3eab5 x9 : ffffc816ad8885b8
[ 3078.387091] x8 : ffff08209cfc6fb8 x7 : ffff0820ac0da058 x6 : ffff0820a8490344
[ 3078.396356] x5 : 0000000000000140 x4 : 0000000000000003 x3 : ffff08209cd76938
[ 3078.405365] x2 : 0000000000000000 x1 : 0000000000000010 x0 : ffff0820abfe38a0
[ 3078.414657] Call trace:
[ 3078.418517]  napi_enable+0x80/0x84
[ 3078.424626]  hns3_reset_notify_up_enet+0x78/0xd0 [hns3]
[ 3078.433469]  hns3_reset_notify+0x64/0x80 [hns3]
[ 3078.441430]  hclge_notify_client+0x68/0xb0 [hclge]
[ 3078.450511]  hclge_reset_rebuild+0x524/0x884 [hclge]
[ 3078.458879]  hclge_reset_service_task+0x3c4/0x680 [hclge]
[ 3078.467470]  hclge_service_task+0xb0/0xb54 [hclge]
[ 3078.475675]  process_one_work+0x1dc/0x48c
[ 3078.481888]  worker_thread+0x15c/0x464
[ 3078.487104]  kthread+0x160/0x170
[ 3078.492479]  ret_from_fork+0x10/0x18
[ 3078.498785] Code: c8027c81 35ffffa2 d50323bf d65f03c0 (d4210000)
[ 3078.506889] ---[ end trace 8ebe0340a1b0fb44 ]---

Once hns3_nic_net_open() has executed successfully, the HNS3_NIC_STATE_DOWN flag is cleared. So add a check for this flag and return directly when HNS3_NIC_STATE_DOWN is not set.

Fixes: e8884027 ("net: hns3: call hns3_nic_net_open() while doing HNAE3_UP_CLIENT")
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
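A minimal sketch of the guard described above (HNS3_NIC_STATE_DOWN comes from the commit text; the surrounding structure and the rest of the open path are assumed):

static int hns3_nic_net_open(struct net_device *netdev)
{
        struct hns3_nic_priv *priv = netdev_priv(netdev);

        /* If the device is already up, HNS3_NIC_STATE_DOWN has been
         * cleared; return instead of enabling NAPI a second time.
         */
        if (!test_bit(HNS3_NIC_STATE_DOWN, &priv->state))
                return 0;

        /* ... original open path: map queues, enable NAPI, start queues ... */
        return 0;
}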
-
- 13 Sep 2021, 2 commits
-
-
Submitted by Yufeng Mo

The hardware cannot handle short tunnel frames below 65 bytes, which causes the VLAN tag to go missing. So pad the packet to 65 bytes for tunnel frames to fix this bug.

Fixes: 3db084d2 ("net: hns3: Fix for vxlan tx checksum bug")
Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
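A sketch of the padding described above, assuming it is applied in the tx path before the descriptor is filled; the constant name is illustrative:

/* hardware cannot handle tunnel frames shorter than 65 bytes */
#define HNS3_MIN_TUN_PKT_LEN    65U

        if (skb->encapsulation && skb->len < HNS3_MIN_TUN_PKT_LEN) {
                /* skb_put_padto() zero-pads to the given length; it frees
                 * the skb and returns an error on allocation failure.
                 */
                if (unlikely(skb_put_padto(skb, HNS3_MIN_TUN_PKT_LEN)))
                        return -ENOMEM;
        }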
-
Submitted by Yunsheng Lin

When the page pool was added to the hns3 driver, it was enabled unconditionally, which means the split-page handling in the hns3 driver is dead code. As there is a requirement to compare the performance of the driver's split-page handling and the page pool, add a module parameter to support disabling the page pool. When the page pool is shown to perform better in most cases, the split-page handling in the driver can be removed.

Fixes: 93188e96 ("net: hns3: support skb's frag page recycling based on page pool")
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
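A sketch of the module parameter described above; the parameter and helper names are assumptions based on the commit text:

static bool page_pool_enabled = true;
module_param(page_pool_enabled, bool, 0400);
MODULE_PARM_DESC(page_pool_enabled, "enable page pool for rx buffers");

static void hns3_alloc_page_pool(struct hns3_enet_ring *ring)
{
        /* fall back to the driver's split-page handling when disabled */
        if (!page_pool_enabled)
                return;

        /* ... create the per-ring page pool as before ... */
}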
-
- 31 Aug 2021, 3 commits
-
-
Submitted by Hao Chen
This patch removes some unnecessary spaces for cleanup.

Signed-off-by: Hao Chen <chenhao288@hisilicon.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Hao Chen
Add some required spaces to improve readability.

Signed-off-by: Hao Chen <chenhao288@hisilicon.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jian Shen

Currently, the driver sets the default features for netdev->features, netdev->hw_features, netdev->vlan_features and netdev->hw_enc_features separately. This is fussy, because most of the feature bits are the same. So refine it by copying the value from netdev->features.

Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 30 Aug 2021, 1 commit
-
-
Submitted by Hao Chen

Replace the "? :" expression with max() to simplify the code.

Signed-off-by: Hao Chen <chenhao288@hisilicon.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 27 Aug 2021, 1 commit
-
-
Submitted by Huazhong Tan

To improve readability and maintainability, add hns3_state_init() to initialize the state; this new function will be used to add more state initialization in the future.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 24 Aug 2021, 2 commits
-
-
Submitted by Yufeng Mo
Add support in ethtool for switching EQE/CQE mode.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Yufeng Mo

For devices whose version is V3 or above, the GL can select EQE or CQE mode, so add support for it. In CQE mode, the coalescing timer restarts when the first new completion occurs, while in EQE mode the timer does not restart.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 10 Aug 2021, 1 commit
-
-
Submitted by Yunsheng Lin

This patch adds skb frag page recycling support based on the frag page support in the page pool. Performance improves by about 10~20% for a single-thread iperf TCP flow with IOMMU disabled when the iperf server and irq/NAPI run on different CPUs. Performance improves by about 135% (14 Gbit to 33 Gbit) for a single-thread iperf TCP flow when IOMMU is in strict mode and the iperf server shares the same CPU with irq/NAPI.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
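A sketch of the per-ring page pool setup and frag allocation this feature relies on (field values and names are illustrative, not the exact driver code):

#include <net/page_pool.h>

static int hns3_alloc_page_pool(struct hns3_enet_ring *ring, struct device *dev)
{
        struct page_pool_params pp_params = {
                .flags          = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV |
                                  PP_FLAG_PAGE_FRAG,
                .order          = 0,
                .pool_size      = ring->desc_num,
                .nid            = dev_to_node(dev),
                .dev            = dev,
                .dma_dir        = DMA_FROM_DEVICE,
                .max_len        = PAGE_SIZE,
                .offset         = 0,
        };

        ring->page_pool = page_pool_create(&pp_params);
        return PTR_ERR_OR_ZERO(ring->page_pool);
}

/* rx refill: carve a buf_size sized frag out of a pooled page */
static struct page *hns3_alloc_rx_frag(struct hns3_enet_ring *ring,
                                       unsigned int *offset)
{
        return page_pool_dev_alloc_frag(ring->page_pool, offset,
                                        ring->buf_size);
}

When the skb is built, skb_mark_for_recycle() lets the freed frags return to the pool instead of going back to the page allocator.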
-
- 28 Jul 2021, 1 commit
-
-
Submitted by Arnd Bergmann

Most users of ndo_do_ioctl are ethernet drivers that implement the MII commands SIOCGMIIPHY/SIOCGMIIREG/SIOCSMIIREG, or hardware timestamping with SIOCSHWTSTAMP/SIOCGHWTSTAMP. Separate these from the few drivers that use ndo_do_ioctl to implement SIOCBOND, SIOCBR and SIOCWANDEV commands. This is a purely cosmetic change intended to help readers find their way through the implementation.

Cc: Doug Ledford <dledford@redhat.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jay Vosburgh <j.vosburgh@gmail.com>
Cc: Veaceslav Falico <vfalico@gmail.com>
Cc: Andy Gospodarek <andy@greyhouse.net>
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Vivien Didelot <vivien.didelot@gmail.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Vladimir Oltean <olteanv@gmail.com>
Cc: Leon Romanovsky <leon@kernel.org>
Cc: linux-rdma@vger.kernel.org
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
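For an ethernet driver such as hns3, this kind of split means the MII/timestamping handler hangs off the new callback rather than ndo_do_ioctl; a sketch (the handler name is an assumption):

static const struct net_device_ops hns3_nic_netdev_ops = {
        /* ... */
        /* SIOCGMIIPHY/SIOCGMIIREG/SIOCSMIIREG and SIOCxHWTSTAMP now
         * arrive through ndo_eth_ioctl instead of ndo_do_ioctl.
         */
        .ndo_eth_ioctl  = hns3_nic_do_ioctl,
        /* ... */
};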
-
- 19 Jun 2021, 1 commit
-
-
Submitted by Yunsheng Lin

In the current rx page reuse handling process, the rx page buffer may conflict between the driver and the stack in high-pressure scenarios. To fix this problem, check whether the page is only owned by the driver at the beginning and at the end of a page, to make sure there is no reuse conflict between driver and stack when desc_cb->page_offset is rolled back to zero or increased.

Fixes: fa7711b8 ("net: hns3: optimize the rx page reuse handling process")
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 16 Jun 2021, 7 commits
-
-
Submitted by Yunsheng Lin

Currently an rx page is reused to receive a future packet when the stack releases the previous skb quickly. If the old page cannot be reused, a new page is allocated and mapped, which consumes a lot of CPU when IOMMU is in strict mode, especially when the application and irq/NAPI happen to run on the same CPU. So allocate a new frag and memcpy the data into it, to avoid the costly IOMMU unmapping/mapping operation, and add "frag_alloc_err" and "frag_alloc" stats to the "ethtool -S ethX" command. Throughput improves by more than 50% when running a single iperf TCP thread with IOMMU in strict mode and iperf sharing the same CPU with irq/NAPI (rx_copybreak = 2048 and mtu = 1500).

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
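A generic rx copybreak sketch of the idea (threshold, stat and variable names are illustrative, not the exact hns3 implementation):

        /* for short frames, copy into a small skb and reuse the rx page
         * immediately, avoiding an IOMMU unmap/map cycle
         */
        if (len <= ring->rx_copybreak) {
                struct sk_buff *skb = napi_alloc_skb(napi, len);

                if (unlikely(!skb)) {
                        ring->stats.frag_alloc_err++;
                        return -ENOMEM;
                }
                ring->stats.frag_alloc++;

                dma_sync_single_for_cpu(dev, dma_addr, len, DMA_FROM_DEVICE);
                skb_put_data(skb, page_address(page) + page_offset, len);
                /* the rx page stays mapped and is reused for the next packet */
        }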
-
Submitted by Yunsheng Lin

Currently the rx page offset is only reset to zero when all of the conditions below are satisfied:
1. the rx page is only owned by the driver.
2. the rx page is reusable.
3. the page offset that is about to be given to the stack has reached the end of the page.

If the page offset is over hns3_buf_size(), it means the buffer below the offset of the page is usable when conditions 1 & 2 are satisfied, so the page offset can be reset to zero instead of being increased. We may then always be able to reuse the first 4K buffer of a 64K page, which limits the hot buffer size as much as possible. The above optimization is a side effect of refactoring the rx page reuse handling in order to support rx copybreak.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yunsheng Lin

Using the queue-based tx buffer, it is also possible to allocate a sgl buffer and use skb_to_sgvec() to convert the skb to a sgvec, in order to support dma_map_sg() and decrease the overhead of IOMMU mapping and unmapping. Firstly, it reduces the number of buffers. For example, a TCP skb may have a 66-byte header and 3 fragments of 4328, 32768, and 28064 bytes. With this patch, dma_map_sg() will combine them into two buffers, a 66-byte header and one 65160-byte fragment, by using the IOMMU. Secondly, it reduces the number of DMA mappings and unmappings: all the original 4 buffers are mapped only once rather than 4 times. Throughput improves by more than 10% when running a single iperf TCP thread with IOMMU in strict mode.

Suggested-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
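A sketch of the sgl mapping described above (helper name and error handling simplified):

static int hns3_map_skb_sgl(struct device *dev, struct sk_buff *skb,
                            struct scatterlist *sgl)
{
        int nents;

        sg_init_table(sgl, MAX_SKB_FRAGS + 1);

        /* linear data plus all frags, one sg entry each */
        nents = skb_to_sgvec(skb, sgl, 0, skb->len);
        if (nents < 0)
                return nents;

        /* the IOMMU may merge entries, so the returned count (number of
         * DMA segments) can be smaller than nents
         */
        return dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);
}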
-
Submitted by Huazhong Tan

Add support to query the tx spare buffer size from the configuration file, and use this info to do spare buffer initialization when the module parameter 'tx_spare_buf_size' is not specified.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yunsheng Lin

When the packet or frag size is small, it causes both security and performance issues. As DMA can't map a sub-page, some extra kernel data is visible to devices. On the other hand, the overhead of DMA map and unmap is huge when IOMMU is on. So add a queue-based tx shared bounce buffer to memcpy the small packet when the length of the xmitted skb is below tx_copybreak. Add the tx_spare_buf_size module parameter to set the size of the tx spare buffer, and add set/get_tunable to set or query tx_copybreak. Throughput improves from 30 Gbps to 90+ Gbps when running 16 netperf threads with a 32KB UDP message size and IOMMU in strict mode (tx_copybreak = 2000 and mtu = 1500).

Suggested-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
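A sketch of exposing tx_copybreak through the ethtool tunable interface ("ethtool --set-tunable ethX tx-copybreak N"); the private field name is an assumption:

static int hns3_get_tunable(struct net_device *netdev,
                            const struct ethtool_tunable *tuna, void *data)
{
        struct hns3_nic_priv *priv = netdev_priv(netdev);

        switch (tuna->id) {
        case ETHTOOL_TX_COPYBREAK:
                *(u32 *)data = priv->tx_copybreak;
                return 0;
        default:
                return -EOPNOTSUPP;
        }
}

static int hns3_set_tunable(struct net_device *netdev,
                            const struct ethtool_tunable *tuna,
                            const void *data)
{
        struct hns3_nic_priv *priv = netdev_priv(netdev);

        switch (tuna->id) {
        case ETHTOOL_TX_COPYBREAK:
                priv->tx_copybreak = *(u32 *)data;
                return 0;
        default:
                return -EOPNOTSUPP;
        }
}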
-
Submitted by Yunsheng Lin

Factor out hns3_fill_desc() so that it can be reused in the tx bounce buffer support.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yunsheng Lin

desc_cb is used to store mapping and freeing info for the corresponding desc, which is used in the cleaning process. There will be more desc_cb types coming up when supporting the tx bounce buffer, so change the desc_cb type to a bit-wise value in order to reduce desc_cb type checking operations in the data path. Also move the desc_cb type definition to hns3_enet.h because it is only used in hns3_enet.c, and declare a local variable desc_cb in hns3_clear_desc() to reduce lines of code.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 Jun 2021, 1 commit
-
-
Submitted by Huazhong Tan

Add PTP support for the HNS3 ethernet driver.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 01 Jun 2021, 1 commit
-
-
Submitted by Jian Shen

Previously, due to a hardware limitation, the port VLAN filter is effective for both the PF and its VFs simultaneously, so a single function is not able to enable/disable it separately, and the VLAN filter state is enabled by default. Now some devices support each function bypassing the port VLAN filter, so each function can switch the VLAN filter separately. Add a capability flag to check whether the device supports modifying the VLAN filter state. If the flag is on, the user can modify the VLAN filter state with ethtool -K. Furthermore, the default VLAN filter state is also changed according to whether a non-zero VLAN is used, so the device can receive packets with any VLAN tag if only VLAN 0 is used. The function hclge_need_enable_vport_vlan_filter() is used to help implement the above changes, and the VLAN filter handling for promisc mode can also be simplified by this function.

Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 26 May 2021, 1 commit
-
-
Submitted by Huazhong Tan
The Linux kernel has support for a dynamic interrupt moderation algorithm known as "dimlib". Replace the custom driver-specific implementation of dynamic interrupt moderation with the kernel's algorithm.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
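A sketch of how a driver typically feeds dimlib and applies its result (driver-side field names are assumptions):

#include <linux/dim.h>

/* called from the NAPI poll path with the ring group's accumulated stats */
static void hns3_update_rx_dim(struct hns3_enet_tqp_vector *tqp_vector)
{
        struct dim_sample sample = {};

        dim_update_sample(tqp_vector->event_cnt,
                          tqp_vector->rx_group.total_packets,
                          tqp_vector->rx_group.total_bytes, &sample);
        net_dim(&tqp_vector->rx_group.dim, sample);
}

/* dimlib schedules this work when it picks a new coalescing profile */
static void hns3_dim_work(struct work_struct *work)
{
        struct dim *dim = container_of(work, struct dim, work);
        struct dim_cq_moder cur =
                net_dim_get_rx_moderation(dim->mode, dim->profile_ix);

        /* write cur.usec / cur.pkts into the interrupt coalescing registers */

        dim->state = DIM_START_MEASURE;
}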
-
- 19 May 2021, 3 commits
-
-
Submitted by Yunsheng Lin

Currently skb_checksum_help()'s return value is ignored, but it may return an error when it fails to allocate memory while linearizing. So add a check for the return value of skb_checksum_help().

Fixes: 76ad4f0e ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC")
Fixes: 3db084d2 ("net: hns3: Fix for vxlan tx checksum bug")
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
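The change amounts to propagating the error instead of ignoring it; a sketch:

        /* skb_checksum_help() may fail to allocate memory while linearizing */
        ret = skb_checksum_help(skb);
        if (unlikely(ret))
                return ret;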
-
Submitted by Huazhong Tan

Currently, when adaptive mode is on, the user's coalesce configuration may be overwritten by the dynamic one. The reason is that the user's configuration is saved in struct hns3_enet_tqp_vector, whose values may be changed by the dynamic algorithm. To fix it, use struct hns3_nic_priv instead of struct hns3_enet_tqp_vector to save and get the user's configuration. By the way, the operations of storing and restoring coalesce info in the reset process are unnecessary now, so remove them as well.

Fixes: 434776a5 ("net: hns3: add ethtool_ops.set_coalesce support to PF")
Fixes: 7e96adc4 ("net: hns3: add ethtool_ops.get_coalesce support to PF")
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jian Shen

Currently, the netdevice is registered before client initialization completes, so there is a time window between the netdevice becoming available and becoming usable. In this case, if the user tries to change the channel number or ring parameters, it may cause hns3_set_rx_cpu_rmap() to be called twice and trigger a BUG.

[47199.416502] hns3 0000:35:00.0 eth1: set channels: tqp_num=1, rxfh=0
[47199.430340] hns3 0000:35:00.0 eth1: already uninitialized
[47199.438554] hns3 0000:35:00.0: rss changes from 4 to 1
[47199.511854] hns3 0000:35:00.0: Channels changed, rss_size from 4 to 1, tqps from 4 to 1
[47200.163524] ------------[ cut here ]------------
[47200.171674] kernel BUG at lib/cpu_rmap.c:142!
[47200.177847] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[47200.185259] Modules linked in: hclge(+) hns3(-) hns3_cae(O) hns_roce_hw_v2 hnae3 vfio_iommu_type1 vfio_pci vfio_virqfd vfio pv680_mii(O) [last unloaded: hclge]
[47200.205912] CPU: 1 PID: 8260 Comm: ethtool Tainted: G O 5.11.0-rc3+ #1
[47200.215601] Hardware name: , xxxxxx 02/04/2021
[47200.223052] pstate: 60400009 (nZCv daif +PAN -UAO -TCO BTYPE=--)
[47200.230188] pc : cpu_rmap_add+0x38/0x40
[47200.237472] lr : irq_cpu_rmap_add+0x84/0x140
[47200.243291] sp : ffff800010e93a30
[47200.247295] x29: ffff800010e93a30 x28: ffff082100584880
[47200.254155] x27: 0000000000000000 x26: 0000000000000000
[47200.260712] x25: 0000000000000000 x24: 0000000000000004
[47200.267241] x23: ffff08209ba03000 x22: ffff08209ba038c0
[47200.273789] x21: 000000000000003f x20: ffff0820e2bc1680
[47200.280400] x19: ffff0820c970ec80 x18: 00000000000000c0
[47200.286944] x17: 0000000000000000 x16: ffffb43debe4a0d0
[47200.293456] x15: fffffc2082990600 x14: dead000000000122
[47200.300059] x13: ffffffffffffffff x12: 000000000000003e
[47200.306606] x11: ffff0820815b8080 x10: ffff53e411988000
[47200.313171] x9 : 0000000000000000 x8 : ffff0820e2bc1700
[47200.319682] x7 : 0000000000000000 x6 : 000000000000003f
[47200.326170] x5 : 0000000000000040 x4 : ffff800010e93a20
[47200.332656] x3 : 0000000000000004 x2 : ffff0820c970ec80
[47200.339168] x1 : ffff0820e2bc1680 x0 : 0000000000000004
[47200.346058] Call trace:
[47200.349324]  cpu_rmap_add+0x38/0x40
[47200.354300]  hns3_set_rx_cpu_rmap+0x6c/0xe0 [hns3]
[47200.362294]  hns3_reset_notify_init_enet+0x1cc/0x340 [hns3]
[47200.370049]  hns3_change_channels+0x40/0xb0 [hns3]
[47200.376770]  hns3_set_channels+0x12c/0x2a0 [hns3]
[47200.383353]  ethtool_set_channels+0x140/0x250
[47200.389772]  dev_ethtool+0x714/0x23d0
[47200.394440]  dev_ioctl+0x4cc/0x640
[47200.399277]  sock_do_ioctl+0x100/0x2a0
[47200.404574]  sock_ioctl+0x28c/0x470
[47200.409079]  __arm64_sys_ioctl+0xb4/0x100
[47200.415217]  el0_svc_common.constprop.0+0x84/0x210
[47200.422088]  do_el0_svc+0x28/0x34
[47200.426387]  el0_svc+0x28/0x70
[47200.431308]  el0_sync_handler+0x1a4/0x1b0
[47200.436477]  el0_sync+0x174/0x180
[47200.441562] Code: 11000405 79000c45 f8247861 d65f03c0 (d4210000)
[47200.448869] ---[ end trace a01efe4ce42e5f34 ]---

The process is as below:

executing hns3_client_init
|
register_netdev()
|                          hns3_set_channels()
|                          |
hns3_set_rx_cpu_rmap()     hns3_reset_notify_uninit_enet()
|                          |
|                          quit without calling function
|                          hns3_free_rx_cpu_rmap for flag
|                          HNS3_NIC_STATE_INITED is unset.
|                          |
|                          hns3_reset_notify_init_enet()
|                          |
set HNS3_NIC_STATE_INITED  call hns3_set_rx_cpu_rmap()-- crash

Fix it by calling register_netdev() at the end of function hns3_client_init().

Fixes: 08a10068 ("net: hns3: re-organize vector handle")
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 May 2021, 4 commits
-
-
Submitted by Huazhong Tan

Currently, the debugfs command for BD info is implemented by "echo xxxx > cmd" and records the information in dmesg, which is unnecessary and heavy. To improve it, add two debugfs directories "tx_bd_info" and "rx_bd_info", create a file for each queue under these two directories, and query the BD info of a specific queue with "cat tx_bd_info/tx_bd_queue*" or "cat rx_bd_info/rx_bd_queue*", returning the result to userspace rather than recording it in dmesg.

The display style is as below:
$ cat rx_bd_info/rx_bd_queue0
Queue 0 rx bd info:
BD_IDX  L234_INFO  PKT_LEN  SIZE...
0       0x0        60       60...
1       0x0        1512     1512...
$ cat tx_bd_info/tx_bd_queue0
Queue 0 tx bd info:
BD_IDX  ADDRESS  VLAN_TAG  SIZE...
0       0x0      0         0...
1       0x0      0         0...

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
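A sketch of per-queue read-only debugfs files using the seq_file single-show helper (names follow the commit text; the show body and queue-count parameter are illustrative):

static int hns3_tx_bd_queue_show(struct seq_file *s, void *v)
{
        /* dump BD_IDX / ADDRESS / VLAN_TAG / SIZE ... for the ring in
         * s->private
         */
        return 0;
}
DEFINE_SHOW_ATTRIBUTE(hns3_tx_bd_queue);

static void hns3_dbg_create_tx_bd_files(struct hns3_nic_priv *priv,
                                        struct dentry *root, u16 num_queues)
{
        struct dentry *dir = debugfs_create_dir("tx_bd_info", root);
        char name[24];
        u16 i;

        for (i = 0; i < num_queues; i++) {
                snprintf(name, sizeof(name), "tx_bd_queue%u", i);
                /* the per-queue ring pointer ends up in s->private */
                debugfs_create_file(name, 0400, dir, &priv->ring[i],
                                    &hns3_tx_bd_queue_fops);
        }
}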
-
Submitted by Yufeng Mo
Currently, each debugfs command needs to create a file to get the information. To better support more debugfs commands, the debugfs process is reconstructed, including the process of creating dentries and files, and obtaining information.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Huazhong Tan

Only when the RXD advanced layout is enabled will the checksum of the entire packet, in some cases (e.g. IP fragments), be calculated and filled into the least significant 16 bits of the unused addr field. So refactor out the handling of the RX completion checksum: adjust the location of the checksum in the RX descriptor, and use the ptype table to identify whether this kind of checksum has been calculated.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Huazhong Tan

Currently, the driver gets the packet type by parsing the L3_ID/L4_ID/OL3_ID/OL4_ID from the RX descriptor, which is time-consuming. Now some new devices support the RXD advanced layout, which combines the previous OL3_ID/OL4_ID into an 8-bit ptype field, so the driver gets the packet type by looking up only one table, and L3_ID/L4_ID become reserved fields. For compatibility, the firmware reports the capability of the RXD advanced layout, and the driver identifies and enables it by default. This patch provides the basic function: identify and enable the RXD advanced layout, and refactor out hns3_rx_checksum() by using the ptype table to handle the RX checksum if supported.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 01 May 2021, 2 commits
-
-
Submitted by Peng Li

Currently, netif_tx_stop_all_queues() is used to ensure that xmit is not running, but in the concurrent case it does not take effect, since netif_tx_stop_all_queues() just sets a flag, without locking, to indicate that the xmit queue(s) should not be run. So replace netif_tx_stop_all_queues() with netif_tx_disable(), which takes the xmit queue lock while marking the queue stopped.

Fixes: 76ad4f0e ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC")
Signed-off-by: Peng Li <lipeng321@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
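The distinction, sketched in a hypothetical down path:

static void hns3_nic_net_down(struct net_device *netdev)
{
        /* netif_tx_stop_all_queues() only sets per-queue state bits, so a
         * concurrently running xmit can keep going; netif_tx_disable()
         * takes each queue's xmit lock while stopping it, closing the race.
         */
        netif_tx_disable(netdev);

        /* ... disable NAPI, disable interrupts, reset rings ... */
}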
-
Submitted by Hao Chen

When skb->ip_summed is CHECKSUM_PARTIAL, for a non-tunnel UDP packet whose destination port is the IANA-assigned one, the hardware is expected to do the checksum offload, but hardware whose version is below V3 will not do the checksum offload when the UDP destination port is 4790. So fix it by doing the checksum in software for this case.

Fixes: 76ad4f0e ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC")
Signed-off-by: Hao Chen <chenhao288@hisilicon.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
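A sketch of the software fallback under the stated assumptions (the hardware-capability helper is hypothetical):

static int hns3_handle_csum_partial(struct hns3_enet_ring *ring,
                                    struct sk_buff *skb)
{
        if (skb->ip_summed != CHECKSUM_PARTIAL)
                return 0;

        /* non-tunnel UDP packet to port 4790: hardware below V3 cannot
         * offload this checksum, so compute it in software
         */
        if (!skb->encapsulation &&
            skb->csum_offset == offsetof(struct udphdr, check) &&
            udp_hdr(skb)->dest == htons(4790) &&
            !hns3_hw_can_csum_port_4790(ring))  /* hypothetical helper */
                return skb_checksum_help(skb);

        return 0;
}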
-
- 30 Apr 2021, 1 commit
-
-
Submitted by Jian Shen

In some cases, the device is not initialized because a reset failed. If another task calls hns3_reset_notify_up_enet() before the reset retry, it will cause an error due to an uninitialized pointer access. So add a check for HNS3_NIC_STATE_INITED before calling hns3_nic_net_open() in hns3_reset_notify_up_enet().

Fixes: bb6b94a8 ("net: hns3: Add reset interface implementation in client")
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 Apr 2021, 1 commit
-
-
Submitted by Colin Ian King

The reset_prepare and reset_done calls have a null pointer check on ae_dev, however ae_dev is being dereferenced via the call to hns3_is_phys_func with the ae_dev->pdev argument. Fix this by performing the null pointer check on ae_dev first, hence short-circuiting the dereference of ae_dev in the call to hns3_is_phys_func.

Addresses-Coverity: ("Dereference before null check")
Fixes: 715c58e9 ("net: hns3: add suspend and resume pm_ops")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 Apr 2021, 2 commits
-
-
Submitted by Jiaran Zhang
To implement the system suspend/resume functions, the NIC driver needs to support:

1. When the system enters suspend mode, the driver implements the suspend callback of the NIC device: it quiesces the device, stops all RX/TX activity, and unmaps the interrupts.
2. When the system enters resume mode, the driver implements the resume callback of the NIC device and restores the device to the state before suspension.

When the system enters suspend or resume mode, the NIC driver actually executes the PF function reset process. When the PFs are suspending/resuming, the VFs also enter the suspend/resume state because the PFs trigger the VFs to reset, therefore no operation is required when the VF pci_driver is suspending or resuming.

Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
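A sketch of wiring the suspend/resume callbacks into the PCI driver; the reset-path helper names are assumptions based on the description above:

static int __maybe_unused hns3_suspend(struct device *dev)
{
        struct hnae3_ae_dev *ae_dev = dev_get_drvdata(dev);

        /* quiesce the device: runs the prepare half of a PF function reset */
        return hns3_reset_prepare_general(ae_dev);
}

static int __maybe_unused hns3_resume(struct device *dev)
{
        struct hnae3_ae_dev *ae_dev = dev_get_drvdata(dev);

        /* rebuild the device state saved before suspending */
        return hns3_reset_done_general(ae_dev);
}

static SIMPLE_DEV_PM_OPS(hns3_pm_ops, hns3_suspend, hns3_resume);

static struct pci_driver hns3_driver = {
        /* ... */
        .driver.pm = &hns3_pm_ops,
};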
-
Submitted by Jiaran Zhang
The flr_prepare/flr_done functions are not only used in the FLR scenario, but also used in the suspend/resume. Change the function names to prepare_for_reset/rebuild_for_reset, change the flr_prepare/flr_done to reset_prepare/reset_done in hnae3_ae_ops.

Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 06 Apr 2021, 1 commit
-
-
Submitted by Salil Mehta
Limiting the scope of the variable vector_ring_chain to the block where it is used.

Fixes: 424eb834 ("net: hns3: Unified HNS3 {VF|PF} Ethernet Driver for hip08 SoC")
Signed-off-by: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 30 Mar 2021, 2 commits
-
-
Submitted by Yunsheng Lin

skb_put_padto() may fail because of a memory allocation failure. sw_err_cnt is already used to log memory failures in hns3_skb_linearize(), so use it to log the memory failure for skb_put_padto() too.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yunsheng Lin

The actual on-wire size for a TSO skb should be (gso_segs - 1) * hdr + skb->len instead of skb->len, which can be seen by the user with the 'ethtool -S ethX' command, and 'Byte Queue Limits' also uses the send size stat to do queue limiting. So add send_bytes to the desc_cb to record the actual send size for a skb. As send_bytes is only used for the tx desc_cb and page_offset is only used for the rx desc, reuse the same space for both of them.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
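A sketch of how the recorded size would feed Byte Queue Limits (the header-length math follows the commit's formula for TCP GSO; names are illustrative):

        /* xmit path: record the on-wire size, then report it to BQL */
        if (skb_is_gso(skb))
                desc_cb->send_bytes = (skb_shinfo(skb)->gso_segs - 1) *
                                      (skb_transport_offset(skb) + tcp_hdrlen(skb)) +
                                      skb->len;
        else
                desc_cb->send_bytes = skb->len;

        netdev_tx_sent_queue(netdev_get_tx_queue(netdev, ring->queue_index),
                             desc_cb->send_bytes);

        /* tx clean path: complete the same byte count for the cleaned skbs */
        netdev_tx_completed_queue(netdev_get_tx_queue(netdev, ring->queue_index),
                                  pkts, bytes);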
-