- 26 September 2021, 2 commits
-
-
Submitted by Yufeng Mo

mainline inclusion
from mainline-master
commit cce1689e
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I46N7D
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=cce1689eb58d2fe3219da2ecd27cef8e644c4cc6

----------------------------------------------------------------------

Add support in ethtool for switching EQE/CQE mode.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Yufeng Mo

mainline inclusion
from mainline-master
commit 9f0c6f4b
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I46N7D
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=9f0c6f4b7475dd97e1f0eed81dd6ff173cf8c7fc

----------------------------------------------------------------------

For devices whose version is V3 or above, the GL can select EQE or CQE mode, so add support for it. In CQE mode, the coalesce timer restarts when the first new completion occurs, while in EQE mode the timer does not restart.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 2 September 2021, 1 commit
-
-
Submitted by Yufeng Mo

mainline inclusion
from mainline-master
commit ddccc5e3
category: feature
bugzilla: https://gitee.com/openeuler/kernel/issues/I46N7D
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ddccc5e368a33daeb6862192d4dca8e59af9234a

----------------------------------------------------------------------

Currently, four reset types are supported for the HNS3 ethernet driver: IMP reset, global reset, function reset, and FLR. Only FLR can currently be triggered by the user. To restore the device when an exception occurs, add support for triggering a reset via ethtool. Run "ethtool --reset DEVNAME mgmt | all | dedicated" to trigger the IMP | global | function reset manually. In addition, a VF can only trigger a function reset.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Link: https://lore.kernel.org/r/1628602128-15640-1-git-send-email-huangguangbin2@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
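As a rough illustration of how such an ethtool reset hook maps the user-visible flags onto reset levels, here is a minimal sketch. The ETH_RESET_* values are the standard ethtool flags behind "mgmt | all | dedicated"; the SK_* levels and sketch_do_reset() are hypothetical stand-ins for the driver's real reset machinery, not its actual code.

#include <linux/ethtool.h>
#include <linux/netdevice.h>

enum sketch_reset_level { SK_IMP_RESET, SK_GLOBAL_RESET, SK_FUNC_RESET };

static int sketch_do_reset(struct net_device *ndev,
			   enum sketch_reset_level level)
{
	/* the real driver would set up and schedule the chosen reset */
	return 0;
}

static int sketch_ethtool_reset(struct net_device *ndev, u32 *flags)
{
	enum sketch_reset_level level;

	if (*flags == ETH_RESET_ALL)		/* "all" -> global reset */
		level = SK_GLOBAL_RESET;
	else if (*flags == ETH_RESET_MGMT)	/* "mgmt" -> IMP reset */
		level = SK_IMP_RESET;
	else if (*flags == ETH_RESET_DEDICATED)	/* "dedicated" -> function reset */
		level = SK_FUNC_RESET;
	else
		return -EOPNOTSUPP;

	*flags = 0;				/* report everything handled */
	return sketch_do_reset(ndev, level);
}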
-
- 26 July 2021, 15 commits
-
-
Submitted by Yunsheng Lin

mainline inclusion
from mainline-master
commit 99f6b5fb
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=99f6b5fb5f63cf69c6e56bba8e5492c98c521a63

----------------------------------------------------------------------

Currently the rx page is reused to receive future packets when the stack releases the previous skb quickly. If the old page cannot be reused, a new page is allocated and mapped, which consumes a lot of CPU when the IOMMU is in strict mode, especially when the application and irq/NAPI happen to run on the same CPU.

So allocate a new frag and memcpy the data into it to avoid the costly IOMMU unmapping/mapping operation, and add "frag_alloc_err" and "frag_alloc" stats to the "ethtool -S ethX" cmd.

The throughput improves by over 50% when running a single thread of iperf using TCP when the IOMMU is in strict mode and iperf shares the same CPU with irq/NAPI (rx_copybreak = 2048 and mtu = 1500).

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
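For intuition, a minimal sketch of the copybreak idea on this RX path, using the real napi_alloc_frag() helper; the threshold name mirrors the rx_copybreak described above, but the helper itself is illustrative, not the driver's exact code.

#include <linux/skbuff.h>
#include <linux/string.h>

static void *sk_rx_copybreak(void *va, u32 len, u32 rx_copybreak)
{
	void *frag;

	if (len > rx_copybreak)
		return NULL;		/* too big: keep the page-reuse path */

	frag = napi_alloc_frag(len);	/* cheap per-CPU page-frag allocator */
	if (!frag)
		return NULL;		/* driver counts "frag_alloc_err" */

	memcpy(frag, va, len);		/* driver counts "frag_alloc" */
	return frag;
}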
-
Submitted by Yunsheng Lin

mainline inclusion
from mainline-master
commit 7459775e
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=7459775e9f658a2d5f3ff9d4d087e86f4d4e5b83

----------------------------------------------------------------------

Using the queue based tx buffer, it is also possible to allocate a sgl buffer and use skb_to_sgvec() to convert the skb to a sgvec, so that dma_map_sg() can be used to decrease the overhead of IOMMU mapping and unmapping.

Firstly, it reduces the number of buffers. For example, a TCP skb may have a 66-byte header and 3 fragments of 4328, 32768, and 28064 bytes. With this patch, dma_map_sg() will combine them into two buffers, the 66-byte header and one 65160-byte fragment, by using the IOMMU.

Secondly, it reduces the number of DMA mapping and unmapping operations. All the original 4 buffers are mapped only once rather than 4 times.

The throughput improves by over 10% when running a single thread of iperf using TCP when the IOMMU is in strict mode.

Suggested-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
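A minimal sketch of the scatterlist approach, using the real skb_to_sgvec() and dma_map_sg() kernel APIs; MAX_SG and the surrounding function are illustrative (the driver sizes its sgl buffer from the queue configuration).

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/skbuff.h>

#define MAX_SG	48	/* illustrative sgl capacity */

static int sk_map_skb_as_sgl(struct device *dev, struct sk_buff *skb,
			     struct scatterlist *sgl)
{
	int nents, mapped;

	sg_init_table(sgl, MAX_SG);
	nents = skb_to_sgvec(skb, sgl, 0, skb->len);	/* header + frags */
	if (nents < 0)
		return nents;				/* e.g. -EMSGSIZE */

	mapped = dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);
	if (!mapped)
		return -ENOMEM;

	/* 'mapped' may be < nents: the IOMMU coalesced contiguous chunks */
	return mapped;
}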
-
Submitted by Yunsheng Lin

mainline inclusion
from mainline-master
commit 907676b1
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=907676b130711fd1f627824559e92259db2061d1

----------------------------------------------------------------------

When the packet or frag size is small, it causes both security and performance issues. As DMA cannot map a sub-page, some extra kernel data is visible to devices. On the other hand, the overhead of DMA mapping and unmapping is huge when the IOMMU is on.

So add a queue based tx shared bounce buffer and memcpy the small packet into it when the length of the xmitted skb is below tx_copybreak. Add the tx_spare_buf_size module param to set the size of the tx spare buffer, and add set/get_tunable to set or query tx_copybreak.

The throughput improves from 30 Gbps to 90+ Gbps when running 16 netperf threads with 32KB UDP message size when the IOMMU is in strict mode (tx_copybreak = 2000 and mtu = 1500).

Suggested-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
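A hedged sketch of how tx_copybreak could be exposed through ethtool's tunable interface (ethtool --set-tunable ethX tx-copybreak N). ETHTOOL_TX_COPYBREAK is the standard tunable id; the private-state layout below is assumed for illustration.

#include <linux/ethtool.h>
#include <linux/netdevice.h>

struct sketch_priv {		/* assumed private state */
	u32 tx_copybreak;
};

static int sketch_get_tunable(struct net_device *ndev,
			      const struct ethtool_tunable *tuna, void *data)
{
	struct sketch_priv *priv = netdev_priv(ndev);

	switch (tuna->id) {
	case ETHTOOL_TX_COPYBREAK:
		*(u32 *)data = priv->tx_copybreak;	/* report threshold */
		return 0;
	default:
		return -EOPNOTSUPP;
	}
}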
-
Submitted by Yunsheng Lin

mainline inclusion
from mainline-master
commit 26f1ccdf
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=26f1ccdf609a9fb087f49a3782fdc2ade23cde82

----------------------------------------------------------------------

desc_cb is used to store mapping and freeing info for the corresponding desc, which is used in the cleaning process. More desc_cb types will come up when supporting the tx bounce buffer, so change the desc_cb type to a bit-wise value in order to reduce the desc_cb type checking operations in the data path.

Also move the desc_cb type definition to hns3_enet.h because it is only used in hns3_enet.c, and declare a local variable desc_cb in hns3_clear_desc() to reduce lines of code.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
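To make the "bit-wise type" point concrete, a sketch with illustrative flag names: a check like "does this desc_cb hold a DMA mapping" becomes a single mask test instead of a chain of equality checks in the hot path.

#include <linux/bits.h>
#include <linux/types.h>

#define SK_DESC_TYPE_SKB	BIT(0)	/* illustrative names, */
#define SK_DESC_TYPE_PAGE	BIT(1)	/* not the driver's own */
#define SK_DESC_TYPE_BOUNCE	BIT(2)
#define SK_DESC_TYPE_MAPPED	(SK_DESC_TYPE_SKB | SK_DESC_TYPE_PAGE)

static inline bool sk_desc_cb_needs_unmap(unsigned int type)
{
	return type & SK_DESC_TYPE_MAPPED;	/* one branch in the data path */
}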
-
Submitted by Huazhong Tan

mainline inclusion
from mainline-master
commit 0bf5eb78
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=0bf5eb788512187b744ef7f79de835e6cbe85b9c

----------------------------------------------------------------------

Add PTP support for the HNS3 ethernet driver.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Jian Shen

mainline inclusion
from mainline-master
commit 2ba30662
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=2ba306627f5950c9a7850f3b0499d39e522dc249

----------------------------------------------------------------------

Previously, due to a hardware limitation, the port VLAN filter was effective for both the PF and its VFs simultaneously, so a single function was not able to enable/disable it separately, and the VLAN filter state was enabled by default.

Now some devices support each function bypassing the port VLAN filter, so each function can switch the VLAN filter separately. Add a capability flag to check whether the device supports modifying the VLAN filter state. If the flag is on, the user is able to modify the VLAN filter state with ethtool -K.

Furthermore, the default VLAN filter state is also changed according to whether a non-zero VLAN is in use. The device can then receive packets with any VLAN tag if only VLAN 0 is used. The function hclge_need_enable_vport_vlan_filter() is used to help implement the above changes, and the VLAN filter handling for promisc mode can also be simplified by this function.

Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Huazhong Tan

mainline inclusion
from mainline-master
commit 307ea4ce
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=307ea4ce3edd3f7d1130d3c35955aa77063296cc

----------------------------------------------------------------------

The Linux kernel has support for a dynamic interrupt moderation algorithm known as "dimlib". Replace the custom driver-specific implementation of dynamic interrupt moderation with the kernel's algorithm.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
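A rough sketch of how a driver feeds dimlib from its NAPI poll. dim_update_sample() and net_dim() are the real dimlib entry points, but the exact net_dim() signature (by value vs. by pointer) has varied across kernel versions, so treat this as a shape rather than an exact API contract.

#include <linux/dim.h>

static void sk_feed_dim(struct dim *dim, u16 event_cnt, u64 pkts, u64 bytes)
{
	struct dim_sample sample = {};

	/* snapshot the counters accumulated since the last poll */
	dim_update_sample(event_cnt, pkts, bytes, &sample);

	/* may schedule dim->work to apply a new coalesce profile */
	net_dim(dim, sample);
}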
-
Submitted by Huazhong Tan

mainline inclusion
from mainline-master
commit 77e91848
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=77e9184869c9fb00a482357ea8eef3bd7ae3d45a

----------------------------------------------------------------------

Currently, the debugfs command for bd info is implemented by "echo xxxx > cmd" and records the information in dmesg, which is unnecessary and heavy. To improve it, add two debugfs directories "tx_bd_info" and "rx_bd_info", create a file for each queue under these two directories, and query the bd info of a specific queue by "cat tx_bd_info/tx_bd_queue*" or "cat rx_bd_info/rx_bd_queue*", returning the result to userspace rather than recording it in dmesg.

The display style is below:
$ cat rx_bd_info/rx_bd_queue0
Queue 0 rx bd info:
BD_IDX  L234_INFO  PKT_LEN  SIZE...
0       0x0        60       60...
1       0x0        1512     1512...
$ cat tx_bd_info/tx_bd_queue0
Queue 0 tx bd info:
BD_IDX  ADDRESS  VLAN_TAG  SIZE...
0       0x0      0         0...
1       0x0      0         0...

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
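A hedged sketch of the per-queue debugfs layout described above, using the real debugfs_create_dir()/debugfs_create_file() APIs; the file_operations and the data pointer are hypothetical placeholders.

#include <linux/debugfs.h>

/* hypothetical fops that would print one queue's BD info on read */
static const struct file_operations sketch_rx_bd_fops;

static void sketch_create_rx_bd_files(struct dentry *root, void *ring_data,
				      unsigned int num_queues)
{
	struct dentry *dir = debugfs_create_dir("rx_bd_info", root);
	unsigned int i;
	char name[24];

	for (i = 0; i < num_queues; i++) {
		/* one readable file per queue: rx_bd_queue0, rx_bd_queue1, ... */
		snprintf(name, sizeof(name), "rx_bd_queue%u", i);
		debugfs_create_file(name, 0400, dir, ring_data,
				    &sketch_rx_bd_fops);
	}
}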
-
Submitted by Yufeng Mo

mainline inclusion
from mainline-master
commit 5e69ea7e
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=5e69ea7ee2a69f68c4172afcb0cbe29e7162fb6e

----------------------------------------------------------------------

Currently, each debugfs command needs to create a file to get the information. To better support more debugfs commands, the debugfs handling is reworked, including the process of creating dentries and files and obtaining information.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Huazhong Tan

mainline inclusion
from mainline-master
commit 1ddc028a
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1ddc028ac84988b6b1c9ceb9d15acbf321735ca3

----------------------------------------------------------------------

Only when the RXD advanced layout is enabled will the checksum of the entire packet, in some cases (e.g. IP fragments), be calculated and filled into the least significant 16 bits of the unused addr field. So refactor out the handling of the RX completion checksum: adjust the location of the checksum in the RX descriptor, and use the ptype table to identify whether this kind of checksum is calculated.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Huazhong Tan

mainline inclusion
from mainline-master
commit 79664077
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=796640778c26f3d99fde173bb7b1d726b5f0d816

----------------------------------------------------------------------

Currently, the driver gets the packet type by parsing the L3_ID/L4_ID/OL3_ID/OL4_ID from the RX descriptor, which is time-consuming. Now some new devices support the RXD advanced layout, which combines the previous OL3_ID/OL4_ID into an 8-bit ptype field, so the driver gets the packet type by looking up only one table, and L3_ID/L4_ID become reserved fields.

Considering compatibility, the firmware reports the capability of the RXD advanced layout, and the driver identifies and enables it by default.

This patch provides the basic function: identify and enable the RXD advanced layout, and refactor out hns3_rx_checksum() by using the ptype table to handle the RX checksum if supported.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Yunsheng Lin

mainline inclusion
from mainline-v5.13-rc1
commit 811c0830
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=811c0830eb4ca8811ed80fe40378f622b9844835

----------------------------------------------------------------------

The actual size on the wire for a TSO skb should be (gso_segs - 1) * hdr + skb->len instead of skb->len, which the user can see with the 'ethtool -S ethX' cmd, and 'Byte Queue Limits' also uses the send size stat to do the queue limiting. So add send_bytes in the desc_cb to record the actual send size for an skb. And since send_bytes is only for the tx desc_cb and page_offset is only for the rx desc, reuse the same space for both of them.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
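The formula is easy to sanity-check with a worked example (the numbers below are illustrative, not from the patch): a TSO skb with a 66-byte header and 65160 bytes of payload at mss 1448 becomes 45 segments of 1514 bytes each.

static unsigned int sk_tso_wire_bytes(unsigned int skb_len,
				      unsigned int hdr_len,
				      unsigned int gso_segs)
{
	/* e.g. 65226 + (45 - 1) * 66 = 68130 = 45 * 1514 bytes on the wire */
	return skb_len + (gso_segs - 1) * hdr_len;
}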
-
Submitted by Huazhong Tan

mainline inclusion
from mainline-v5.12-rc1-dontuse
commit 64749c9c
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=64749c9c38a9b7f64b83b6970b679f2fb7cd6387

----------------------------------------------------------------------

Since hns3_uninit_all_ring() only returns 0, remove this redundant return value and the function declaration in hns3_enet.h.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Yufeng Mo

mainline inclusion
from mainline-v5.12-rc1-dontuse
commit 9393eb50
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=9393eb5034a040931120f9c6eed9bf0e78029192

----------------------------------------------------------------------

In macro definitions, parentheses are unnecessary in some cases, such as around the calling parameter of a function, the left-hand variable of the equals sign, and so on. So remove these unnecessary parentheses according to these rules.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Yufeng Mo

mainline inclusion
from mainline-v5.12-rc1-dontuse
commit e070c8b9
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e070c8b91ac1c7810c8448b1e5534d1895a0c7f4

----------------------------------------------------------------------

Since newer hardware may support different frame sizes, add support to obtain the capability from the firmware instead of using a fixed value.

Signed-off-by: Yufeng Mo <moyufeng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Yongxin Li <liyongxin1@huawei.com>
Signed-off-by: Junxin Chen <chenjunxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 16 July 2021, 3 commits
-
-
Submitted by Huazhong Tan

mainline inclusion
from mainline-v5.11-rc1
commit 3e281621
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=3e2816219d7ccae4ab4b5ed480566e05aef9cf1a

----------------------------------------------------------------------

For devices that have the capability to handle udp tunnel checksum segmentation, add support for it.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: li yongxin <liyongxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Huazhong Tan

mainline inclusion
from mainline-v5.11-rc1
commit 66d52f3b
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=66d52f3bf385c8d969e9ca6b281ddf773c9691d7

----------------------------------------------------------------------

For devices that support TX hardware checksum, the hardware can calculate the checksum from the start position and fill the result into the offset position, which saves the driver the work of calculating the L3/L4 type and header length. So add this feature for the HNS3 ethernet driver. The previous "simple BD" description is unsuitable, so rename it to HW TX CSUM.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: li yongxin <liyongxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
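A minimal sketch of what this lets the TX path do for CHECKSUM_PARTIAL skbs: the stack already records where checksumming starts and where the result goes, so the driver only has to copy those two offsets into the descriptor. The two descriptor fields here are assumed, not the real layout.

#include <linux/skbuff.h>

static void sk_fill_hw_csum(struct sk_buff *skb, u16 *desc_csum_start,
			    u16 *desc_csum_offset)
{
	if (skb->ip_summed != CHECKSUM_PARTIAL)
		return;

	/* where the HW starts summing, and where it writes the result */
	*desc_csum_start = skb_checksum_start_offset(skb);
	*desc_csum_offset = skb->csum_offset;
}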
-
Submitted by Huazhong Tan

mainline inclusion
from mainline-v5.11-rc1
commit 4b2fe769
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4b2fe769aad9736624147882e566eeeb8dd4c187

----------------------------------------------------------------------

In some cases (for example IP fragments), the hardware will calculate the checksum of the whole packet on RX and set the HNS3_RXD_L2_CSUM_B flag in the descriptor, so add support to utilize this checksum.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: li yongxin <liyongxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
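A sketch of consuming such a whole-packet checksum: csum_unfold() and CHECKSUM_COMPLETE are the standard kernel mechanics, while extracting hw_csum from the descriptor is assumed to have happened already.

#include <linux/skbuff.h>
#include <net/checksum.h>

static void sk_set_rx_l2_csum(struct sk_buff *skb, u16 hw_csum)
{
	/* tell the stack the HW summed the whole packet; it will verify */
	skb->ip_summed = CHECKSUM_COMPLETE;
	skb->csum = csum_unfold((__force __sum16)hw_csum);
}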
-
- 15 July 2021, 4 commits
-
-
Submitted by Huazhong Tan

mainline inclusion
from mainline-v5.11-rc1
commit de25bcc4
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=de25bcc47fba49a848764fdfab76741b7e17ca2f

----------------------------------------------------------------------

Besides GL (Gap Limiting), QL (Quantity Limiting) can be modified dynamically when DIM is supported. So rename gl_adapt_enable to adapt_enable in struct hns3_enet_coalesce.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: li yongxin <liyongxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Huazhong Tan

mainline inclusion
from mainline-v5.11-rc1
commit 5ac84b02
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=5ac84b02d372ff45bce48c78beedbffe7c9158c0

----------------------------------------------------------------------

For devices whose version is V3 or above, the GL configuration can be set in 1us units, so add support for configuring this field.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: li yongxin <liyongxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Huazhong Tan

mainline inclusion
from mainline-v5.11-rc1
commit ab16b49c
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ab16b49cdf986172373afc16b4039f058aa3b22d

----------------------------------------------------------------------

For maintainability and compatibility, add support for querying the maximum value of GL.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: li yongxin <liyongxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
Submitted by Huazhong Tan

mainline inclusion
from mainline-v5.11-rc1
commit 91bfae25
category: feature
bugzilla: 173966
CVE: NA

Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=91bfae25eedd981b384339c7b12bef9eeaba0f34

----------------------------------------------------------------------

QL (quantity limiting) means that the hardware supports interrupt coalescing based on the frame quantity. QL can be configured when int_ql_max in the device's specification is non-zero, so add support to configure it. Also, rename the two coalesce init functions to fit their purpose.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: li yongxin <liyongxin1@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
-
- 3 June 2021, 1 commit
-
-
Submitted by Yunsheng Lin

stable inclusion
from stable-5.10.38
commit b502a6a440667da6b9854ca14bbdac0fca458c58
bugzilla: 51875
CVE: NA

--------------------------------

[ Upstream commit d5d5e019 ]

Currently the hns3 driver only handles xmit skbs with one level of fraglist skb. Add handling for multiple levels by calling hns3_tx_bd_num() recursively when calculating the bd num, and calling hns3_fill_skb_to_desc() recursively when filling the tx desc.

When the skb has a fraglist level of 24, the skb is simply dropped and stats.max_recursion_level is incremented to record the error. Move the stat handling from hns3_nic_net_xmit() to hns3_nic_maybe_stop_tx() in order to handle the different error stats, and add the 'max_recursion_level' and 'hw_limitation' stats.

Note that the max recursive level of 24 is chosen according to commit 48a1df65 ("skbuff: return -EMSGSIZE in skb_to_sgvec to prevent overflow"). Since we were not able to find a testcase to verify the recursive fraglist case, a Fixes tag is not provided.

Reported-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Chen Jun <chenjun102@huawei.com>
Acked-by: Weilong Chen <chenweilong@huawei.com>
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
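A simplified sketch of the recursive BD counting with the depth cap. The fraglist walk uses the real skb_walk_frags() helper; the function name, limit handling, and error value are illustrative, not the driver's exact code.

#include <linux/skbuff.h>

#define SK_MAX_FRAGLIST_LEVEL	24	/* cap matching the description above */

static int sk_count_tx_bds(struct sk_buff *skb, unsigned int level)
{
	struct sk_buff *frag_skb;
	int bd_num, sub;

	if (level >= SK_MAX_FRAGLIST_LEVEL)
		return -ENOMEM;		/* drop and bump max_recursion_level */

	bd_num = skb_shinfo(skb)->nr_frags + 1;	/* page frags + linear part */

	skb_walk_frags(skb, frag_skb) {		/* one level deeper per fraglist skb */
		sub = sk_count_tx_bds(frag_skb, level + 1);
		if (sub < 0)
			return sub;
		bd_num += sub;
	}
	return bd_num;
}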
-
- 30 September 2020, 2 commits
-
-
Submitted by Guangbin Huang

Add debugfs support for dumping the tqp enable status.

Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Huazhong Tan

Currently, the driver is able to query the device's specifications, which include the maximum BD number of a non-TSO packet, so replace the macro HNS3_MAX_NON_TSO_BD_NUM with the queried value, and rewrite the macro HNS3_MAX_NON_TSO_SIZE, whose value depends on the maximum BD number of a non-TSO packet.

Also, add a new parameter max_non_tso_bd_num to the functions hns3_tx_bd_num() and hns3_skb_need_linearized(), so they can get the maximum BD number of a non-TSO packet from the caller instead of calculating it themselves. The comment of hns3_skb_need_linearized() is updated as well.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 18 September 2020, 5 commits
-
-
Submitted by Yunsheng Lin

Use napi_consume_skb() to batch the consuming of skbs when cleaning tx descs in NAPI polling.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yunsheng Lin

writel() can be used to order I/O vs memory by default when writing portable drivers. Use writel() to replace wmb() + writel_relaxed(); since writel() is dma_wmb() + writel_relaxed() on ARM64, there is an optimization here because dma_wmb() is a lighter barrier than wmb().

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
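The change amounts to the following pattern (a sketch, not the driver's exact code): descriptor writes in normal memory must be visible to the device before the MMIO doorbell, and writel() provides that ordering by itself.

#include <linux/io.h>

static void sk_ring_doorbell(void __iomem *db_reg, int pending)
{
	/* before: wmb(); writel_relaxed(pending, db_reg); */
	writel(pending, db_reg);
}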
-
Submitted by Yunsheng Lin

Currently the HNS3_RING_TX_RING_HEAD_REG register is read to determine how many tx descs can be cleaned. To avoid the register read operation in the critical data path, use the valid bit in the tx desc to determine whether a specific tx desc can be cleaned.

The hns3 driver sets the valid bit in the tx desc before ringing a doorbell to the hw, and the hw will only clear the valid bit of the tx desc after the corresponding packet is sent out to the wire. And because next_to_use for the tx ring is a changing variable while the driver is filling tx descs, reuse the pull_len of the rx ring to record the tx descs that have been notified to the hw, so that hns3_nic_reclaim_desc() can decide how many tx descs' valid bits need checking when reclaiming tx descs.

The io_err_cnt stat is also removed since it is no longer used.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
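A simplified sketch of valid-bit based reclaim, with an illustrative descriptor layout and bit name (the real driver's fields differ): walk from next_to_clean and stop at the first descriptor the hardware has not finished with, instead of reading the HEAD register.

#include <linux/bits.h>
#include <linux/types.h>

#define SK_TXD_VALID_B	BIT(0)		/* illustrative valid bit */

struct sk_tx_desc {
	__le16 flags;			/* illustrative flags field */
};

static int sk_count_cleanable(struct sk_tx_desc *desc, int next_to_clean,
			      int ring_size, int sent)
{
	int cnt = 0, i = next_to_clean;

	while (cnt < sent) {
		/* HW clears the valid bit once the packet is on the wire */
		if (le16_to_cpu(desc[i].flags) & SK_TXD_VALID_B)
			break;
		cnt++;
		i = (i + 1) % ring_size;
	}
	return cnt;			/* number of descs safe to reclaim */
}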
-
Submitted by Yunsheng Lin

Use netdev_xmit_more() to defer the tx doorbell operation when skbs are passed to the driver continuously. By doing this we can improve the overall xmit performance by avoiding some doorbell operations.

Also, the tx_err_cnt stat is not used, so rename it to the tx_more stat.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
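A sketch of the doorbell deferral pattern, using the real netdev_xmit_more() and netif_xmit_stopped() helpers; the surrounding function and names are illustrative.

#include <linux/io.h>
#include <linux/netdevice.h>

static void sk_kick_tx(struct netdev_queue *txq, void __iomem *db_reg,
		       int pending)
{
	/* more skbs are on the way and the queue is still open: defer,
	 * which is where the driver would count a "tx_more"
	 */
	if (netdev_xmit_more() && !netif_xmit_stopped(txq))
		return;

	writel(pending, db_reg);	/* one MMIO doorbell for the whole burst */
}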
-
Submitted by Yunsheng Lin

Batch the page reference count updates instead of doing them one at a time. By doing this we can improve the overall receive performance by avoiding some atomic increment operations when the rx page is reused.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 9 September 2020, 3 commits
-
-
Submitted by Guojia Liao

hns3_update_promisc_mode is defined but not used, so remove it.

Signed-off-by: Guojia Liao <liaoguojia@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Huazhong Tan

There are several queue-related macros defined but never used, so remove them.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Huazhong Tan

'io_base' has been defined and initialized but never used, so remove it.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 July 2020, 1 commit
-
-
Submitted by Yunsheng Lin

With GRO and fraglist support, an SKB can be aggregated to a total size of 65535, and when that SKB is forwarded through a bridge, its size may be pushed to exceed 65535 when br_dev_queue_push_xmit() is called.

The max send size of a BD supported by the HW is 65535. When an SKB with a headlen of over 65535 is sent to the driver, the driver needs to use multiple BDs to send the linear data, and the send size of the last BD is calculated incorrectly by the driver, which uses an '&' operation; this causes a TX error. Use a '%' operation to fix this problem.

Fixes: 3fe13ed9 ("net: hns3: avoid mult + div op in critical data path")
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
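The arithmetic is easy to demonstrate: 65535 is 0xFFFF, so 'len & 65535' actually computes len % 65536 and under-counts the last BD by one byte per already-full BD. A small userspace demo (illustrative, not driver code):

#include <stdio.h>

int main(void)
{
	unsigned int headlen = 70000;	/* needs two BDs of at most 65535 */

	/* buggy: '&' treats the limit as 65536 */
	printf("with '&': last BD size = %u\n", headlen & 65535);  /* 4464 */
	/* fixed: '%' uses the true max BD size of 65535 */
	printf("with '%%': last BD size = %u\n", headlen % 65535); /* 4465 */
	return 0;
}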
-
- 19 June 2020, 1 commit
-
-
Submitted by Barry Song

Change the type of the buffer address from unsigned char to void.

Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 29 May 2020, 1 commit
-
-
Submitted by Huazhong Tan

Remove some fields which are defined in struct hns3_nic_priv but not used, and remove the related definitions of struct hns3_udp_tunnel and enum hns3_udp_tnl_type.

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 May 2020, 1 commit
-
-
Submitted by Huazhong Tan

There are some macros defined in hns3_enet.h but not used anywhere, so remove them.

Reported-by: Yonglong Liu <liuyonglong@huawei.com>
Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-