- 26 August 2021, 6 commits
-
-
Submitted by Yufeng Mo
The GRO configuration is enabled by default after reset. This is incorrect; it should be restored to the user-configured value, so this restoration is added during reset initialization. Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Yufeng Mo
Currently, the cmd index is obtained in debugfs by comparing file names. However, this method may cause errors when processing more complex file names. So change this method: save the cmd in private data and compare it when getting the cmd index in debugfs. Fixes: 5e69ea7e ("net: hns3: refactor the debugfs process") Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Guojia Liao
A duplicate VLAN node should not be added to the VLAN list, otherwise it would cause an "add failed" error when restoring VLANs from the VLAN list, so this patch adds a VLAN ID check before adding a node to the VLAN list. Fixes: c6075b19 ("net: hns3: Record VF vlan tables") Signed-off-by: Guojia Liao <liaoguojia@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Yonglong Liu
In bond mode 4, when the link goes down and up repeatedly, the bond may get an unknown speed, and then this port cannot work. The driver calls netif_carrier_on() before updating the link state; when the bond receives the carrier-on event, it queries the speed of the port, and if the query happens before the link state is updated, it gets an unknown speed. So netif_carrier_on() needs to be called after updating the link state. Fixes: 46a3df9f ("net: hns3: Add HNS3 Acceleration Engine & Compatibility Layer Support") Fixes: e2cb1dec ("net: hns3: Add HNS3 VF HCL(Hardware Compatibility Layer) Support") Signed-off-by: Yonglong Liu <liuyonglong@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
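A hedged sketch of the ordering described above (the state structure and function names here are hypothetical, not the driver's actual ones): record the new link state first, and only then notify the stack, so a speed query triggered by the carrier-on event cannot observe a stale state.

```c
#include <linux/netdevice.h>

/* Hypothetical cached link state; the real driver keeps this elsewhere. */
struct link_status_example {
        bool link_up;
        u32 speed;
};

static void example_handle_link_up(struct net_device *ndev,
                                   struct link_status_example *st, u32 speed)
{
        st->speed = speed;
        st->link_up = true;     /* 1. update the cached link state first ... */
        netif_carrier_on(ndev); /* 2. ... then notify the networking stack */
}
```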
-
Submitted by Yufeng Mo
After the cmdq registers are cleared, the firmware may take time to clear out possible leftover commands in the cmdq. The driver must release cmdq memory only after the firmware has completed processing of the leftover commands. Fixes: 232d0d55 ("net: hns3: uninitialize command queue while unloading PF driver") Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Yufeng Mo
If a PF is bonded to a virtual machine and the virtual machine exits unexpectedly, some hardware resources cannot be cleared. In this case, loading the driver may cause exceptions. Therefore, the hardware resources need to be cleared when the driver is loaded. Fixes: 46a3df9f ("net: hns3: Add HNS3 Acceleration Engine & Compatibility Layer Support") Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 28 July 2021, 1 commit
-
-
Submitted by Yufeng Mo
The PTP cycle is related to the hardware, so using a fixed value in the driver may cause compatibility issues. Therefore, this value is now read from a register rather than hard-coded in the driver. Fixes: 0bf5eb78 ("net: hns3: add support for PTP") Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 20 July 2021, 4 commits
-
-
Submitted by Jian Shen
Currently, the VF doesn't enable rx VLAN offload when initializing; the PF does it for the VFs. If the user disables rx VLAN offload for the VF with ethtool -K and then reloads the VF driver, the rx VLAN offload state may become inconsistent between hardware and software. Fix it by enabling rx VLAN offload when the VF initializes. Fixes: e2cb1dec ("net: hns3: Add HNS3 VF HCL(Hardware Compatibility Layer) Support") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Jian Shen
Due to a hardware limitation, the port VLAN filter is port level and takes effect for all the functions of the port. So if port VLAN bypass is not supported, it's necessary to disable the port VLAN filter in order to support function-level VLAN filter control. Fixes: 2ba30662 ("net: hns3: add support for modify VLAN filter state") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Peng Li
When the VF needs a response from the PF, the VF waits (1us - 1s) to receive it; otherwise it times out and the VF action fails. If the VF does not receive the response to the 1st action because of a timeout, the 2nd action may receive the response intended for the 1st action and get incorrect response data. The VF must receive the right response from the PF, or unexpected errors occur. This patch adds a match_id to check the mailbox response from PF to VF, to make sure the VF gets the right response: 1. The message sent from the VF is labelled with a match_id, a unique 16-bit non-zero value. 2. The response sent from the PF is labelled with the match_id taken from the request. 3. The VF uses the match_id to match request and response messages. This scheme depends on the PF driver supporting match_id; if the PF driver doesn't support it, the VF uses the original scheme. Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Chengwen Feng
Currently, the mailbox synchronous communication between VF and PF uses the following fields to maintain communication: 1. origin_mbx_msg, which is combined from the message code and subcode and used to match request and response. 2. received_resp, which indicates whether a response has been received. A mismatch is possible in the following situation: 1. The VF sends message A with code=1 subcode=1. 2. The PF is blocked for about 500ms while processing message A. 3. The VF detects a timeout for message A because it can't get the response within 500ms. 4. The VF sends message B with code=1 subcode=1, identical to message A. 5. The PF processes the first message A and sends the response message to the VF. 6. The VF identifies the response as matching message B because the code/subcode is the same. This leads to a mismatch of request and response. To fix the above bug, the following scheme is used (a minimal sketch follows below): 1. The message sent from the VF is labelled with a match_id, a unique 16-bit non-zero value. 2. The response sent from the PF is labelled with the match_id taken from the request. 3. The VF uses the match_id to match request and response messages. As for the PF driver, it only needs to copy the match_id from request to response. Fixes: dde1a86e ("net: hns3: Add mailbox support to PF driver") Signed-off-by: Chengwen Feng <fengchengwen@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
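The sketch below illustrates the match_id scheme; the structure and function names are hypothetical and simplified, not the driver's actual mailbox definitions.

```c
#include <linux/types.h>

/* Hypothetical VF-to-PF request: match_id is a unique non-zero value. */
struct example_mbx_vf_to_pf {
        u8 code;
        u8 subcode;
        u16 match_id;
        u8 data[8];
};

/* Hypothetical PF-to-VF response: match_id is echoed from the request. */
struct example_mbx_pf_to_vf {
        u16 code;
        u16 match_id;
        u16 resp_status;
        u8 resp_data[8];
};

static bool example_resp_matches_req(const struct example_mbx_vf_to_pf *req,
                                     const struct example_mbx_pf_to_vf *resp)
{
        /* Two requests sharing code/subcode can no longer be confused:
         * only the response carrying the same non-zero match_id is accepted.
         */
        return req->match_id && req->match_id == resp->match_id;
}
```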
-
- 29 June 2021, 2 commits
-
-
Submitted by Jian Shen
This patch adds support for dumping the MAC UMV counters in debugfs, which is helpful for debugging. The display style is below:
$ cat umv_info
num_alloc_vport   : 2
max_umv_size      : 256
wanted_umv_size   : 256
priv_umv_size     : 85
share_umv_size    : 86
vport(0) used_umv_num : 1
vport(1) used_umv_num : 1
Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jian Shen
Previously, the flow director counter was not enabled. To make it easier to check whether the flow director hit or not, enable the flow director counter for each function, and add a debugfs interface to query the counters per function. The debugfs command is below:
cat fd_counter
func_id   hit_times
pf        0
vf0       0
vf1       0
Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 June 2021, 3 commits
-
-
Submitted by Christophe JAILLET
If this 'kzalloc()' fails, we must free some resources, as in all the other error handling paths of this function. Fixes: 2e2deee7 ("net: hns3: add the RAS compatibility adaptation solution") Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Jiaran Zhang <zhangjiaran@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Dan Carpenter
These flags are used to set and test bits like this: if (!test_bit(HCLGE_PTP_FLAG_TX_EN, &ptp->flags) || The issue is that test_bit() takes a bit number like 1, but we are passing BIT(1) instead, so it's effectively testing BIT(BIT(1)). This does not cause a problem because it is always done consistently and the bit values are very small. Fixes: 0bf5eb78 ("net: hns3: add support for PTP") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
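An illustrative sketch of the distinction (the flag names here are made up, not the driver's): test_bit() and set_bit() take a bit number, so such flags should be defined as plain numbers rather than BIT() masks.

```c
#include <linux/bitops.h>

/* Bit numbers, suitable for passing to test_bit()/set_bit() directly. */
enum example_ptp_flags {
        EXAMPLE_PTP_FLAG_EN = 0,
        EXAMPLE_PTP_FLAG_TX_EN = 1,
        EXAMPLE_PTP_FLAG_RX_EN = 2,
};

static bool example_tx_enabled(const unsigned long *flags)
{
        /* Passing BIT(1) here would silently test bit number 2 instead. */
        return test_bit(EXAMPLE_PTP_FLAG_TX_EN, flags);
}
```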
-
Submitted by Dan Carpenter
This patch doesn't affect runtime at all; it's just a correctness issue. The ptp->info.name[] buffer has 16 characters, but the snprintf() limit was capped at 32 characters. Fortunately, HCLGE_DRIVER_NAME is "hclge", which isn't close to 16 characters, so we're fine. Fixes: 0bf5eb78 ("net: hns3: add support for PTP") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
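A minimal sketch of the correct bound (the structure and helper names are illustrative): the size limit passed to snprintf() should be the destination buffer's size, not a larger unrelated constant.

```c
#include <linux/kernel.h>

/* Hypothetical holder for a 16-byte name, mirroring ptp->info.name[16]. */
struct example_ptp_info {
        char name[16];
};

static void example_set_ptp_name(struct example_ptp_info *info,
                                 const char *drv_name)
{
        /* Bounded by sizeof(info->name): a long name is truncated,
         * never written past the 16-byte buffer.
         */
        snprintf(info->name, sizeof(info->name), "%s", drv_name);
}
```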
-
- 19 June 2021, 1 commit
-
-
Submitted by Yunsheng Lin
In the current rx page reuse handling process, the rx page buffer may be contended between the driver and the stack in high-pressure scenarios. To fix this problem, check whether the page is only owned by the driver both at the beginning and at the end of a page, to make sure there is no reuse conflict between driver and stack when desc_cb->page_offset is rolled back to zero or increased. Fixes: fa7711b8 ("net: hns3: optimize the rx page reuse handling process") Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 16 June 2021, 7 commits
-
-
Submitted by Yunsheng Lin
Currently the rx page will be reused to receive a future packet when the stack releases the previous skb quickly. If the old page cannot be reused, a new page is allocated and mapped, which consumes a lot of CPU when the IOMMU is in strict mode, especially when the application and the irq/NAPI happen to run on the same CPU. So allocate a new frag and memcpy the data into it, to avoid the costly IOMMU unmapping/mapping operation, and add "frag_alloc_err" and "frag_alloc" stats to the "ethtool -S ethX" cmd. The throughput improves by above 50% when running a single thread of iperf using TCP when the IOMMU is in strict mode and iperf shares the same CPU with irq/NAPI (rx_copybreak = 2048 and mtu = 1500). Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yunsheng Lin
Currently the rx page offset is only reset to zero when all the conditions below are satisfied: 1. the rx page is only owned by the driver. 2. the rx page is reusable. 3. the page offset that is about to be given to the stack has reached the end of the page. If the page offset is over hns3_buf_size(), it means the buffer below the offset of the page is usable when conditions 1 & 2 above are satisfied, so the page offset can be reset to zero instead of increased. This way the first 4K buffer of a 64K page may always be reused, which keeps the hot buffer size as small as possible. The above optimization is a side effect of refactoring the rx page reuse handling in order to support rx copybreak. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yunsheng Lin
Using the queue-based tx buffer, it is also possible to allocate an sgl buffer and use skb_to_sgvec() to convert the skb to an sgvec, so that dma_map_sg() can be used to decrease the overhead of IOMMU mapping and unmapping. Firstly, it reduces the number of buffers. For example, a TCP skb may have a 66-byte header and 3 fragments of 4328, 32768, and 28064 bytes. With this patch, dma_map_sg() will combine them into two buffers, a 66-byte header and one 65160-byte fragment, by using the IOMMU. Secondly, it reduces the number of dma mapping and unmapping operations: all of the original 4 buffers are mapped only once rather than 4 times. The throughput improves by above 10% when running a single thread of iperf using TCP when the IOMMU is in strict mode. Suggested-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
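A hedged sketch (not the driver's exact code) of mapping a whole skb with a single dma_map_sg() call, under the assumption that the caller provides a scatterlist large enough for the skb's header and fragments:

```c
#include <linux/skbuff.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>

static int example_map_skb_sg(struct device *dev, struct sk_buff *skb,
                              struct scatterlist *sgl, int max_ents)
{
        int nents;

        /* Fill the scatterlist from the skb's linear header and page frags. */
        sg_init_table(sgl, max_ents);
        nents = skb_to_sgvec(skb, sgl, 0, skb->len);
        if (nents < 0)
                return nents;

        /* The IOMMU may merge adjacent entries, reducing map/unmap overhead.
         * Returns the number of mapped entries, or 0 on mapping failure.
         */
        return dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);
}
```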
-
Submitted by Huazhong Tan
Add support for querying the tx spare buffer size from the configuration file, and use this info to initialize the spare buffer when the module parameter 'tx_spare_buf_size' is not specified. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yunsheng Lin
When the packet or frag size is small, it causes both security and performance issues. As DMA can't map a sub-page, some extra kernel data is visible to devices. On the other hand, the overhead of DMA map and unmap is huge when the IOMMU is on. So add a queue-based shared tx bounce buffer and memcpy the small packet into it when the length of the transmitted skb is below tx_copybreak. Add the tx_spare_buf_size module parameter to set the size of the tx spare buffer, and add set/get_tunable to set or query tx_copybreak. The throughput improves from 30 Gbps to 90+ Gbps when running 16 netperf threads with a 32KB UDP message size when the IOMMU is in strict mode (tx_copybreak = 2000 and mtu = 1500). Suggested-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
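A conceptual sketch of the tx copybreak decision (all names here are illustrative, not the driver's real symbols, and wrap-around/space reclaim are omitted): small skbs are copied into a spare buffer that was DMA-mapped once at ring init, so the per-packet map/unmap is skipped.

```c
#include <linux/skbuff.h>
#include <linux/dma-mapping.h>

struct example_tx_spare {
        void *buf;              /* CPU address of the pre-mapped bounce buffer */
        dma_addr_t dma;         /* DMA address of the same buffer */
        unsigned int size;
        unsigned int next;      /* next free offset */
};

static dma_addr_t example_tx_copybreak(struct example_tx_spare *spare,
                                       struct sk_buff *skb,
                                       unsigned int tx_copybreak)
{
        dma_addr_t dma;

        if (skb->len > tx_copybreak || spare->next + skb->len > spare->size)
                return 0;       /* fall back to the usual dma_map_single() path */

        /* Copy the whole small skb into the pre-mapped spare buffer. */
        skb_copy_bits(skb, 0, (char *)spare->buf + spare->next, skb->len);
        dma = spare->dma + spare->next;
        spare->next += skb->len;
        return dma;
}
```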
-
Submitted by Yunsheng Lin
Factor out hns3_fill_desc() so that it can be reused in the tx bounce buffer support. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yunsheng Lin
desc_cb is used to store mapping and freeing info for the corresponding desc, which is used in the cleaning process. More desc_cb types will be added when supporting the tx bounce buffer, so change the desc_cb type to a bit-wise value in order to reduce the desc_cb type-checking operations in the data path. Also move the desc_cb type definition to hns3_enet.h because it is only used in hns3_enet.c, and declare a local variable desc_cb in hns3_clear_desc() to reduce lines of code. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 June 2021, 2 commits
-
-
Submitted by Huazhong Tan
Add a debugfs interface for dumping PTP information, which is helpful for debugging. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Huazhong Tan
Add PTP support for the HNS3 ethernet driver. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 10 June 2021, 2 commits
-
-
Submitted by Baokun Li
Use list_move_tail() instead of list_del() + list_add_tail() in hclge_main.c. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Baokun Li <libaokun1@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Baokun Li
Use list_move_tail() instead of list_del() + list_add_tail() in hclgevf_main.c. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Baokun Li <libaokun1@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
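A trivial illustration of the substitution made in these two patches (the function name is made up): list_move_tail() performs the delete-from-old-list and add-to-tail steps in one call.

```c
#include <linux/list.h>

static void example_move_node(struct list_head *node, struct list_head *dst)
{
        /* Equivalent to: list_del(node); list_add_tail(node, dst); */
        list_move_tail(node, dst);
}
```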
-
- 09 June 2021, 5 commits
-
-
Submitted by Jiaran Zhang
During initialization, the driver logs and clears the hw errors that have already occurred. For devices supporting the imp-handle RAS capability, a different error status needs to be handled, otherwise a wrong reset may be triggered. So fix it by adding a new processing branch. Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jiaran Zhang
Update the error recovery module and type for RoCE. The enumeration values of module names and error types are not sorted in sequence, so with the current printing mode they cannot be printed correctly. Use index mode instead: if mod_id and type_id match the enumerated value, display the corresponding information. Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com> Signed-off-by: Weihang Li <liweihang@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jiaran Zhang
The IMP (Intelligent Management Processor) firmware adds a new feature to handle and consolidate RAS information for new devices, so the NIC driver only needs to query the reported RAS information. The NIC driver adds support for this feature: it queries the device capability to check whether the IMP supports this feature, and if so, executes the new RAS processing branch. In order to provide a way to check whether the PF supports the imp-handle RAS feature, dump this info in debugfs. Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jiaran Zhang
To adapt to the hardware modification and ensure that the driver remains compatible with the original error handling, add a RAS compatibility adaptation solution: a new processing branch in the driver's error handling. In the new processing branch, NIC fault information is consolidated by the IMP. An interaction command is added between the driver and the IMP to query and clear the fault source and interrupt source. The IMP consolidates the error information and reports the highest reset level to the driver. Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Yufeng Mo
Currently, hardware errors can be reported through AER or MSI-X mode. However, the AER mode is intended to handle only bus errors, not hardware errors. Furthermore, virtual machines cannot handle AER errors: when an AER error is reported, the virtual machine is suspended. So add support for handling all these hardware errors through MSI-X mode, which depends on a newer version of the firmware, and retain the AER mode handler for compatibility. Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 08 June 2021, 3 commits
-
-
Submitted by Yufeng Mo
Earlier patches have decoupled the MSI-X conveyed error handling and recovery logic, so this earlier concept code is no longer required. Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jiaran Zhang
In the existing code, error handling & recovery is done in the context of the reset task, which gets scheduled from the misc interrupt handler. But since error handling has been moved to a new task, that task should be scheduled from the interrupt handler instead of the reset task. Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Jiaran Zhang
Error handling and recovery logic are intertwined: error handling (i.e. error identification, clearing error sources, and initiation of recovery) is done in the context of the reset task. If certain hardware errors get delivered during driver init time, driver init/loading can fail. Introduce a separate error handling task to ensure the following: 1. The reset logic remains independent of the error handling logic. 2. hclge_errhand_task_schedule() is added to schedule error recovery tasks; this ensures that the common miscellaneous MSI-X interrupts are re-enabled quickly. Signed-off-by: Jiaran Zhang <zhangjiaran@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 01 June 2021, 4 commits
-
-
Submitted by Jian Shen
Add debugfs support for VLAN configuration: create a single file "vlan_config", query it with the command "cat vlan_config", and return the result to userspace. The new display style is below:
$ cat vlan_config
I_PORT_VLAN_FILTER: on
E_PORT_VLAN_FILTER: off
FUNC_ID  I_VF_VLAN_FILTER  E_VF_VLAN_FILTER  PORT_VLAN_FILTER_BYPASS
pf       off               on                off
vf0      off               on                off
FUNC_ID  PVID  ACCEPT_TAG1  ACCEPT_TAG2  ACCEPT_UNTAG1  ACCEPT_UNTAG2
pf       0     on           on           on             on
vf0      0     on           on           on             on
Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Jian Shen
Previously, there was a hardware limitation preventing the VF from modifying the VLAN filter state, and the VLAN filter state was enabled by default. Now the limitation has been removed on some devices, so add a capability flag to check whether the device supports modifying the VLAN filter state. If the flag is on, the user will be able to modify the VLAN filter state with ethtool -K. The VF needs to send a mailbox message to request the PF to modify the VLAN filter state for it. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Jian Shen
Some features of the VF depend on the PF, so it's necessary for the VF to know whether the PF supports them. For compatibility, modify the mailbox HCLGE_MBX_GET_TCINFO and extend its function so it returns the basic information of the PF, including the mailbox API version and PF capabilities. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Submitted by Jian Shen
Previously, due to a hardware limitation, the port VLAN filter was effective for both the PF and its VFs simultaneously, so a single function was not able to enable/disable it separately, and the VLAN filter state was enabled by default. Now some devices support each function bypassing the port VLAN filter, so each function can switch the VLAN filter separately. Add a capability flag to check whether the device supports modifying the VLAN filter state. If the flag is on, the user will be able to modify the VLAN filter state with ethtool -K. Furthermore, the default VLAN filter state is also changed according to whether a non-zero VLAN is used, so the device can receive packets with any VLAN tag if only VLAN 0 is used. The function hclge_need_enable_vport_vlan_filter() is used to help implement the above changes, and the VLAN filter handling for promisc mode can also be simplified by this function. Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-