Unverified commit 1f7abdfd, authored by openeuler-ci-bot, committed by Gitee

!835 Add Huawei Intelligent Network Card Driver: hinic3

Merge Pull Request from: @aspiresky01 
 
The NIC driver supports the following features:
Supports IPv4/IPv6 TCP/UDP checksum offload, TSO (TCP Segmentation Offload), LRO (Large Receive Offload) and RSS (Receive Side Scaling).
Supports interrupt coalescing parameter configuration and adaptive interrupt moderation.
Supports 802.1Q VLAN (Virtual Local Area Network) offload and filtering.
Supports NIC SR-IOV (Single Root I/O Virtualization).
Supports PF promiscuous mode, unicast list filtering, multicast list filtering, and all-multicast mode.
Supports VF unicast list filtering, multicast list filtering, and all-multicast mode.
Supports VF QinQ mode.
Supports VF link state configuration and QoS configuration.
Supports VF MAC address management.
Supports VF spoof check (spoofchk).
Supports loopback testing.
Supports port LED lighting (port identification).
Supports Ethernet port auto-negotiation and pause frames.
 
Link: https://gitee.com/openeuler/kernel/pulls/835 

Reviewed-by: Zheng Zengkai <zhengzengkai@huawei.com> 
Reviewed-by: Chiqijun <chiqijun@huawei.com> 
Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com> 
Linux Kernel Driver for Huawei Intelligent NIC (HiNIC3) family
==============================================================
Overview:
=========
HiNIC3 is a network interface card for data center environments.
The driver supports a range of link-speed devices (10GbE, 25GbE, 40GbE, etc.).
The driver also supports a negotiated and extendable feature set.
Some HiNIC3 devices support SR-IOV. This driver is used for the Physical
Function (PF).
HiNIC3 devices support an MSI-X interrupt vector for each Tx/Rx queue and
adaptive interrupt moderation.
HiNIC3 devices also support various offload features such as checksum offload,
TCP Transmit Segmentation Offload (TSO), Receive-Side Scaling (RSS) and
Large Receive Offload (LRO).
Supported PCI vendor ID/device IDs:
===================================
19e5:1822 - HiNIC3 PF
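The sketch below shows how such an ID is typically matched in a driver's PCI ID
table; the macro and table names used here are illustrative only and are not
taken from the hinic3 sources::

    #define HINIC3_PCI_VENDOR_ID    0x19e5
    #define HINIC3_DEV_ID_PF        0x1822

    static const struct pci_device_id hinic3_pci_table[] = {
        { PCI_DEVICE(HINIC3_PCI_VENDOR_ID, HINIC3_DEV_ID_PF), 0 },
        { 0, 0 }
    };
    MODULE_DEVICE_TABLE(pci, hinic3_pci_table);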
Driver Architecture and Source Code:
====================================
hinic3_dev - Implements a logical network device that is independent of the
specific HW data structure formats.
hinic3_hwdev - Implements the HW details of the device and includes the
components for accessing the PCI NIC.
hinic3_hwdev contains the following components:
===============================================
HW Interface:
=============
The interface for accessing the pci device (DMA memory and PCI BARs).
(hinic3_hw_if.c, hinic3_hw_if.h)
Configuration Status Registers Area that describes the HW Registers on the
configuration and status BAR0. (hinic3_hw_csr.h)
MGMT components:
================
Asynchronous Event Queues(AEQs) - The event queues for receiving messages from
the MGMT modules on the cards. (hinic3_hw_eqs.c, hinic3_hw_eqs.h)
Application Programmable Interface commands(API CMD) - Interface for sending
MGMT commands to the card. (hinic3_hw_api_cmd.c, hinic3_hw_api_cmd.h)
Management (MGMT) - the PF to MGMT channel that uses API CMD for sending MGMT
commands to the card and receives notifications from the MGMT modules on the
card via AEQs. It also sets the addresses of the IO CMDQs in HW.
(hinic3_hw_mgmt.c, hinic3_hw_mgmt.h)
IO components:
==============
Completion Event Queues(CEQs) - The completion Event Queues that describe IO
tasks that are finished. (hinic3_hw_eqs.c, hinic3_hw_eqs.h)
Work Queues(WQ) - Contain the memory and operations for use by CMD queues and
the Queue Pairs. The WQ is a Memory Block in a Page. The Block contains
pointers to Memory Areas that are the Memory for the Work Queue Elements(WQEs).
(hinic3_hw_wq.c, hinic3_hw_wq.h)
Command Queues(CMDQ) - The queues for sending commands for IO management; they
are also used to set the QP addresses in HW. The command completion events are
accumulated on the CEQ that is configured to receive the CMDQ completion events.
(hinic3_hw_cmdq.c, hinic3_hw_cmdq.h)
Queue Pairs(QPs) - The HW Receive and Send queues for Receiving and Transmitting
Data. (hinic3_hw_qp.c, hinic3_hw_qp.h, hinic3_hw_qp_ctxt.h)
IO - de/constructs all the IO components. (hinic3_hw_io.c, hinic3_hw_io.h)
HW device:
==========
HW device - de/constructs the HW Interface, the MGMT components on the
initialization of the driver and the IO components on the case of Interface
UP/DOWN Events. (hinic3_hw_dev.c, hinic3_hw_dev.h)
hinic3_dev contains the following components:
===============================================
PCI ID table - Contains the supported PCI Vendor/Device IDs.
(hinic3_pci_tbl.h)
Port Commands - Send commands to the HW device for port management
(MAC, Vlan, MTU, ...). (hinic3_port.c, hinic3_port.h)
Tx Queues - Logical Tx Queues that use the HW Send Queues for transmit.
The Logical Tx queue is not dependent on the format of the HW Send Queue.
(hinic3_tx.c, hinic3_tx.h)
Rx Queues - Logical Rx Queues that use the HW Receive Queues for receive.
The Logical Rx queue is not dependent on the format of the HW Receive Queue.
(hinic3_rx.c, hinic3_rx.h)
hinic3_dev - de/constructs the Logical Tx and Rx Queues.
(hinic3_main.c, hinic3_dev.h)
Miscellaneous:
==============
Common functions that are used by HW and Logical Device.
(hinic3_common.c, hinic3_common.h)
Support
=======
If an issue is identified with the released source code on the supported kernel
with a supported adapter, email the specific information related to the issue to
wulike1@huawei.com.
@@ -8153,6 +8153,13 @@ S: Supported
F: Documentation/networking/device_drivers/ethernet/huawei/hinic.rst
F: drivers/net/ethernet/huawei/hinic/
HUAWEI ETHERNET DRIVER
M: Wulike(Collin) <wulike1@huawei.com>
L: netdev@vger.kernel.org
S: Supported
F: Documentation/networking/hinic3.txt
F: drivers/net/ethernet/huawei/hinic3/
HUGETLB FILESYSTEM
M: Mike Kravetz <mike.kravetz@oracle.com>
L: linux-mm@kvack.org
...
@@ -2757,6 +2757,7 @@ CONFIG_HNS3_HCLGEVF=m
CONFIG_HNS3_ENET=m
CONFIG_NET_VENDOR_HUAWEI=y
CONFIG_HINIC=m
CONFIG_HINIC3=m
CONFIG_BMA=m
# CONFIG_NET_VENDOR_I825XX is not set
CONFIG_NET_VENDOR_INTEL=y
...
@@ -2726,6 +2726,7 @@ CONFIG_NET_VENDOR_GOOGLE=y
# CONFIG_GVE is not set
CONFIG_NET_VENDOR_HUAWEI=y
CONFIG_HINIC=m
CONFIG_HINIC3=m
CONFIG_BMA=m
# CONFIG_NET_VENDOR_I825XX is not set
CONFIG_NET_VENDOR_INTEL=y
...
@@ -16,6 +16,7 @@ config NET_VENDOR_HUAWEI
if NET_VENDOR_HUAWEI
source "drivers/net/ethernet/huawei/hinic/Kconfig"
source "drivers/net/ethernet/huawei/hinic3/Kconfig"
source "drivers/net/ethernet/huawei/bma/Kconfig"
endif # NET_VENDOR_HUAWEI
@@ -4,4 +4,5 @@
#
obj-$(CONFIG_HINIC) += hinic/
obj-$(CONFIG_HINIC3) += hinic3/
obj-$(CONFIG_BMA) += bma/
# SPDX-License-Identifier: GPL-2.0-only
#
# Huawei driver configuration
#
config HINIC3
tristate "Huawei Intelligent Network Interface Card 3rd"
depends on PCI_MSI && NUMA && PCI_IOV && DCB && (X86 || ARM64)
help
This driver supports HiNIC3 PCIe Ethernet cards.
To compile this driver as part of the kernel, choose Y here.
If unsure, choose N.
The default is N.
# SPDX-License-Identifier: GPL-2.0-only
ccflags-y += -I$(srctree)/drivers/net/ethernet/huawei/hinic3/
obj-$(CONFIG_HINIC3) += hinic3.o
hinic3-objs := hw/hinic3_hwdev.o \
hw/hinic3_hw_cfg.o \
hw/hinic3_hw_comm.o \
hw/hinic3_prof_adap.o \
hw/hinic3_sriov.o \
hw/hinic3_lld.o \
hw/hinic3_dev_mgmt.o \
hw/hinic3_common.o \
hw/hinic3_hwif.o \
hw/hinic3_wq.o \
hw/hinic3_cmdq.o \
hw/hinic3_eqs.o \
hw/hinic3_mbox.o \
hw/hinic3_mgmt.o \
hw/hinic3_api_cmd.o \
hw/hinic3_hw_api.o \
hw/hinic3_sml_lt.o \
hw/hinic3_hw_mt.o \
hw/hinic3_nictool.o \
hw/hinic3_devlink.o \
hw/ossl_knl_linux.o \
hinic3_main.o \
hinic3_tx.o \
hinic3_rx.o \
hinic3_rss.o \
hinic3_ntuple.o \
hinic3_dcb.o \
hinic3_ethtool.o \
hinic3_ethtool_stats.o \
hinic3_dbg.o \
hinic3_irq.o \
hinic3_filter.o \
hinic3_netdev_ops.o \
hinic3_nic_prof.o \
hinic3_nic_cfg.o \
hinic3_mag_cfg.o \
hinic3_nic_cfg_vf.o \
hinic3_rss_cfg.o \
hinic3_nic_event.o \
hinic3_nic_io.o \
hinic3_nic_dbg.o
\ No newline at end of file
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) Huawei Technologies Co., Ltd. 2016-2022. All rights reserved.
* File name: Cfg_mgt_comm_pub.h
* Version No.: Draft
* Generation date: 2016-05-07
* Latest modification:
* Function description: Header file for communication between the Host and FW
* Function list:
* Modification history:
* 1. Date: 2016-05-07
* Modified content: Created file.
*/
#ifndef CFG_MGT_COMM_PUB_H
#define CFG_MGT_COMM_PUB_H
#include "mgmt_msg_base.h"
enum servic_bit_define {
SERVICE_BIT_NIC = 0,
SERVICE_BIT_ROCE = 1,
SERVICE_BIT_VBS = 2,
SERVICE_BIT_TOE = 3,
SERVICE_BIT_IPSEC = 4,
SERVICE_BIT_FC = 5,
SERVICE_BIT_VIRTIO = 6,
SERVICE_BIT_OVS = 7,
SERVICE_BIT_NVME = 8,
SERVICE_BIT_ROCEAA = 9,
SERVICE_BIT_CURRENET = 10,
SERVICE_BIT_PPA = 11,
SERVICE_BIT_MIGRATE = 12,
SERVICE_BIT_MAX
};
#define CFG_SERVICE_MASK_NIC (0x1 << SERVICE_BIT_NIC)
#define CFG_SERVICE_MASK_ROCE (0x1 << SERVICE_BIT_ROCE)
#define CFG_SERVICE_MASK_VBS (0x1 << SERVICE_BIT_VBS)
#define CFG_SERVICE_MASK_TOE (0x1 << SERVICE_BIT_TOE)
#define CFG_SERVICE_MASK_IPSEC (0x1 << SERVICE_BIT_IPSEC)
#define CFG_SERVICE_MASK_FC (0x1 << SERVICE_BIT_FC)
#define CFG_SERVICE_MASK_VIRTIO (0x1 << SERVICE_BIT_VIRTIO)
#define CFG_SERVICE_MASK_OVS (0x1 << SERVICE_BIT_OVS)
#define CFG_SERVICE_MASK_NVME (0x1 << SERVICE_BIT_NVME)
#define CFG_SERVICE_MASK_ROCEAA (0x1 << SERVICE_BIT_ROCEAA)
#define CFG_SERVICE_MASK_CURRENET (0x1 << SERVICE_BIT_CURRENET)
#define CFG_SERVICE_MASK_PPA (0x1 << SERVICE_BIT_PPA)
#define CFG_SERVICE_MASK_MIGRATE (0x1 << SERVICE_BIT_MIGRATE)
/* Definition of the scenario ID in the cfg_data, which is used for SML memory allocation. */
enum scenes_id_define {
SCENES_ID_FPGA_ETH = 0,
SCENES_ID_FPGA_TIOE = 1, /* Discarded */
SCENES_ID_STORAGE_ROCEAA_2x100 = 2,
SCENES_ID_STORAGE_ROCEAA_4x25 = 3,
SCENES_ID_CLOUD = 4,
SCENES_ID_FC = 5,
SCENES_ID_STORAGE_ROCE = 6,
SCENES_ID_COMPUTE_ROCE = 7,
SCENES_ID_STORAGE_TOE = 8,
SCENES_ID_MAX
};
/* struct cfg_cmd_dev_cap.sf_svc_attr */
enum {
SF_SVC_FT_BIT = (1 << 0),
SF_SVC_RDMA_BIT = (1 << 1),
};
enum cfg_cmd {
CFG_CMD_GET_DEV_CAP = 0,
CFG_CMD_GET_HOST_TIMER = 1,
};
struct cfg_cmd_host_timer {
struct mgmt_msg_head head;
u8 host_id;
u8 rsvd1;
u8 timer_pf_num;
u8 timer_pf_id_start;
u16 timer_vf_num;
u16 timer_vf_id_start;
u32 rsvd2[8];
};
struct cfg_cmd_dev_cap {
struct mgmt_msg_head head;
u16 func_id;
u16 rsvd1;
/* Public resources */
u8 host_id;
u8 ep_id;
u8 er_id;
u8 port_id;
u16 host_total_func;
u8 host_pf_num;
u8 pf_id_start;
u16 host_vf_num;
u16 vf_id_start;
u8 host_oq_id_mask_val;
u8 timer_en;
u8 host_valid_bitmap;
u8 rsvd_host;
u16 svc_cap_en;
u16 max_vf;
u8 flexq_en;
u8 valid_cos_bitmap;
/* Reserved for func_valid_cos_bitmap */
u8 port_cos_valid_bitmap;
u8 rsvd_func1;
u32 rsvd_func2;
u8 sf_svc_attr;
u8 func_sf_en;
u8 lb_mode;
u8 smf_pg;
u32 max_conn_num;
u16 max_stick2cache_num;
u16 max_bfilter_start_addr;
u16 bfilter_len;
u16 hash_bucket_num;
/* shared resource */
u8 host_sf_en;
u8 master_host_id;
u8 srv_multi_host_mode;
u8 virtio_vq_size;
u32 rsvd_func3[5];
/* l2nic */
u16 nic_max_sq_id;
u16 nic_max_rq_id;
u16 nic_default_num_queues;
u16 rsvd1_nic;
u32 rsvd2_nic[2];
/* RoCE */
u32 roce_max_qp;
u32 roce_max_cq;
u32 roce_max_srq;
u32 roce_max_mpt;
u32 roce_max_drc_qp;
u32 roce_cmtt_cl_start;
u32 roce_cmtt_cl_end;
u32 roce_cmtt_cl_size;
u32 roce_dmtt_cl_start;
u32 roce_dmtt_cl_end;
u32 roce_dmtt_cl_size;
u32 roce_wqe_cl_start;
u32 roce_wqe_cl_end;
u32 roce_wqe_cl_size;
u8 roce_srq_container_mode;
u8 rsvd_roce1[3];
u32 rsvd_roce2[5];
/* IPsec */
u32 ipsec_max_sactx;
u16 ipsec_max_cq;
u16 rsvd_ipsec1;
u32 rsvd_ipsec[2];
/* OVS */
u32 ovs_max_qpc;
u32 rsvd_ovs1[3];
/* ToE */
u32 toe_max_pctx;
u32 toe_max_cq;
u16 toe_max_srq;
u16 toe_srq_id_start;
u16 toe_max_mpt;
u16 toe_max_cctxt;
u32 rsvd_toe[2];
/* FC */
u32 fc_max_pctx;
u32 fc_max_scq;
u32 fc_max_srq;
u32 fc_max_cctx;
u32 fc_cctx_id_start;
u8 fc_vp_id_start;
u8 fc_vp_id_end;
u8 rsvd_fc1[2];
u32 rsvd_fc2[5];
/* VBS */
u16 vbs_max_volq;
u16 rsvd0_vbs;
u32 rsvd1_vbs[3];
u16 fake_vf_start_id;
u16 fake_vf_num;
u32 fake_vf_max_pctx;
u16 fake_vf_bfilter_start_addr;
u16 fake_vf_bfilter_len;
u32 rsvd_glb[8];
};
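/*
 * Illustrative helper (a sketch, not part of the interface): the svc_cap_en
 * word above is assumed to be a bitmap that is tested against the
 * CFG_SERVICE_MASK_* definitions, for example:
 */
static inline int cfg_dev_cap_nic_enabled(const struct cfg_cmd_dev_cap *cap)
{
	/* Non-zero when the NIC service bit is set for this function */
	return (cap->svc_cap_en & CFG_SERVICE_MASK_NIC) != 0;
}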
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/******************************************************************************
* Copyright (c) Huawei Technologies Co., Ltd. 2022. All rights reserved.
******************************************************************************
File Name : comm_cmdq_intf.h
Version : Initial Draft
Description : common command queue interface
Function List :
History :
Modification: Created file
******************************************************************************/
#ifndef COMM_CMDQ_INTF_H
#define COMM_CMDQ_INTF_H
/* Cmdq ack type */
enum hinic3_ack_type {
HINIC3_ACK_TYPE_CMDQ,
HINIC3_ACK_TYPE_SHARE_CQN,
HINIC3_ACK_TYPE_APP_CQN,
HINIC3_MOD_ACK_MAX = 15,
};
/* Defines the queue type of the set arm bit. */
enum {
SET_ARM_BIT_FOR_CMDQ = 0,
SET_ARM_BIT_FOR_L2NIC_SQ,
SET_ARM_BIT_FOR_L2NIC_RQ,
SET_ARM_BIT_TYPE_NUM
};
/* Defines the type. Each function supports a maximum of eight CMDQ types. */
enum {
CMDQ_0 = 0,
CMDQ_1 = 1, /* dedicated and non-blocking queues */
CMDQ_NUM
};
/* *******************cmd common command data structure ************************ */
// Func->ucode, which is used to set arm bit data,
// The microcode needs to perform big-endian conversion.
struct comm_info_ucode_set_arm_bit {
u32 q_type;
u32 q_id;
};
/* *******************WQE data structure ************************ */
union cmdq_wqe_cs_dw0 {
struct {
u32 err_status : 29;
u32 error_code : 2;
u32 rsvd : 1;
} bs;
u32 val;
};
union cmdq_wqe_cs_dw1 {
// This structure is used when the driver writes the wqe.
struct {
u32 token : 16; // [15:0]
u32 cmd : 8; // [23:16]
u32 mod : 5; // [28:24]
u32 ack_type : 2; // [30:29]
u32 obit : 1; // [31]
} drv_wr;
/* The uCode writes back the structure of the CS_DW1.
* The driver reads and uses the structure. */
struct {
u32 mod : 5; // [4:0]
u32 ack_type : 3; // [7:5]
u32 cmd : 8; // [15:8]
u32 arm : 1; // [16]
u32 rsvd : 14; // [30:17]
u32 obit : 1; // [31]
} wb;
u32 val;
};
/* CmdQ BD information or write back buffer information */
struct cmdq_sge {
u32 pa_h; // Upper 32 bits of the physical address
u32 pa_l; // Lower 32 bits of the physical address
u32 len; // Invalid bit[31].
u32 resv;
};
/* Ctrls section definition of WQE */
struct cmdq_wqe_ctrls {
union {
struct {
u32 bdsl : 8; // [7:0]
u32 drvsl : 2; // [9:8]
u32 rsv : 4; // [13:10]
u32 wf : 1; // [14]
u32 cf : 1; // [15]
u32 tsl : 5; // [20:16]
u32 va : 1; // [21]
u32 df : 1; // [22]
u32 cr : 1; // [23]
u32 difsl : 3; // [26:24]
u32 csl : 2; // [28:27]
u32 ctrlsl : 2; // [30:29]
u32 obit : 1; // [31]
} bs;
u32 val;
} header;
u32 qsf;
};
/* Complete section definition of WQE */
struct cmdq_wqe_cs {
union cmdq_wqe_cs_dw0 dw0;
union cmdq_wqe_cs_dw1 dw1;
union {
struct cmdq_sge sge;
u32 dw2_5[4];
} ack;
};
/* Inline header in WQE inline, describing the length of inline data */
union cmdq_wqe_inline_header {
struct {
u32 buf_len : 11; // [10:0] inline data len
u32 rsv : 21; // [31:11]
} bs;
u32 val;
};
/* Definition of buffer descriptor section in WQE */
union cmdq_wqe_bds {
struct {
struct cmdq_sge bds_sge;
u32 rsvd[4]; /* Used to transfer the virtual address of the buffer. */
} lcmd; /* Long command, non-inline; the SGE describes the buffer information. */
};
/* Definition of CMDQ WQE */
/* (long cmd, 64B)
* +----------------------------------------+
* | ctrl section(8B) |
* +----------------------------------------+
* | |
* | complete section(24B) |
* | |
* +----------------------------------------+
* | |
* | buffer descriptor section(16B) |
* | |
* +----------------------------------------+
* | driver section(16B) |
* +----------------------------------------+
*
*
* (middle cmd, 128B)
* +----------------------------------------+
* | ctrl section(8B) |
* +----------------------------------------+
* | |
* | complete section(24B) |
* | |
* +----------------------------------------+
* | |
* | buffer descriptor section(88B) |
* | |
* +----------------------------------------+
* | driver section(8B) |
* +----------------------------------------+
*
*
* (short cmd, 64B)
* +----------------------------------------+
* | ctrl section(8B) |
* +----------------------------------------+
* | |
* | complete section(24B) |
* | |
* +----------------------------------------+
* | |
* | buffer descriptor section(24B) |
* | |
* +----------------------------------------+
* | driver section(8B) |
* +----------------------------------------+
*/
struct cmdq_wqe {
struct cmdq_wqe_ctrls ctrls;
struct cmdq_wqe_cs cs;
union cmdq_wqe_bds bds;
};
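/*
 * Illustrative consistency check (a sketch, not driver code): per the layout
 * drawing above, the long-command WQE is 8B ctrl section + 24B complete
 * section + 16B buffer descriptor section + 16B driver section = 64B.
 * A build-time assertion of the shared prefix could look like:
 *
 *	static_assert(sizeof(struct cmdq_wqe_ctrls) + sizeof(struct cmdq_wqe_cs) == 32);
 */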
/* Definition of ctrls section in inline WQE */
struct cmdq_wqe_ctrls_inline {
union {
struct {
u32 bdsl : 8; // [7:0]
u32 drvsl : 2; // [9:8]
u32 rsv : 4; // [13:10]
u32 wf : 1; // [14]
u32 cf : 1; // [15]
u32 tsl : 5; // [20:16]
u32 va : 1; // [21]
u32 df : 1; // [22]
u32 cr : 1; // [23]
u32 difsl : 3; // [26:24]
u32 csl : 2; // [28:27]
u32 ctrlsl : 2; // [30:29]
u32 obit : 1; // [31]
} bs;
u32 val;
} header;
u32 qsf;
u64 db;
};
/* Buffer descriptor section definition of WQE */
union cmdq_wqe_bds_inline {
struct {
union cmdq_wqe_inline_header header;
u32 rsvd;
u8 data_inline[80];
} mcmd; /* Middle command, inline mode */
struct {
union cmdq_wqe_inline_header header;
u32 rsvd;
u8 data_inline[16];
} scmd; /* Short command, inline mode */
};
struct cmdq_wqe_inline {
struct cmdq_wqe_ctrls_inline ctrls;
struct cmdq_wqe_cs cs;
union cmdq_wqe_bds_inline bds;
};
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) Huawei Technologies Co., Ltd. 2021-2022. All rights reserved.
* File Name : comm_defs.h
* Version : Initial Draft
* Description : common definitions
* Function List :
* History :
* Modification: Created file
*/
#ifndef COMM_DEFS_H
#define COMM_DEFS_H
/* CMDQ MODULE_TYPE */
enum hinic3_mod_type {
HINIC3_MOD_COMM = 0, /* HW communication module */
HINIC3_MOD_L2NIC = 1, /* L2NIC module */
HINIC3_MOD_ROCE = 2,
HINIC3_MOD_PLOG = 3,
HINIC3_MOD_TOE = 4,
HINIC3_MOD_FLR = 5,
HINIC3_MOD_RSVD1 = 6,
HINIC3_MOD_CFGM = 7, /* Configuration module */
HINIC3_MOD_CQM = 8,
HINIC3_MOD_RSVD2 = 9,
COMM_MOD_FC = 10,
HINIC3_MOD_OVS = 11,
HINIC3_MOD_DSW = 12,
HINIC3_MOD_MIGRATE = 13,
HINIC3_MOD_HILINK = 14,
HINIC3_MOD_CRYPT = 15, /* secure crypto module */
HINIC3_MOD_VIO = 16,
HINIC3_MOD_IMU = 17,
HINIC3_MOD_DFT = 18, /* DFT */
HINIC3_MOD_HW_MAX = 19, /* hardware max module id */
/* Software module id, for PF/VF and multi-host */
HINIC3_MOD_SW_FUNC = 20,
HINIC3_MOD_MAX,
};
/* Flags for func reset, used to indicate which resources to clean up */
enum func_reset_flag {
RES_TYPE_FLUSH_BIT = 0,
RES_TYPE_MQM,
RES_TYPE_SMF,
RES_TYPE_PF_BW_CFG,
RES_TYPE_COMM = 10,
RES_TYPE_COMM_MGMT_CH, /* clear mbox and aeq; the RES_TYPE_COMM bit must also be set */
RES_TYPE_COMM_CMD_CH, /* clear cmdq and ceq; the RES_TYPE_COMM bit must also be set */
RES_TYPE_NIC,
RES_TYPE_OVS,
RES_TYPE_VBS,
RES_TYPE_ROCE,
RES_TYPE_FC,
RES_TYPE_TOE,
RES_TYPE_IPSEC,
RES_TYPE_MAX,
};
#define HINIC3_COMM_RES \
((1 << RES_TYPE_COMM) | (1 << RES_TYPE_COMM_CMD_CH) | \
(1 << RES_TYPE_FLUSH_BIT) | (1 << RES_TYPE_MQM) | \
(1 << RES_TYPE_SMF) | (1 << RES_TYPE_PF_BW_CFG))
#define HINIC3_NIC_RES (1 << RES_TYPE_NIC)
#define HINIC3_OVS_RES (1 << RES_TYPE_OVS)
#define HINIC3_VBS_RES (1 << RES_TYPE_VBS)
#define HINIC3_ROCE_RES (1 << RES_TYPE_ROCE)
#define HINIC3_FC_RES (1 << RES_TYPE_FC)
#define HINIC3_TOE_RES (1 << RES_TYPE_TOE)
#define HINIC3_IPSEC_RES (1 << RES_TYPE_IPSEC)
/* Work modes: OVS, NIC, UNKNOWN */
#define HINIC3_WORK_MODE_OVS 0
#define HINIC3_WORK_MODE_UNKNOWN 1
#define HINIC3_WORK_MODE_NIC 2
#define DEVICE_TYPE_L2NIC 0
#define DEVICE_TYPE_NVME 1
#define DEVICE_TYPE_VIRTIO_NET 2
#define DEVICE_TYPE_VIRTIO_BLK 3
#define DEVICE_TYPE_VIRTIO_VSOCK 4
#define DEVICE_TYPE_VIRTIO_NET_TRANSITION 5
#define DEVICE_TYPE_VIRTIO_BLK_TRANSITION 6
#define DEVICE_TYPE_VIRTIO_SCSI_TRANSITION 7
#define DEVICE_TYPE_VIRTIO_HPC 8
#define IS_STORAGE_DEVICE_TYPE(dev_type) \
((dev_type) == DEVICE_TYPE_VIRTIO_BLK || \
(dev_type) == DEVICE_TYPE_VIRTIO_BLK_TRANSITION || \
(dev_type) == DEVICE_TYPE_VIRTIO_SCSI_TRANSITION)
/* Common header control information of the COMM message
* interaction command word between the driver and PF
*/
struct comm_info_head {
u8 status;
u8 version;
u8 rep_aeq_num;
u8 rsvd[5];
};
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) Huawei Technologies Co., Ltd. 2021-2022. All rights reserved.
* File Name : comm_msg_intf.h
* Version : Initial Draft
* Created : 2021/6/28
* Last Modified :
* Description : COMM Command interfaces between Driver and MPU
* Function List :
*/
#ifndef COMM_MSG_INTF_H
#define COMM_MSG_INTF_H
#include "comm_defs.h"
#include "mgmt_msg_base.h"
/* Upper bound of func_reset_flag */
#define FUNC_RESET_FLAG_MAX_VALUE ((1U << (RES_TYPE_MAX + 1)) - 1)
struct comm_cmd_func_reset {
struct mgmt_msg_head head;
u16 func_id;
u16 rsvd1[3];
u64 reset_flag;
};
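/*
 * Illustrative sketch (not driver code): how a caller might populate this
 * command, combining the RES_TYPE_* bits from comm_defs.h and clamping to the
 * documented upper bound. The message head is left to the sending path.
 */
static inline void comm_cmd_func_reset_fill(struct comm_cmd_func_reset *cmd,
					    u16 func_id, u64 reset_flag)
{
	cmd->func_id = func_id;
	cmd->reset_flag = reset_flag & FUNC_RESET_FLAG_MAX_VALUE;
}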
struct comm_cmd_ppf_flr_type_set {
struct mgmt_msg_head head;
u16 func_id;
u8 rsvd1[2];
u32 ppf_flr_type;
};
enum {
COMM_F_API_CHAIN = 1U << 0,
COMM_F_CLP = 1U << 1,
COMM_F_CHANNEL_DETECT = 1U << 2,
COMM_F_MBOX_SEGMENT = 1U << 3,
COMM_F_CMDQ_NUM = 1U << 4,
COMM_F_VIRTIO_VQ_SIZE = 1U << 5,
};
#define COMM_MAX_FEATURE_QWORD 4
struct comm_cmd_feature_nego {
struct mgmt_msg_head head;
u16 func_id;
u8 opcode; /* 1: set, 0: get */
u8 rsvd;
u64 s_feature[COMM_MAX_FEATURE_QWORD];
};
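/*
 * Illustrative sketch (not part of the interface): the COMM_F_* feature bits
 * above are assumed to be carried in s_feature[0] of the negotiation reply,
 * so a negotiated capability could be tested like this.
 */
static inline int comm_feature_negotiated(const struct comm_cmd_feature_nego *nego,
					  u64 feature_bit)
{
	return (nego->s_feature[0] & feature_bit) != 0;
}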
struct comm_cmd_clear_doorbell {
struct mgmt_msg_head head;
u16 func_id;
u16 rsvd1[3];
};
struct comm_cmd_clear_resource {
struct mgmt_msg_head head;
u16 func_id;
u16 rsvd1[3];
};
struct comm_global_attr {
u8 max_host_num;
u8 max_pf_num;
u16 vf_id_start;
u8 mgmt_host_node_id; /* for api cmd to mgmt cpu */
u8 cmdq_num;
u8 rsvd1[2];
u32 rsvd2[8];
};
struct spu_cmd_freq_operation {
struct comm_info_head head;
u8 op_code; /* 0: get 1: set 2: check */
u8 rsvd[3];
u32 freq;
};
struct spu_cmd_power_operation {
struct comm_info_head head;
u8 op_code; /* 0: get 1: set 2: init */
u8 slave_addr;
u8 cmd_id;
u8 size;
u32 value;
};
struct spu_cmd_tsensor_operation {
struct comm_info_head head;
u8 op_code;
u8 rsvd[3];
s16 fabric_tsensor_temp_avg;
s16 fabric_tsensor_temp;
s16 sys_tsensor_temp_avg;
s16 sys_tsensor_temp;
};
struct comm_cmd_heart_event {
struct mgmt_msg_head head;
u8 init_sta; /* 0: mpu init ok, 1: mpu init error. */
u8 rsvd1[3];
u32 heart; /* add one by one */
u32 heart_handshake; /* should always be 0x5A5A5A5A */
};
struct comm_cmd_channel_detect {
struct mgmt_msg_head head;
u16 func_id;
u16 rsvd1[3];
u32 rsvd2[2];
};
enum hinic3_svc_type {
SVC_T_COMM = 0,
SVC_T_NIC,
SVC_T_OVS,
SVC_T_ROCE,
SVC_T_TOE,
SVC_T_IOE,
SVC_T_FC,
SVC_T_VBS,
SVC_T_IPSEC,
SVC_T_VIRTIO,
SVC_T_MIGRATE,
SVC_T_PPA,
SVC_T_MAX,
};
struct comm_cmd_func_svc_used_state {
struct mgmt_msg_head head;
u16 func_id;
u16 svc_type;
u8 used_state;
u8 rsvd[35];
};
#define TABLE_INDEX_MAX 129
struct sml_table_id_info {
u8 node_id;
u8 instance_id;
};
struct comm_cmd_get_sml_tbl_data {
struct comm_info_head head; /* 8B */
u8 tbl_data[512];
};
struct comm_cmd_get_glb_attr {
struct mgmt_msg_head head;
struct comm_global_attr attr;
};
enum hinic3_fw_ver_type {
HINIC3_FW_VER_TYPE_BOOT,
HINIC3_FW_VER_TYPE_MPU,
HINIC3_FW_VER_TYPE_NPU,
HINIC3_FW_VER_TYPE_SMU_L0,
HINIC3_FW_VER_TYPE_SMU_L1,
HINIC3_FW_VER_TYPE_CFG,
};
#define HINIC3_FW_VERSION_LEN 16
#define HINIC3_FW_COMPILE_TIME_LEN 20
struct comm_cmd_get_fw_version {
struct mgmt_msg_head head;
u16 fw_type;
u16 rsvd1;
u8 ver[HINIC3_FW_VERSION_LEN];
u8 time[HINIC3_FW_COMPILE_TIME_LEN];
};
/* hardware define: cmdq context */
struct cmdq_ctxt_info {
u64 curr_wqe_page_pfn;
u64 wq_block_pfn;
};
struct comm_cmd_cmdq_ctxt {
struct mgmt_msg_head head;
u16 func_id;
u8 cmdq_id;
u8 rsvd1[5];
struct cmdq_ctxt_info ctxt;
};
struct comm_cmd_root_ctxt {
struct mgmt_msg_head head;
u16 func_id;
u8 set_cmdq_depth;
u8 cmdq_depth;
u16 rx_buf_sz;
u8 lro_en;
u8 rsvd1;
u16 sq_depth;
u16 rq_depth;
u64 rsvd2;
};
struct comm_cmd_wq_page_size {
struct mgmt_msg_head head;
u16 func_id;
u8 opcode;
/* real_size = 4KB * 2^page_size; the range (0-20) must be checked by the driver */
u8 page_size;
u32 rsvd1;
};
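/*
 * Illustrative encoding helper (a sketch, not driver code): per the comment
 * above, real_size = 4KB * 2^page_size, so the encoded field is the base-2
 * exponent of (real_size / 4KB), bounded to the documented 0-20 range.
 */
static inline u8 wq_page_size_encode(u32 real_size)
{
	u8 encoded = 0;

	while (encoded < 20 && (4096U << encoded) < real_size)
		encoded++;
	return encoded;
}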
struct comm_cmd_msix_config {
struct mgmt_msg_head head;
u16 func_id;
u8 opcode;
u8 rsvd1;
u16 msix_index;
u8 pending_cnt;
u8 coalesce_timer_cnt;
u8 resend_timer_cnt;
u8 lli_timer_cnt;
u8 lli_credit_cnt;
u8 rsvd2[5];
};
enum cfg_msix_operation {
CFG_MSIX_OPERATION_FREE = 0,
CFG_MSIX_OPERATION_ALLOC = 1,
};
struct comm_cmd_cfg_msix_num {
struct comm_info_head head; /* 8B */
u16 func_id;
u8 op_code; /* 1: alloc 0: free */
u8 rsvd0;
u16 msix_num;
u16 rsvd1;
};
struct comm_cmd_dma_attr_config {
struct mgmt_msg_head head;
u16 func_id;
u8 entry_idx;
u8 st;
u8 at;
u8 ph;
u8 no_snooping;
u8 tph_en;
u32 resv1;
};
struct comm_cmd_ceq_ctrl_reg {
struct mgmt_msg_head head;
u16 func_id;
u16 q_id;
u32 ctrl0;
u32 ctrl1;
u32 rsvd1;
};
struct comm_cmd_func_tmr_bitmap_op {
struct mgmt_msg_head head;
u16 func_id;
u8 opcode; /* 1: start, 0: stop */
u8 rsvd1[5];
};
struct comm_cmd_ppf_tmr_op {
struct mgmt_msg_head head;
u8 ppf_id;
u8 opcode; /* 1: start, 0: stop */
u8 rsvd1[6];
};
struct comm_cmd_ht_gpa {
struct mgmt_msg_head head;
u8 host_id;
u8 rsvd0[3];
u32 rsvd1[7];
u64 page_pa0;
u64 page_pa1;
};
struct comm_cmd_get_eqm_num {
struct mgmt_msg_head head;
u8 host_id;
u8 rsvd1[3];
u32 chunk_num;
u32 search_gpa_num;
};
struct comm_cmd_eqm_cfg {
struct mgmt_msg_head head;
u8 host_id;
u8 valid;
u16 rsvd1;
u32 page_size;
u32 rsvd2;
};
struct comm_cmd_eqm_search_gpa {
struct mgmt_msg_head head;
u8 host_id;
u8 rsvd1[3];
u32 start_idx;
u32 num;
u32 rsvd2;
u64 gpa_hi52[0]; /*lint !e1501*/
};
struct comm_cmd_ffm_info {
struct mgmt_msg_head head;
u8 node_id;
/* error level of the interrupt source */
u8 err_level;
/* Classification by interrupt source properties */
u16 err_type;
u32 err_csr_addr;
u32 err_csr_value;
u32 rsvd1;
};
#define HARDWARE_ID_1XX3V100_TAG 31 /* 1xx3v100 tag */
struct hinic3_board_info {
u8 board_type;
u8 port_num;
u8 port_speed;
u8 pcie_width;
u8 host_num;
u8 pf_num;
u16 vf_total_num;
u8 tile_num;
u8 qcm_num;
u8 core_num;
u8 work_mode;
u8 service_mode;
u8 pcie_mode;
u8 boot_sel;
u8 board_id;
u32 cfg_addr;
u32 service_en_bitmap;
u8 scenes_id;
u8 cfg_template_id;
u8 hardware_id;
u8 spu_en;
u16 pf_vendor_id;
u8 tile_bitmap;
u8 sm_bitmap;
};
struct comm_cmd_board_info {
struct mgmt_msg_head head;
struct hinic3_board_info info;
u32 rsvd[22];
};
struct comm_cmd_sync_time {
struct mgmt_msg_head head;
u64 mstime;
u64 rsvd1;
};
struct comm_cmd_sdi_info {
struct mgmt_msg_head head;
u32 cfg_sdi_mode;
};
/* func flr set */
struct comm_cmd_func_flr_set {
struct mgmt_msg_head head;
u16 func_id;
u8 type; /* 1: close (set flush) */
u8 isall; /* whether to operate on all VFs under the corresponding PF; 1: all VFs */
u32 rsvd;
};
struct comm_cmd_bdf_info {
struct mgmt_msg_head head;
u16 function_idx;
u8 rsvd1[2];
u8 bus;
u8 device;
u8 function;
u8 rsvd2[5];
};
struct hw_pf_info {
u16 glb_func_idx;
u16 glb_pf_vf_offset;
u8 p2p_idx;
u8 itf_idx;
u16 max_vfs;
u16 max_queue_num;
u16 vf_max_queue_num;
u16 port_id;
u16 rsvd0;
u32 pf_service_en_bitmap;
u32 vf_service_en_bitmap;
u16 rsvd1[2];
u8 device_type;
u8 bus_num; /* tl_cfg_bus_num */
u16 vf_stride; /* VF_RID_SETTING.vf_stride */
u16 vf_offset; /* VF_RID_SETTING.vf_offset */
u8 rsvd[2];
};
#define CMD_MAX_MAX_PF_NUM 32
struct hinic3_hw_pf_infos {
u8 num_pfs;
u8 rsvd1[3];
struct hw_pf_info infos[CMD_MAX_MAX_PF_NUM];
};
struct comm_cmd_hw_pf_infos {
struct mgmt_msg_head head;
struct hinic3_hw_pf_infos infos;
};
#define DD_CFG_TEMPLATE_MAX_IDX 12
#define DD_CFG_TEMPLATE_MAX_TXT_LEN 64
#define CFG_TEMPLATE_OP_QUERY 0
#define CFG_TEMPLATE_OP_SET 1
#define CFG_TEMPLATE_SET_MODE_BY_IDX 0
#define CFG_TEMPLATE_SET_MODE_BY_NAME 1
struct comm_cmd_cfg_template {
struct mgmt_msg_head head;
u8 opt_type; /* 0: query 1: set */
u8 set_mode; /* 0-index mode. 1-name mode. */
u8 tp_err;
u8 rsvd0;
u8 cur_index; /* Current cfg template index. */
u8 cur_max_index; /* Max supported cfg template index. */
u8 rsvd1[2];
u8 cur_name[DD_CFG_TEMPLATE_MAX_TXT_LEN];
u8 cur_cfg_temp_info[DD_CFG_TEMPLATE_MAX_IDX][DD_CFG_TEMPLATE_MAX_TXT_LEN];
u8 next_index; /* Next reset cfg template index. */
u8 next_max_index; /* Max supported cfg template index. */
u8 rsvd2[2];
u8 next_name[DD_CFG_TEMPLATE_MAX_TXT_LEN];
u8 next_cfg_temp_info[DD_CFG_TEMPLATE_MAX_IDX][DD_CFG_TEMPLATE_MAX_TXT_LEN];
};
#define MQM_SUPPORT_COS_NUM 8
#define MQM_INVALID_WEIGHT 256
#define MQM_LIMIT_SET_FLAG_READ 0
#define MQM_LIMIT_SET_FLAG_WRITE 1
struct comm_cmd_set_mqm_limit {
struct mgmt_msg_head head;
u16 set_flag; /* Set this flag to perform a write */
u16 func_id;
/* Weight of each cos_id, 0-255; 0 means SP scheduling. */
u16 cos_weight[MQM_SUPPORT_COS_NUM];
u32 host_min_rate; /* Minimum rate limit supported by this host */
u32 func_min_rate; /* Minimum rate limit of this function, in Mbps */
u32 func_max_rate; /* Maximum rate limit of this function, in Mbps */
u8 rsvd[64]; /* Reserved */
};
#define DUMP_16B_PER_LINE 16
#define DUMP_8_VAR_PER_LINE 8
#define DUMP_4_VAR_PER_LINE 4
#define DATA_LEN_1K 1024
/* Software watchdog timeout report interface */
struct comm_info_sw_watchdog {
struct comm_info_head head;
/* Global information */
u32 curr_time_h; /* Time when the dead loop occurred, in cycles (high 32 bits) */
u32 curr_time_l; /* Time when the dead loop occurred, in cycles (low 32 bits) */
u32 task_id; /* Task in which the dead loop occurred */
u32 rsv; /* Reserved field for extension */
/* Register information, TSK_CONTEXT_S */
u64 pc;
u64 elr;
u64 spsr;
u64 far;
u64 esr;
u64 xzr;
u64 x30;
u64 x29;
u64 x28;
u64 x27;
u64 x26;
u64 x25;
u64 x24;
u64 x23;
u64 x22;
u64 x21;
u64 x20;
u64 x19;
u64 x18;
u64 x17;
u64 x16;
u64 x15;
u64 x14;
u64 x13;
u64 x12;
u64 x11;
u64 x10;
u64 x09;
u64 x08;
u64 x07;
u64 x06;
u64 x05;
u64 x04;
u64 x03;
u64 x02;
u64 x01;
u64 x00;
/* Stack control information, STACK_INFO_S */
u64 stack_top; /* Stack top */
u64 stack_bottom; /* Stack bottom */
u64 sp; /* Current SP value of the stack */
u32 curr_used; /* Current stack usage */
u32 peak_used; /* Peak stack usage */
u32 is_overflow; /* Whether the stack overflowed */
/* Stack contents */
u32 stack_actlen; /* Actual stack length (<= 1024) */
u8 stack_data[DATA_LEN_1K]; /* Data beyond 1024 bytes is truncated */
};
/* Last-word (crash dump) information */
#define XREGS_NUM 31
struct tag_cpu_tick {
u32 cnt_hi; /**< High 32 bits of the cycle count */
u32 cnt_lo; /**< Low 32 bits of the cycle count */
};
struct tag_ax_exc_reg_info {
u64 ttbr0;
u64 ttbr1;
u64 tcr;
u64 mair;
u64 sctlr;
u64 vbar;
u64 current_el;
u64 sp;
/* The memory layout of the following fields is consistent with TskContext */
u64 elr; /* Return address */
u64 spsr;
u64 far_r;
u64 esr;
u64 xzr;
u64 xregs[XREGS_NUM]; /* 0~30: x30~x0 */
};
struct tag_exc_info {
char os_ver[48]; /**< OS version */
char app_ver[64]; /**< Product version */
u32 exc_cause; /**< Exception cause */
u32 thread_type; /**< Thread type before the exception */
u32 thread_id; /**< Thread PID before the exception */
u16 byte_order; /**< Byte order */
u16 cpu_type; /**< CPU type */
u32 cpu_id; /**< CPU ID */
struct tag_cpu_tick cpu_tick; /**< CPU tick */
u32 nest_cnt; /**< Exception nesting count */
u32 fatal_errno; /**< Fatal error code, valid when a fatal error occurs */
u64 uw_sp; /**< Stack pointer before the exception */
u64 stack_bottom; /**< Stack bottom before the exception */
/* In-core register context at the time of the exception; it must be located
 * at byte offset 152. If this changes, the OS_EXC_REGINFO_OFFSET macro in
 * sre_platform.eh must be updated accordingly.
 */
struct tag_ax_exc_reg_info reg_info;
};
/* Interface of the MPU (up) last-word module reported to the driver */
#define MPU_LASTWORD_SIZE 1024
struct tag_comm_info_up_lastword {
struct comm_info_head head;
struct tag_exc_info stack_info;
/* Stack contents */
u32 stack_actlen; /* Actual stack length (<= 1024) */
u8 stack_data[MPU_LASTWORD_SIZE]; /* Data beyond 1024 bytes is truncated */
};
#define FW_UPDATE_MGMT_TIMEOUT 3000000U
struct hinic3_cmd_update_firmware {
struct mgmt_msg_head msg_head;
struct {
u32 sl : 1;
u32 sf : 1;
u32 flag : 1;
u32 bit_signed : 1;
u32 reserved : 12;
u32 fragment_len : 16;
} ctl_info;
struct {
u32 section_crc;
u32 section_type;
} section_info;
u32 total_len;
u32 section_len;
u32 section_version;
u32 section_offset;
u32 data[384];
};
struct hinic3_cmd_activate_firmware {
struct mgmt_msg_head msg_head;
u8 index; /* 0 ~ 7 */
u8 data[7];
};
struct hinic3_cmd_switch_config {
struct mgmt_msg_head msg_head;
u8 index; /* 0 ~ 7 */
u8 data[7];
};
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) Huawei Technologies Co., Ltd. 2019-2022. All rights reserved.
* File Name : hinic3_comm_cmd.h
* Version : Initial Draft
* Created : 2019/4/25
* Last Modified :
* Description : COMM Commands between Driver and MPU
* Function List :
*/
#ifndef HINIC3_COMMON_CMD_H
#define HINIC3_COMMON_CMD_H
/* COMM Commands between Driver to MPU */
enum hinic3_mgmt_cmd {
/* FLR and resource cleanup related commands */
COMM_MGMT_CMD_FUNC_RESET = 0,
COMM_MGMT_CMD_FEATURE_NEGO,
COMM_MGMT_CMD_FLUSH_DOORBELL,
COMM_MGMT_CMD_START_FLUSH,
COMM_MGMT_CMD_SET_FUNC_FLR,
COMM_MGMT_CMD_GET_GLOBAL_ATTR,
COMM_MGMT_CMD_SET_PPF_FLR_TYPE,
COMM_MGMT_CMD_SET_FUNC_SVC_USED_STATE,
/* Allocate MSI-X interrupt resources */
COMM_MGMT_CMD_CFG_MSIX_NUM = 10,
/* Driver-related configuration commands */
COMM_MGMT_CMD_SET_CMDQ_CTXT = 20,
COMM_MGMT_CMD_SET_VAT,
COMM_MGMT_CMD_CFG_PAGESIZE,
COMM_MGMT_CMD_CFG_MSIX_CTRL_REG,
COMM_MGMT_CMD_SET_CEQ_CTRL_REG,
COMM_MGMT_CMD_SET_DMA_ATTR,
/* INFRA configuration related commands */
COMM_MGMT_CMD_GET_MQM_FIX_INFO = 40,
COMM_MGMT_CMD_SET_MQM_CFG_INFO,
COMM_MGMT_CMD_SET_MQM_SRCH_GPA,
COMM_MGMT_CMD_SET_PPF_TMR,
COMM_MGMT_CMD_SET_PPF_HT_GPA,
COMM_MGMT_CMD_SET_FUNC_TMR_BITMAT,
COMM_MGMT_CMD_SET_MBX_CRDT,
COMM_MGMT_CMD_CFG_TEMPLATE,
COMM_MGMT_CMD_SET_MQM_LIMIT,
/* Information query commands */
COMM_MGMT_CMD_GET_FW_VERSION = 60,
COMM_MGMT_CMD_GET_BOARD_INFO,
COMM_MGMT_CMD_SYNC_TIME,
COMM_MGMT_CMD_GET_HW_PF_INFOS,
COMM_MGMT_CMD_SEND_BDF_INFO,
COMM_MGMT_CMD_GET_VIRTIO_BDF_INFO,
COMM_MGMT_CMD_GET_SML_TABLE_INFO,
COMM_MGMT_CMD_GET_SDI_INFO,
/* Firmware upgrade related commands */
COMM_MGMT_CMD_UPDATE_FW = 80,
COMM_MGMT_CMD_ACTIVE_FW,
COMM_MGMT_CMD_HOT_ACTIVE_FW,
COMM_MGMT_CMD_HOT_ACTIVE_DONE_NOTICE,
COMM_MGMT_CMD_SWITCH_CFG,
COMM_MGMT_CMD_CHECK_FLASH,
COMM_MGMT_CMD_CHECK_FLASH_RW,
COMM_MGMT_CMD_RESOURCE_CFG,
COMM_MGMT_CMD_UPDATE_BIOS, /* TODO: merge to COMM_MGMT_CMD_UPDATE_FW */
COMM_MGMT_CMD_MPU_GIT_CODE,
/* Chip reset related */
COMM_MGMT_CMD_FAULT_REPORT = 100,
COMM_MGMT_CMD_WATCHDOG_INFO,
COMM_MGMT_CMD_MGMT_RESET,
COMM_MGMT_CMD_FFM_SET, /* TODO: check if needed */
/* Chip info/log related */
COMM_MGMT_CMD_GET_LOG = 120,
COMM_MGMT_CMD_TEMP_OP,
COMM_MGMT_CMD_EN_AUTO_RST_CHIP,
COMM_MGMT_CMD_CFG_REG,
COMM_MGMT_CMD_GET_CHIP_ID,
COMM_MGMT_CMD_SYSINFO_DFX,
COMM_MGMT_CMD_PCIE_DFX_NTC,
COMM_MGMT_CMD_DICT_LOG_STATUS, /* LOG STATUS 127 */
COMM_MGMT_CMD_MSIX_INFO,
COMM_MGMT_CMD_CHANNEL_DETECT,
COMM_MGMT_CMD_DICT_COUNTER_STATUS,
/* Switch work mode related */
COMM_MGMT_CMD_CHECK_IF_SWITCH_WORKMODE = 140,
COMM_MGMT_CMD_SWITCH_WORKMODE,
/* MPU related */
COMM_MGMT_CMD_MIGRATE_DFX_HPA = 150,
COMM_MGMT_CMD_BDF_INFO,
COMM_MGMT_CMD_NCSI_CFG_INFO_GET_PROC,
/* rsvd0 section */
COMM_MGMT_CMD_SECTION_RSVD_0 = 160,
/* rsvd1 section */
COMM_MGMT_CMD_SECTION_RSVD_1 = 170,
/* rsvd2 section */
COMM_MGMT_CMD_SECTION_RSVD_2 = 180,
/* rsvd3 section */
COMM_MGMT_CMD_SECTION_RSVD_3 = 190,
/* TODO: move to DFT mode */
COMM_MGMT_CMD_GET_DIE_ID = 200,
COMM_MGMT_CMD_GET_EFUSE_TEST,
COMM_MGMT_CMD_EFUSE_INFO_CFG,
COMM_MGMT_CMD_GPIO_CTL,
COMM_MGMT_CMD_HI30_SERLOOP_START, /* TODO: DFT or hilink */
COMM_MGMT_CMD_HI30_SERLOOP_STOP, /* TODO: DFT or hilink */
COMM_MGMT_CMD_HI30_MBIST_SET_FLAG, /* TODO: DFT or hilink */
COMM_MGMT_CMD_HI30_MBIST_GET_RESULT, /* TODO: DFT or hilink */
COMM_MGMT_CMD_ECC_TEST,
COMM_MGMT_CMD_FUNC_BIST_TEST, /* 209 */
COMM_MGMT_CMD_VPD_SET = 210,
COMM_MGMT_CMD_VPD_GET,
COMM_MGMT_CMD_ERASE_FLASH,
COMM_MGMT_CMD_QUERY_FW_INFO,
COMM_MGMT_CMD_GET_CFG_INFO,
COMM_MGMT_CMD_GET_UART_LOG,
COMM_MGMT_CMD_SET_UART_CMD,
COMM_MGMT_CMD_SPI_TEST,
/* TODO: ALL reg read/write merge to COMM_MGMT_CMD_CFG_REG */
COMM_MGMT_CMD_UP_REG_GET,
COMM_MGMT_CMD_UP_REG_SET, /* 219 */
COMM_MGMT_CMD_REG_READ = 220,
COMM_MGMT_CMD_REG_WRITE,
COMM_MGMT_CMD_MAG_REG_WRITE,
COMM_MGMT_CMD_ANLT_REG_WRITE,
COMM_MGMT_CMD_HEART_EVENT, /* TODO: delete */
COMM_MGMT_CMD_NCSI_OEM_GET_DRV_INFO, /* TODO: delete */
COMM_MGMT_CMD_LASTWORD_GET,
COMM_MGMT_CMD_READ_BIN_DATA, /* TODO: delete */
/* COMM_MGMT_CMD_WWPN_GET, TODO: move to FC? */
/* COMM_MGMT_CMD_WWPN_SET, TODO: move to FC? */ /* 229 */
/* TODO: check if needed */
COMM_MGMT_CMD_SET_VIRTIO_DEV = 230,
COMM_MGMT_CMD_SET_MAC,
/* MPU patch cmd */
COMM_MGMT_CMD_LOAD_PATCH,
COMM_MGMT_CMD_REMOVE_PATCH,
COMM_MGMT_CMD_PATCH_ACTIVE,
COMM_MGMT_CMD_PATCH_DEACTIVE,
COMM_MGMT_CMD_PATCH_SRAM_OPTIMIZE,
/* container host process */
COMM_MGMT_CMD_CONTAINER_HOST_PROC,
/* nsci counter */
COMM_MGMT_CMD_NCSI_COUNTER_PROC,
COMM_MGMT_CMD_CHANNEL_STATUS_CHECK, /* 239 */
/* hot patch rsvd cmd */
COMM_MGMT_CMD_RSVD_0 = 240,
COMM_MGMT_CMD_RSVD_1,
COMM_MGMT_CMD_RSVD_2,
COMM_MGMT_CMD_RSVD_3,
COMM_MGMT_CMD_RSVD_4,
/* Invalid field, kept only for compilation; to be removed when versions are consolidated */
COMM_MGMT_CMD_SEND_API_ACK_BY_UP,
/* Note: when adding a cmd, do not change the value of an existing command;
 * add it in one of the rsvd sections above. In principle, the cmd tables of
 * all branches must be identical.
 */
COMM_MGMT_CMD_MAX = 255,
};
/* CmdQ Common subtype */
enum comm_cmdq_cmd {
COMM_CMD_UCODE_ARM_BIT_SET = 2,
COMM_CMD_SEND_NPU_DFT_CMD,
};
#endif /* HINIC3_COMMON_CMD_H */
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
#ifndef HINIC3_COMMON_H
#define HINIC3_COMMON_H
#include <linux/types.h>
struct hinic3_dma_addr_align {
u32 real_size;
void *ori_vaddr;
dma_addr_t ori_paddr;
void *align_vaddr;
dma_addr_t align_paddr;
};
enum hinic3_wait_return {
WAIT_PROCESS_CPL = 0,
WAIT_PROCESS_WAITING = 1,
WAIT_PROCESS_ERR = 2,
};
struct hinic3_sge {
u32 hi_addr;
u32 lo_addr;
u32 len;
};
#ifdef static
#undef static
#define LLT_STATIC_DEF_SAVED
#endif
/**
 * hinic3_cpu_to_be32 - convert data to big endian 32 bit format
 * @data: the data to convert
 * @len: length of data to convert, must be a multiple of 4 bytes
 */
static inline void hinic3_cpu_to_be32(void *data, int len)
{
int i, chunk_sz = sizeof(u32);
int data_len = len;
u32 *mem = data;
if (!data)
return;
data_len = data_len / chunk_sz;
for (i = 0; i < data_len; i++) {
*mem = cpu_to_be32(*mem);
mem++;
}
}
/**
 * hinic3_be32_to_cpu - convert data from big endian 32 bit format
 * @data: the data to convert
 * @len: length of data to convert
 */
static inline void hinic3_be32_to_cpu(void *data, int len)
{
int i, chunk_sz = sizeof(u32);
int data_len = len;
u32 *mem = data;
if (!data)
return;
data_len = data_len / chunk_sz;
for (i = 0; i < data_len; i++) {
*mem = be32_to_cpu(*mem);
mem++;
}
}
/**
 * hinic3_set_sge - set dma area in scatter gather entry
 * @sge: scatter gather entry
 * @addr: dma address
 * @len: length of relevant data in the dma address
 */
static inline void hinic3_set_sge(struct hinic3_sge *sge, dma_addr_t addr,
int len)
{
sge->hi_addr = upper_32_bits(addr);
sge->lo_addr = lower_32_bits(addr);
sge->len = len;
}
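/*
 * Illustrative usage (a sketch, not driver code): filling an SGE from an
 * existing DMA mapping and byte-swapping it for the HW; 'buf_paddr' and
 * 'buf_len' are assumed to be provided by the caller.
 *
 *	struct hinic3_sge sge;
 *
 *	hinic3_set_sge(&sge, buf_paddr, buf_len);
 *	hinic3_cpu_to_be32(&sge, sizeof(sge));
 */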
#define hinic3_hw_be32(val) (val)
#define hinic3_hw_cpu32(val) (val)
#define hinic3_hw_cpu16(val) (val)
static inline void hinic3_hw_be32_len(void *data, int len)
{
}
static inline void hinic3_hw_cpu32_len(void *data, int len)
{
}
int hinic3_dma_zalloc_coherent_align(void *dev_hdl, u64 size, u64 align,
unsigned int flag,
struct hinic3_dma_addr_align *mem_align);
void hinic3_dma_free_coherent_align(void *dev_hdl,
struct hinic3_dma_addr_align *mem_align);
typedef enum hinic3_wait_return (*wait_cpl_handler)(void *priv_data);
int hinic3_wait_for_timeout(void *priv_data, wait_cpl_handler handler,
u32 wait_total_ms, u32 wait_once_us);
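/*
 * Illustrative sketch (not driver code): a polling handler of the
 * wait_cpl_handler type as it might be passed to hinic3_wait_for_timeout();
 * the context structure here is hypothetical.
 */
struct example_wait_ctx {
	u32 *done_flag;
};

static inline enum hinic3_wait_return example_wait_handler(void *priv_data)
{
	struct example_wait_ctx *ctx = priv_data;

	/* Report completion once the flag becomes non-zero */
	return *ctx->done_flag ? WAIT_PROCESS_CPL : WAIT_PROCESS_WAITING;
}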
#endif
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/module.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/interrupt.h>
#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include "hinic3_crm.h"
#include "hinic3_lld.h"
#include "hinic3_nic_cfg.h"
#include "hinic3_srv_nic.h"
#include "hinic3_nic_dev.h"
#include "hinic3_dcb.h"
#define MAX_BW_PERCENT 100
u8 hinic3_get_dev_user_cos_num(struct hinic3_nic_dev *nic_dev)
{
if (nic_dev->hw_dcb_cfg.trust == 0)
return nic_dev->hw_dcb_cfg.pcp_user_cos_num;
if (nic_dev->hw_dcb_cfg.trust == 1)
return nic_dev->hw_dcb_cfg.dscp_user_cos_num;
return 0;
}
u8 hinic3_get_dev_valid_cos_map(struct hinic3_nic_dev *nic_dev)
{
if (nic_dev->hw_dcb_cfg.trust == 0)
return nic_dev->hw_dcb_cfg.pcp_valid_cos_map;
if (nic_dev->hw_dcb_cfg.trust == 1)
return nic_dev->hw_dcb_cfg.dscp_valid_cos_map;
return 0;
}
void hinic3_update_qp_cos_cfg(struct hinic3_nic_dev *nic_dev, u8 num_cos)
{
struct hinic3_dcb_config *dcb_cfg = &nic_dev->hw_dcb_cfg;
u8 i, remainder, num_sq_per_cos, cur_cos_num = 0;
u8 valid_cos_map = hinic3_get_dev_valid_cos_map(nic_dev);
if (num_cos == 0)
return;
num_sq_per_cos = (u8)(nic_dev->q_params.num_qps / num_cos);
if (num_sq_per_cos == 0)
return;
remainder = nic_dev->q_params.num_qps % num_sq_per_cos;
memset(dcb_cfg->cos_qp_offset, 0, sizeof(dcb_cfg->cos_qp_offset));
memset(dcb_cfg->cos_qp_num, 0, sizeof(dcb_cfg->cos_qp_num));
for (i = 0; i < PCP_MAX_UP; i++) {
if (BIT(i) & valid_cos_map) {
u8 cos_qp_num = num_sq_per_cos;
u8 cos_qp_offset = (u8)(cur_cos_num * num_sq_per_cos);
if (cur_cos_num < remainder) {
cos_qp_num++;
cos_qp_offset += cur_cos_num;
} else {
cos_qp_offset += remainder;
}
cur_cos_num++;
valid_cos_map -= (u8)BIT(i);
dcb_cfg->cos_qp_offset[i] = cos_qp_offset;
dcb_cfg->cos_qp_num[i] = cos_qp_num;
hinic3_info(nic_dev, drv, "cos %u, cos_qp_offset=%u cos_qp_num=%u\n",
i, cos_qp_offset, cos_qp_num);
}
}
memcpy(nic_dev->wanted_dcb_cfg.cos_qp_offset, dcb_cfg->cos_qp_offset,
sizeof(dcb_cfg->cos_qp_offset));
memcpy(nic_dev->wanted_dcb_cfg.cos_qp_num, dcb_cfg->cos_qp_num,
sizeof(dcb_cfg->cos_qp_num));
}
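/*
 * Worked example for hinic3_update_qp_cos_cfg() above (illustrative): with
 * num_qps = 10 and num_cos = 4, num_sq_per_cos = 10 / 4 = 2 and
 * remainder = 10 % 2 = 0, so the four valid CoS values are assigned queue
 * ranges [0,1], [2,3], [4,5] and [6,7]; the remaining queues keep the
 * default CoS set by hinic3_update_tx_db_cos().
 */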
void hinic3_update_tx_db_cos(struct hinic3_nic_dev *nic_dev, u8 dcb_en)
{
u8 i;
u16 start_qid, q_num;
hinic3_set_txq_cos(nic_dev, 0, nic_dev->q_params.num_qps,
nic_dev->hw_dcb_cfg.default_cos);
if (!dcb_en)
return;
for (i = 0; i < NIC_DCB_COS_MAX; i++) {
q_num = (u16)nic_dev->hw_dcb_cfg.cos_qp_num[i];
if (q_num) {
start_qid = (u16)nic_dev->hw_dcb_cfg.cos_qp_offset[i];
hinic3_set_txq_cos(nic_dev, start_qid, q_num, i);
hinic3_info(nic_dev, drv, "update tx db cos, start_qid %u, q_num=%u cos=%u\n",
start_qid, q_num, i);
}
}
}
static int hinic3_set_tx_cos_state(struct hinic3_nic_dev *nic_dev, u8 dcb_en)
{
struct hinic3_dcb_config *dcb_cfg = &nic_dev->hw_dcb_cfg;
struct hinic3_dcb_state dcb_state = {0};
u8 i;
int err;
if (HINIC3_FUNC_IS_VF(nic_dev->hwdev)) {
/* VF does not support DCB, use the default cos */
dcb_cfg->default_cos = (u8)fls(nic_dev->func_dft_cos_bitmap) - 1;
return 0;
}
dcb_state.dcb_on = dcb_en;
dcb_state.default_cos = dcb_cfg->default_cos;
dcb_state.trust = dcb_cfg->trust;
if (dcb_en) {
for (i = 0; i < NIC_DCB_COS_MAX; i++)
dcb_state.pcp2cos[i] = dcb_cfg->pcp2cos[i];
for (i = 0; i < NIC_DCB_IP_PRI_MAX; i++)
dcb_state.dscp2cos[i] = dcb_cfg->dscp2cos[i];
} else {
memset(dcb_state.pcp2cos, dcb_cfg->default_cos, sizeof(dcb_state.pcp2cos));
memset(dcb_state.dscp2cos, dcb_cfg->default_cos, sizeof(dcb_state.dscp2cos));
}
err = hinic3_set_dcb_state(nic_dev->hwdev, &dcb_state);
if (err)
hinic3_err(nic_dev, drv, "Failed to set dcb state\n");
return err;
}
static int hinic3_configure_dcb_hw(struct hinic3_nic_dev *nic_dev, u8 dcb_en)
{
int err;
u8 user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
err = hinic3_sync_dcb_state(nic_dev->hwdev, 1, dcb_en);
if (err) {
hinic3_err(nic_dev, drv, "Set dcb state failed\n");
return err;
}
hinic3_update_qp_cos_cfg(nic_dev, user_cos_num);
hinic3_update_tx_db_cos(nic_dev, dcb_en);
err = hinic3_set_tx_cos_state(nic_dev, dcb_en);
if (err) {
hinic3_err(nic_dev, drv, "Set tx cos state failed\n");
goto set_tx_cos_fail;
}
err = hinic3_rx_configure(nic_dev->netdev, dcb_en);
if (err) {
hinic3_err(nic_dev, drv, "rx configure failed\n");
goto rx_configure_fail;
}
if (dcb_en)
set_bit(HINIC3_DCB_ENABLE, &nic_dev->flags);
else
clear_bit(HINIC3_DCB_ENABLE, &nic_dev->flags);
return 0;
rx_configure_fail:
hinic3_set_tx_cos_state(nic_dev, dcb_en ? 0 : 1);
set_tx_cos_fail:
hinic3_update_tx_db_cos(nic_dev, dcb_en ? 0 : 1);
hinic3_sync_dcb_state(nic_dev->hwdev, 1, dcb_en ? 0 : 1);
return err;
}
int hinic3_setup_cos(struct net_device *netdev, u8 cos, u8 netif_run)
{
struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
int err;
if (cos && test_bit(HINIC3_SAME_RXTX, &nic_dev->flags)) {
nicif_err(nic_dev, drv, netdev, "Failed to enable DCB while Symmetric RSS is enabled\n");
return -EOPNOTSUPP;
}
if (cos > nic_dev->cos_config_num_max) {
nicif_err(nic_dev, drv, netdev, "Invalid num_tc: %u, max cos: %u\n",
cos, nic_dev->cos_config_num_max);
return -EINVAL;
}
err = hinic3_configure_dcb_hw(nic_dev, cos ? 1 : 0);
if (err)
return err;
return 0;
}
static u8 get_cos_num(u8 hw_valid_cos_bitmap)
{
u8 support_cos = 0;
u8 i;
for (i = 0; i < NIC_DCB_COS_MAX; i++)
if (hw_valid_cos_bitmap & BIT(i))
support_cos++;
return support_cos;
}
static void hinic3_sync_dcb_cfg(struct hinic3_nic_dev *nic_dev,
const struct hinic3_dcb_config *dcb_cfg)
{
struct hinic3_dcb_config *hw_cfg = &nic_dev->hw_dcb_cfg;
memcpy(hw_cfg, dcb_cfg, sizeof(struct hinic3_dcb_config));
}
static int init_default_dcb_cfg(struct hinic3_nic_dev *nic_dev,
struct hinic3_dcb_config *dcb_cfg)
{
u8 i, hw_dft_cos_map, port_cos_bitmap, dscp_ind;
int err;
err = hinic3_cos_valid_bitmap(nic_dev->hwdev, &hw_dft_cos_map, &port_cos_bitmap);
if (err) {
hinic3_err(nic_dev, drv, "None cos supported\n");
return -EFAULT;
}
nic_dev->func_dft_cos_bitmap = hw_dft_cos_map;
nic_dev->port_dft_cos_bitmap = port_cos_bitmap;
nic_dev->cos_config_num_max = get_cos_num(hw_dft_cos_map);
dcb_cfg->trust = DCB_PCP;
dcb_cfg->pcp_user_cos_num = nic_dev->cos_config_num_max;
dcb_cfg->dscp_user_cos_num = nic_dev->cos_config_num_max;
dcb_cfg->default_cos = (u8)fls(nic_dev->func_dft_cos_bitmap) - 1;
dcb_cfg->pcp_valid_cos_map = hw_dft_cos_map;
dcb_cfg->dscp_valid_cos_map = hw_dft_cos_map;
for (i = 0; i < NIC_DCB_COS_MAX; i++) {
dcb_cfg->pcp2cos[i] = hw_dft_cos_map & BIT(i) ? i : dcb_cfg->default_cos;
for (dscp_ind = 0; dscp_ind < NIC_DCB_COS_MAX; dscp_ind++)
dcb_cfg->dscp2cos[i * NIC_DCB_DSCP_NUM + dscp_ind] = dcb_cfg->pcp2cos[i];
}
return 0;
}
void hinic3_dcb_reset_hw_config(struct hinic3_nic_dev *nic_dev)
{
struct hinic3_dcb_config dft_cfg = {0};
init_default_dcb_cfg(nic_dev, &dft_cfg);
hinic3_sync_dcb_cfg(nic_dev, &dft_cfg);
hinic3_info(nic_dev, drv, "Reset DCB configuration done\n");
}
int hinic3_configure_dcb(struct net_device *netdev)
{
struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
int err;
err = hinic3_sync_dcb_state(nic_dev->hwdev, 1,
test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) ? 1 : 0);
if (err) {
hinic3_err(nic_dev, drv, "Set dcb state failed\n");
return err;
}
if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags))
hinic3_sync_dcb_cfg(nic_dev, &nic_dev->wanted_dcb_cfg);
else
hinic3_dcb_reset_hw_config(nic_dev);
return 0;
}
int hinic3_dcb_init(struct hinic3_nic_dev *nic_dev)
{
struct hinic3_dcb_config *dcb_cfg = &nic_dev->hw_dcb_cfg;
int err;
u8 dcb_en = test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags) ? 1 : 0;
if (HINIC3_FUNC_IS_VF(nic_dev->hwdev))
return hinic3_set_tx_cos_state(nic_dev, dcb_en);
err = init_default_dcb_cfg(nic_dev, dcb_cfg);
if (err) {
hinic3_err(nic_dev, drv, "Initialize dcb configuration failed\n");
return err;
}
memcpy(&nic_dev->wanted_dcb_cfg, &nic_dev->hw_dcb_cfg, sizeof(struct hinic3_dcb_config));
hinic3_info(nic_dev, drv, "Support num cos %u, default cos %u\n",
nic_dev->cos_config_num_max, dcb_cfg->default_cos);
err = hinic3_set_tx_cos_state(nic_dev, dcb_en);
if (err) {
hinic3_err(nic_dev, drv, "Set tx cos state failed\n");
return err;
}
sema_init(&nic_dev->dcb_sem, 1);
return 0;
}
static int change_qos_cfg(struct hinic3_nic_dev *nic_dev, const struct hinic3_dcb_config *dcb_cfg)
{
struct net_device *netdev = nic_dev->netdev;
int err = 0;
u8 user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
if (test_and_set_bit(HINIC3_DCB_UP_COS_SETTING, &nic_dev->dcb_flags)) {
nicif_warn(nic_dev, drv, netdev,
"Cos_up map setting in inprocess, please try again later\n");
return -EFAULT;
}
hinic3_sync_dcb_cfg(nic_dev, dcb_cfg);
hinic3_update_qp_cos_cfg(nic_dev, user_cos_num);
clear_bit(HINIC3_DCB_UP_COS_SETTING, &nic_dev->dcb_flags);
return err;
}
int hinic3_dcbcfg_set_up_bitmap(struct hinic3_nic_dev *nic_dev)
{
int err, rollback_err;
u8 netif_run = 0;
struct hinic3_dcb_config old_dcb_cfg;
u8 user_cos_num = hinic3_get_dev_user_cos_num(nic_dev);
memcpy(&old_dcb_cfg, &nic_dev->hw_dcb_cfg, sizeof(struct hinic3_dcb_config));
if (!memcmp(&nic_dev->wanted_dcb_cfg, &old_dcb_cfg, sizeof(struct hinic3_dcb_config))) {
nicif_info(nic_dev, drv, nic_dev->netdev,
"Same valid up bitmap, don't need to change anything\n");
return 0;
}
rtnl_lock();
if (netif_running(nic_dev->netdev)) {
netif_run = 1;
hinic3_vport_down(nic_dev);
}
err = change_qos_cfg(nic_dev, &nic_dev->wanted_dcb_cfg);
if (err) {
nicif_err(nic_dev, drv, nic_dev->netdev, "Set cos_up map to hw failed\n");
goto change_qos_cfg_fail;
}
if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags)) {
err = hinic3_setup_cos(nic_dev->netdev, user_cos_num, netif_run);
if (err)
goto set_err;
}
if (netif_run) {
err = hinic3_vport_up(nic_dev);
if (err)
goto vport_up_fail;
}
rtnl_unlock();
return 0;
vport_up_fail:
if (test_bit(HINIC3_DCB_ENABLE, &nic_dev->flags))
hinic3_setup_cos(nic_dev->netdev, user_cos_num ? 0 : user_cos_num, netif_run);
set_err:
rollback_err = change_qos_cfg(nic_dev, &old_dcb_cfg);
if (rollback_err)
nicif_err(nic_dev, drv, nic_dev->netdev,
"Failed to rollback qos configure\n");
change_qos_cfg_fail:
if (netif_run)
hinic3_vport_up(nic_dev);
rtnl_unlock();
return err;
}
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
#ifndef HINIC3_DCB_H
#define HINIC3_DCB_H
#include "ossl_knl.h"
enum HINIC3_DCB_FLAGS {
HINIC3_DCB_UP_COS_SETTING,
HINIC3_DCB_TRAFFIC_STOPPED,
};
struct hinic3_cos_cfg {
u8 up;
u8 bw_pct;
u8 tc_id;
u8 prio_sp; /* 0 - DWRR, 1 - SP */
};
struct hinic3_tc_cfg {
u8 bw_pct;
u8 prio_sp; /* 0 - DWRR, 1 - SP */
u16 rsvd;
};
enum HINIC3_DCB_TRUST {
DCB_PCP,
DCB_DSCP,
};
#define PCP_MAX_UP 8
#define DSCP_MAC_UP 64
#define DBG_DFLT_DSCP_VAL 0xFF
struct hinic3_dcb_config {
u8 trust; /* pcp, dscp */
u8 default_cos;
u8 pcp_user_cos_num;
u8 pcp_valid_cos_map;
u8 dscp_user_cos_num;
u8 dscp_valid_cos_map;
u8 pcp2cos[PCP_MAX_UP];
u8 dscp2cos[DSCP_MAC_UP];
u8 cos_qp_offset[NIC_DCB_COS_MAX];
u8 cos_qp_num[NIC_DCB_COS_MAX];
};
u8 hinic3_get_dev_user_cos_num(struct hinic3_nic_dev *nic_dev);
u8 hinic3_get_dev_valid_cos_map(struct hinic3_nic_dev *nic_dev);
int hinic3_dcb_init(struct hinic3_nic_dev *nic_dev);
void hinic3_dcb_reset_hw_config(struct hinic3_nic_dev *nic_dev);
int hinic3_configure_dcb(struct net_device *netdev);
int hinic3_setup_cos(struct net_device *netdev, u8 cos, u8 netif_run);
void hinic3_dcbcfg_set_pfc_state(struct hinic3_nic_dev *nic_dev, u8 pfc_state);
u8 hinic3_dcbcfg_get_pfc_state(struct hinic3_nic_dev *nic_dev);
void hinic3_dcbcfg_set_pfc_pri_en(struct hinic3_nic_dev *nic_dev,
u8 pfc_en_bitmap);
u8 hinic3_dcbcfg_get_pfc_pri_en(struct hinic3_nic_dev *nic_dev);
int hinic3_dcbcfg_set_ets_up_tc_map(struct hinic3_nic_dev *nic_dev,
const u8 *up_tc_map);
void hinic3_dcbcfg_get_ets_up_tc_map(struct hinic3_nic_dev *nic_dev,
u8 *up_tc_map);
int hinic3_dcbcfg_set_ets_tc_bw(struct hinic3_nic_dev *nic_dev,
const u8 *tc_bw);
void hinic3_dcbcfg_get_ets_tc_bw(struct hinic3_nic_dev *nic_dev, u8 *tc_bw);
void hinic3_dcbcfg_set_ets_tc_prio_type(struct hinic3_nic_dev *nic_dev,
u8 tc_prio_bitmap);
void hinic3_dcbcfg_get_ets_tc_prio_type(struct hinic3_nic_dev *nic_dev,
u8 *tc_prio_bitmap);
int hinic3_dcbcfg_set_up_bitmap(struct hinic3_nic_dev *nic_dev);
void hinic3_update_tx_db_cos(struct hinic3_nic_dev *nic_dev, u8 dcb_en);
void hinic3_update_qp_cos_cfg(struct hinic3_nic_dev *nic_dev, u8 num_cos);
void hinic3_vport_down(struct hinic3_nic_dev *nic_dev);
int hinic3_vport_up(struct hinic3_nic_dev *nic_dev);
#endif
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/debugfs.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include "ossl_knl.h"
#include "hinic3_hw.h"
#include "hinic3_crm.h"
#include "hinic3_nic_dev.h"
#include "hinic3_srv_nic.h"
static unsigned char set_filter_state = 1;
module_param(set_filter_state, byte, 0444);
MODULE_PARM_DESC(set_filter_state, "Set mac filter config state: 0 - disable, 1 - enable (default=1)");
static int hinic3_uc_sync(struct net_device *netdev, u8 *addr)
{
struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
return hinic3_set_mac(nic_dev->hwdev, addr, 0,
hinic3_global_func_id(nic_dev->hwdev),
HINIC3_CHANNEL_NIC);
}
static int hinic3_uc_unsync(struct net_device *netdev, u8 *addr)
{
struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
/* The addr is in use */
if (ether_addr_equal(addr, netdev->dev_addr))
return 0;
return hinic3_del_mac(nic_dev->hwdev, addr, 0,
hinic3_global_func_id(nic_dev->hwdev),
HINIC3_CHANNEL_NIC);
}
void hinic3_clean_mac_list_filter(struct hinic3_nic_dev *nic_dev)
{
struct net_device *netdev = nic_dev->netdev;
struct hinic3_mac_filter *ftmp = NULL;
struct hinic3_mac_filter *f = NULL;
list_for_each_entry_safe(f, ftmp, &nic_dev->uc_filter_list, list) {
if (f->state == HINIC3_MAC_HW_SYNCED)
hinic3_uc_unsync(netdev, f->addr);
list_del(&f->list);
kfree(f);
}
list_for_each_entry_safe(f, ftmp, &nic_dev->mc_filter_list, list) {
if (f->state == HINIC3_MAC_HW_SYNCED)
hinic3_uc_unsync(netdev, f->addr);
list_del(&f->list);
kfree(f);
}
}
static struct hinic3_mac_filter *hinic3_find_mac(const struct list_head *filter_list,
u8 *addr)
{
struct hinic3_mac_filter *f = NULL;
list_for_each_entry(f, filter_list, list) {
if (ether_addr_equal(addr, f->addr))
return f;
}
return NULL;
}
static struct hinic3_mac_filter *hinic3_add_filter(struct hinic3_nic_dev *nic_dev,
struct list_head *mac_filter_list,
u8 *addr)
{
struct hinic3_mac_filter *f;
f = kzalloc(sizeof(*f), GFP_ATOMIC);
if (!f)
goto out;
ether_addr_copy(f->addr, addr);
INIT_LIST_HEAD(&f->list);
list_add_tail(&f->list, mac_filter_list);
f->state = HINIC3_MAC_WAIT_HW_SYNC;
set_bit(HINIC3_MAC_FILTER_CHANGED, &nic_dev->flags);
out:
return f;
}
static void hinic3_del_filter(struct hinic3_nic_dev *nic_dev,
struct hinic3_mac_filter *f)
{
set_bit(HINIC3_MAC_FILTER_CHANGED, &nic_dev->flags);
if (f->state == HINIC3_MAC_WAIT_HW_SYNC) {
/* have not added to hw, delete it directly */
list_del(&f->list);
kfree(f);
return;
}
f->state = HINIC3_MAC_WAIT_HW_UNSYNC;
}
static struct hinic3_mac_filter *hinic3_mac_filter_entry_clone(const struct hinic3_mac_filter *src)
{
struct hinic3_mac_filter *f;
f = kzalloc(sizeof(*f), GFP_ATOMIC);
if (!f)
return NULL;
*f = *src;
INIT_LIST_HEAD(&f->list);
return f;
}
static void hinic3_undo_del_filter_entries(struct list_head *filter_list,
const struct list_head *from)
{
struct hinic3_mac_filter *ftmp = NULL;
struct hinic3_mac_filter *f = NULL;
list_for_each_entry_safe(f, ftmp, from, list) {
if (hinic3_find_mac(filter_list, f->addr))
continue;
if (f->state == HINIC3_MAC_HW_SYNCED)
f->state = HINIC3_MAC_WAIT_HW_UNSYNC;
list_move_tail(&f->list, filter_list);
}
}
static void hinic3_undo_add_filter_entries(struct list_head *filter_list,
const struct list_head *from)
{
struct hinic3_mac_filter *ftmp = NULL;
struct hinic3_mac_filter *tmp = NULL;
struct hinic3_mac_filter *f = NULL;
list_for_each_entry_safe(f, ftmp, from, list) {
tmp = hinic3_find_mac(filter_list, f->addr);
if (tmp && tmp->state == HINIC3_MAC_HW_SYNCED)
tmp->state = HINIC3_MAC_WAIT_HW_SYNC;
}
}
static void hinic3_cleanup_filter_list(const struct list_head *head)
{
struct hinic3_mac_filter *ftmp = NULL;
struct hinic3_mac_filter *f = NULL;
list_for_each_entry_safe(f, ftmp, head, list) {
list_del(&f->list);
kfree(f);
}
}
static int hinic3_mac_filter_sync_hw(struct hinic3_nic_dev *nic_dev,
struct list_head *del_list,
struct list_head *add_list)
{
struct net_device *netdev = nic_dev->netdev;
struct hinic3_mac_filter *ftmp = NULL;
struct hinic3_mac_filter *f = NULL;
int err = 0, add_count = 0;
if (!list_empty(del_list)) {
list_for_each_entry_safe(f, ftmp, del_list, list) {
err = hinic3_uc_unsync(netdev, f->addr);
if (err) { /* ignore errors when deleting mac */
nic_err(&nic_dev->pdev->dev, "Failed to delete mac\n");
}
list_del(&f->list);
kfree(f);
}
}
if (!list_empty(add_list)) {
list_for_each_entry_safe(f, ftmp, add_list, list) {
err = hinic3_uc_sync(netdev, f->addr);
if (err) {
nic_err(&nic_dev->pdev->dev, "Failed to add mac\n");
return err;
}
add_count++;
list_del(&f->list);
kfree(f);
}
}
return add_count;
}
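/*
 * Synchronize one filter list (unicast when 'uc' is true, otherwise
 * multicast) with HW: entries waiting for unsync are removed from HW and
 * entries waiting for sync are added. Returns the number of addresses added,
 * or a negative value when cloning entries or programming HW fails, in which
 * case the caller is expected to fall back to promiscuous/all-multicast mode.
 */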
static int hinic3_mac_filter_sync(struct hinic3_nic_dev *nic_dev,
struct list_head *mac_filter_list, bool uc)
{
struct net_device *netdev = nic_dev->netdev;
struct list_head tmp_del_list, tmp_add_list;
struct hinic3_mac_filter *fclone = NULL;
struct hinic3_mac_filter *ftmp = NULL;
struct hinic3_mac_filter *f = NULL;
int err = 0, add_count = 0;
INIT_LIST_HEAD(&tmp_del_list);
INIT_LIST_HEAD(&tmp_add_list);
list_for_each_entry_safe(f, ftmp, mac_filter_list, list) {
if (f->state != HINIC3_MAC_WAIT_HW_UNSYNC)
continue;
f->state = HINIC3_MAC_HW_UNSYNCED;
list_move_tail(&f->list, &tmp_del_list);
}
list_for_each_entry_safe(f, ftmp, mac_filter_list, list) {
if (f->state != HINIC3_MAC_WAIT_HW_SYNC)
continue;
fclone = hinic3_mac_filter_entry_clone(f);
if (!fclone) {
err = -ENOMEM;
break;
}
f->state = HINIC3_MAC_HW_SYNCED;
list_add_tail(&fclone->list, &tmp_add_list);
}
if (err) {
hinic3_undo_del_filter_entries(mac_filter_list, &tmp_del_list);
hinic3_undo_add_filter_entries(mac_filter_list, &tmp_add_list);
nicif_err(nic_dev, drv, netdev, "Failed to clone mac_filter_entry\n");
hinic3_cleanup_filter_list(&tmp_del_list);
hinic3_cleanup_filter_list(&tmp_add_list);
return -ENOMEM;
}
add_count = hinic3_mac_filter_sync_hw(nic_dev, &tmp_del_list,
&tmp_add_list);
if (list_empty(&tmp_add_list))
return add_count;
/* errors occurred while adding MACs to HW, so delete all MACs from HW */
hinic3_undo_add_filter_entries(mac_filter_list, &tmp_add_list);
/* VF does not support entering promiscuous mode,
 * so we cannot delete any other unicast MAC
 */
if (!HINIC3_FUNC_IS_VF(nic_dev->hwdev) || !uc) {
list_for_each_entry_safe(f, ftmp, mac_filter_list, list) {
if (f->state != HINIC3_MAC_HW_SYNCED)
continue;
fclone = hinic3_mac_filter_entry_clone(f);
if (!fclone)
break;
f->state = HINIC3_MAC_WAIT_HW_SYNC;
list_add_tail(&fclone->list, &tmp_del_list);
}
}
hinic3_cleanup_filter_list(&tmp_add_list);
hinic3_mac_filter_sync_hw(nic_dev, &tmp_del_list, &tmp_add_list);
/* need to enter promisc/allmulti mode */
return -ENOMEM;
}
static void hinic3_mac_filter_sync_all(struct hinic3_nic_dev *nic_dev)
{
struct net_device *netdev = nic_dev->netdev;
int add_count;
if (test_bit(HINIC3_MAC_FILTER_CHANGED, &nic_dev->flags)) {
clear_bit(HINIC3_MAC_FILTER_CHANGED, &nic_dev->flags);
add_count = hinic3_mac_filter_sync(nic_dev,
&nic_dev->uc_filter_list,
true);
if (add_count < 0 && HINIC3_SUPPORT_PROMISC(nic_dev->hwdev)) {
set_bit(HINIC3_PROMISC_FORCE_ON,
&nic_dev->rx_mod_state);
nicif_info(nic_dev, drv, netdev, "Promisc mode forced on\n");
} else if (add_count) {
clear_bit(HINIC3_PROMISC_FORCE_ON,
&nic_dev->rx_mod_state);
}
add_count = hinic3_mac_filter_sync(nic_dev,
&nic_dev->mc_filter_list,
false);
if (add_count < 0 && HINIC3_SUPPORT_ALLMULTI(nic_dev->hwdev)) {
set_bit(HINIC3_ALLMULTI_FORCE_ON,
&nic_dev->rx_mod_state);
nicif_info(nic_dev, drv, netdev, "All multicast mode forced on\n");
} else if (add_count) {
clear_bit(HINIC3_ALLMULTI_FORCE_ON,
&nic_dev->rx_mod_state);
}
}
}
#define HINIC3_DEFAULT_RX_MODE (NIC_RX_MODE_UC | NIC_RX_MODE_MC | \
NIC_RX_MODE_BC)
static void hinic3_update_mac_filter(struct hinic3_nic_dev *nic_dev,
const struct netdev_hw_addr_list *src_list,
struct list_head *filter_list)
{
struct hinic3_mac_filter *filter = NULL;
struct hinic3_mac_filter *ftmp = NULL;
struct hinic3_mac_filter *f = NULL;
struct netdev_hw_addr *ha = NULL;
/* add addr if not already in the filter list */
netif_addr_lock_bh(nic_dev->netdev);
netdev_hw_addr_list_for_each(ha, src_list) {
filter = hinic3_find_mac(filter_list, ha->addr);
if (!filter)
hinic3_add_filter(nic_dev, filter_list, ha->addr);
else if (filter->state == HINIC3_MAC_WAIT_HW_UNSYNC)
filter->state = HINIC3_MAC_HW_SYNCED;
}
netif_addr_unlock_bh(nic_dev->netdev);
/* delete addr if not in netdev list */
list_for_each_entry_safe(f, ftmp, filter_list, list) {
bool found = false;
netif_addr_lock_bh(nic_dev->netdev);
netdev_hw_addr_list_for_each(ha, src_list)
if (ether_addr_equal(ha->addr, f->addr)) {
found = true;
break;
}
netif_addr_unlock_bh(nic_dev->netdev);
if (found)
continue;
hinic3_del_filter(nic_dev, f);
}
}
#ifndef NETDEV_HW_ADDR_T_MULTICAST
static void hinic3_update_mc_filter(struct hinic3_nic_dev *nic_dev,
struct list_head *filter_list)
{
struct hinic3_mac_filter *filter = NULL;
struct hinic3_mac_filter *ftmp = NULL;
struct hinic3_mac_filter *f = NULL;
struct dev_mc_list *ha = NULL;
/* add addr if not already in the filter list */
netif_addr_lock_bh(nic_dev->netdev);
netdev_for_each_mc_addr(ha, nic_dev->netdev) {
filter = hinic3_find_mac(filter_list, ha->da_addr);
if (!filter)
hinic3_add_filter(nic_dev, filter_list, ha->da_addr);
else if (filter->state == HINIC3_MAC_WAIT_HW_UNSYNC)
filter->state = HINIC3_MAC_HW_SYNCED;
}
netif_addr_unlock_bh(nic_dev->netdev);
/* delete addr if not in netdev list */
list_for_each_entry_safe(f, ftmp, filter_list, list) {
bool found = false;
netif_addr_lock_bh(nic_dev->netdev);
netdev_for_each_mc_addr(ha, nic_dev->netdev)
if (ether_addr_equal(ha->da_addr, f->addr)) {
found = true;
break;
}
netif_addr_unlock_bh(nic_dev->netdev);
if (found)
continue;
hinic3_del_filter(nic_dev, f);
}
}
#endif
static void update_mac_filter(struct hinic3_nic_dev *nic_dev)
{
struct net_device *netdev = nic_dev->netdev;
if (test_and_clear_bit(HINIC3_UPDATE_MAC_FILTER, &nic_dev->flags)) {
hinic3_update_mac_filter(nic_dev, &netdev->uc,
&nic_dev->uc_filter_list);
/* The FPGA MC table has only 12 entries, so MC is disabled by default */
if (set_filter_state) {
#ifdef NETDEV_HW_ADDR_T_MULTICAST
hinic3_update_mac_filter(nic_dev, &netdev->mc,
&nic_dev->mc_filter_list);
#else
hinic3_update_mc_filter(nic_dev,
&nic_dev->mc_filter_list);
#endif
}
}
}
static void sync_rx_mode_to_hw(struct hinic3_nic_dev *nic_dev, int promisc_en,
int allmulti_en)
{
struct net_device *netdev = nic_dev->netdev;
u32 rx_mod = HINIC3_DEFAULT_RX_MODE;
int err;
rx_mod |= (promisc_en ? NIC_RX_MODE_PROMISC : 0);
rx_mod |= (allmulti_en ? NIC_RX_MODE_MC_ALL : 0);
if (promisc_en != test_bit(HINIC3_HW_PROMISC_ON,
&nic_dev->rx_mod_state))
nicif_info(nic_dev, drv, netdev,
"%s promisc mode\n",
promisc_en ? "Enter" : "Left");
if (allmulti_en !=
test_bit(HINIC3_HW_ALLMULTI_ON, &nic_dev->rx_mod_state))
nicif_info(nic_dev, drv, netdev,
"%s all_multi mode\n",
allmulti_en ? "Enter" : "Left");
err = hinic3_set_rx_mode(nic_dev->hwdev, rx_mod);
if (err) {
nicif_err(nic_dev, drv, netdev, "Failed to set rx_mode\n");
return;
}
promisc_en ? set_bit(HINIC3_HW_PROMISC_ON, &nic_dev->rx_mod_state) :
clear_bit(HINIC3_HW_PROMISC_ON, &nic_dev->rx_mod_state);
allmulti_en ? set_bit(HINIC3_HW_ALLMULTI_ON, &nic_dev->rx_mod_state) :
clear_bit(HINIC3_HW_ALLMULTI_ON, &nic_dev->rx_mod_state);
}
void hinic3_set_rx_mode_work(struct work_struct *work)
{
struct hinic3_nic_dev *nic_dev =
container_of(work, struct hinic3_nic_dev, rx_mode_work);
struct net_device *netdev = nic_dev->netdev;
int promisc_en = 0, allmulti_en = 0;
update_mac_filter(nic_dev);
hinic3_mac_filter_sync_all(nic_dev);
if (HINIC3_SUPPORT_PROMISC(nic_dev->hwdev))
promisc_en = !!(netdev->flags & IFF_PROMISC) ||
test_bit(HINIC3_PROMISC_FORCE_ON,
&nic_dev->rx_mod_state);
if (HINIC3_SUPPORT_ALLMULTI(nic_dev->hwdev))
allmulti_en = !!(netdev->flags & IFF_ALLMULTI) ||
test_bit(HINIC3_ALLMULTI_FORCE_ON,
&nic_dev->rx_mod_state);
if (promisc_en !=
test_bit(HINIC3_HW_PROMISC_ON, &nic_dev->rx_mod_state) ||
allmulti_en !=
test_bit(HINIC3_HW_ALLMULTI_ON, &nic_dev->rx_mod_state))
sync_rx_mode_to_hw(nic_dev, promisc_en, allmulti_en);
}
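/*
 * Illustrative sketch only: a typical .ndo_set_rx_mode callback would merely
 * record that the address lists changed and defer the work above to a
 * workqueue, roughly as follows (the callback name and the nic_dev->workq
 * field are assumptions, not part of this diff):
 *
 *	static void hinic3_nic_set_rx_mode(struct net_device *netdev)
 *	{
 *		struct hinic3_nic_dev *nic_dev = netdev_priv(netdev);
 *
 *		set_bit(HINIC3_UPDATE_MAC_FILTER, &nic_dev->flags);
 *		queue_work(nic_dev->workq, &nic_dev->rx_mode_work);
 *	}
 */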
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
#ifndef HINIC3_NIC_DBG_H
#define HINIC3_NIC_DBG_H
#include "hinic3_mt.h"
#include "hinic3_nic_io.h"
#include "hinic3_srv_nic.h"
int hinic3_dbg_get_sq_info(void *hwdev, u16 q_id, struct nic_sq_info *sq_info,
u32 msg_size);
int hinic3_dbg_get_rq_info(void *hwdev, u16 q_id, struct nic_rq_info *rq_info,
u32 msg_size);
int hinic3_dbg_get_wqe_info(void *hwdev, u16 q_id, u16 idx, u16 wqebb_cnt,
u8 *wqe, const u16 *wqe_size,
enum hinic3_queue_type q_type);
#endif
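/*
 * Illustrative usage sketch only: a debug or management path could query the
 * state of send queue 0 roughly as follows (the hwdev handle and error
 * handling are assumptions, not part of this diff):
 *
 *	struct nic_sq_info sq_info = {0};
 *	int err;
 *
 *	err = hinic3_dbg_get_sq_info(hwdev, 0, &sq_info, sizeof(sq_info));
 *	if (err)
 *		pr_err("Failed to get SQ info: %d\n", err);
 */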
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
#define pr_fmt(fmt) KBUILD_MODNAME ": [NIC]" fmt
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/device.h>
#include <linux/types.h>
#include <linux/errno.h>
#include "ossl_knl.h"
#include "hinic3_nic_dev.h"
#include "hinic3_profile.h"
#include "hinic3_nic_prof.h"
static bool is_match_nic_prof_default_adapter(void *device)
{
/* always match the default profile adapter in the standard scenario */
return true;
}
struct hinic3_prof_adapter nic_prof_adap_objs[] = {
/* Add prof adapter before default profile */
{
.type = PROF_ADAP_TYPE_DEFAULT,
.match = is_match_nic_prof_default_adapter,
.init = NULL,
.deinit = NULL,
},
};
void hinic3_init_nic_prof_adapter(struct hinic3_nic_dev *nic_dev)
{
u16 num_adap = ARRAY_SIZE(nic_prof_adap_objs);
nic_dev->prof_adap = hinic3_prof_init(nic_dev, nic_prof_adap_objs, num_adap,
(void *)&nic_dev->prof_attr);
if (nic_dev->prof_adap)
nic_info(&nic_dev->pdev->dev, "Find profile adapter type: %d\n",
nic_dev->prof_adap->type);
}
void hinic3_deinit_nic_prof_adapter(struct hinic3_nic_dev *nic_dev)
{
hinic3_prof_deinit(nic_dev->prof_adap, nic_dev->prof_attr);
}
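/*
 * Illustrative sketch only: a platform-specific profile would be registered
 * by adding an entry ahead of the default one in nic_prof_adap_objs[], e.g.
 * (the type value and callbacks below are hypothetical):
 *
 *	{
 *		.type = PROF_ADAP_TYPE_CUSTOM,
 *		.match = is_match_custom_adapter,
 *		.init = custom_prof_init,
 *		.deinit = custom_prof_deinit,
 *	},
 *
 * hinic3_prof_init() presumably selects the first adapter whose match()
 * returns true, which is why the always-matching default entry comes last.
 */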
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
#ifndef HINIC3_NIC_PROF_H
#define HINIC3_NIC_PROF_H
#include <linux/socket.h>
#include <linux/types.h>
#include "hinic3_nic_cfg.h"
struct hinic3_nic_prof_attr {
void *priv_data;
char netdev_name[IFNAMSIZ];
};
struct hinic3_nic_dev;
#ifdef static
#undef static
#define LLT_STATIC_DEF_SAVED
#endif
static inline char *hinic3_get_dft_netdev_name_fmt(struct hinic3_nic_dev *nic_dev)
{
if (nic_dev->prof_attr)
return nic_dev->prof_attr->netdev_name;
return NULL;
}
#ifdef CONFIG_MODULE_PROF
int hinic3_set_master_dev_state(struct hinic3_nic_dev *nic_dev, u32 flag);
u32 hinic3_get_link(struct net_device *dev);
int hinic3_config_port_mtu(struct hinic3_nic_dev *nic_dev, u32 mtu);
int hinic3_config_port_mac(struct hinic3_nic_dev *nic_dev, struct sockaddr *saddr);
#else
static inline int hinic3_set_master_dev_state(struct hinic3_nic_dev *nic_dev, u32 flag)
{
return 0;
}
static inline int hinic3_config_port_mtu(struct hinic3_nic_dev *nic_dev, u32 mtu)
{
return hinic3_set_port_mtu(nic_dev->hwdev, (u16)mtu);
}
static inline int hinic3_config_port_mac(struct hinic3_nic_dev *nic_dev, struct sockaddr *saddr)
{
return hinic3_update_mac(nic_dev->hwdev, nic_dev->netdev->dev_addr, saddr->sa_data, 0,
hinic3_global_func_id(nic_dev->hwdev));
}
#endif
void hinic3_init_nic_prof_adapter(struct hinic3_nic_dev *nic_dev);
void hinic3_deinit_nic_prof_adapter(struct hinic3_nic_dev *nic_dev);
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 Huawei Technologies Co., Ltd */
#ifndef HINIC3_PCI_ID_TBL_H
#define HINIC3_PCI_ID_TBL_H
#define PCI_VENDOR_ID_HUAWEI 0x19e5
#define HINIC3_DEV_ID_STANDARD 0x0222
#define HINIC3_DEV_ID_SDI_5_1_PF 0x0226
#define HINIC3_DEV_ID_SDI_5_0_PF 0x0225
#define HINIC3_DEV_ID_VF 0x375F
#define HINIC3_DEV_ID_VF_HV 0x379F
#define HINIC3_DEV_ID_SPU 0xAC00
#endif
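/*
 * Illustrative sketch only: the PF driver's PCI probe table would typically
 * reference these IDs roughly as follows (the table name and exact set of
 * entries are assumptions, not part of this diff):
 *
 *	static const struct pci_device_id hinic3_pci_table[] = {
 *		{PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_STANDARD), 0},
 *		{PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_SDI_5_0_PF), 0},
 *		{PCI_VDEVICE(HUAWEI, HINIC3_DEV_ID_SDI_5_1_PF), 0},
 *		{0, 0}
 *	};
 *	MODULE_DEVICE_TABLE(pci, hinic3_pci_table);
 */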