Unverified  Commit 7420251e  authored by openeuler-ci-bot, committed by Gitee

!608 Net: ethernet: Support 3snic 3s9xx network card

Merge Pull Request from: @steven-song3 
 
The driver supports the 3SNIC 3S9xx series of network cards: the 100GE
(40GE-compatible) 3S930 and the 25GE (10GE-compatible) 3S910/3S920.

Features:
1. Support single-root I/O virtualization (SR-IOV)
2. Support virtual machine multi queue (VMMQ)
3. Support receive side scaling (RSS)
4. Support physical function (PF) passthrough VMs
5. Support PF promiscuous mode, unicast/multicast MAC filtering, and
all-multicast mode
6. Support IPv4/IPv6 checksum offload, TCP Segmentation Offload (TSO), and
Large Receive Offload (LRO)
7. Support in-band one-click log collection
8. Support loopback tests
9. Support port location indicators
==================================
Test:
compile: pass
insmod/rmmod: pass
iperf: pass
 
Link: https://gitee.com/openeuler/kernel/pulls/608

Reviewed-by: Liu Chao <liuchao173@huawei.com> 
Reviewed-by: Jialin Zhang <zhangjialin11@huawei.com> 
Signed-off-by: Jialin Zhang <zhangjialin11@huawei.com> 
.. SPDX-License-Identifier: GPL-2.0
====================================================
Linux Kernel Driver for 3SNIC Intelligent NIC family
====================================================
Contents
========
- `Overview`_
- `Supported PCI vendor ID/device IDs`_
- `Supported features`_
- `Product specification`_
- `Support`_
Overview
========
SSSNIC is a network interface card that can meet the demands of a range
of application scenarios, such as data centers, cloud computing, and the
financial industry.
SSSNIC deployments are built mainly on servers and switches. The 3S910,
3S920 and 3S930 are standard PCIe cards for servers, providing extended
external business interfaces.
The driver supports a range of link-speed devices (100GE (40GE
compatible) and 25GE (10GE compatible)). A negotiated and extendable
feature set is also supported.
Supported PCI vendor ID/device IDs
==================================
1f3f:9020 - SSSNIC PF
Supported features
==================
1. Support single-root I/O virtualization (SR-IOV)
2. Support virtual machine multi queue (VMMQ)
3. Support receive side scaling (RSS)
4. Support physical function (PF) passthrough VMs
5. Support PF promiscuous mode, unicast/multicast MAC filtering, and
all-multicast mode
6. Support IPv4/IPv6 checksum offload, TCP Segmentation Offload (TSO), and
Large Receive Offload (LRO)
7. Support in-band one-click log collection
8. Support loopback tests
9. Support port location indicators
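SR-IOV (feature 1) is normally exercised through the kernel's standard sysfs interface rather than a driver-specific tool. The snippet below only constructs the sysfs paths; the PCI address is a hypothetical example, and the commented commands require real sssnic hardware.

```shell
# Hypothetical PCI address of the sssnic PF; substitute your own (see lspci -d 1f3f:9020).
PCI_ADDR="0000:3b:00.0"
SRIOV_PATH="/sys/bus/pci/devices/${PCI_ADDR}/sriov_numvfs"
echo "$SRIOV_PATH"
# On real hardware, with the PF bound to the sssnic driver:
#   cat /sys/bus/pci/devices/${PCI_ADDR}/sriov_totalvfs   # max supported VFs
#   echo 4 > "$SRIOV_PATH"                                # create 4 VFs
#   echo 0 > "$SRIOV_PATH"                                # destroy them
```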
Product specification
=====================
================ ===== ============================== =============================================
PCI ID (pci.ids) OEM   Product                        PCIe port
================ ===== ============================== =============================================
1F3F:9020        3SNIC 3S910 (2 x 25GE SFP28 ports)  PCIe Gen3 x8, compatible with Gen2/Gen1
1F3F:9020        3SNIC 3S920 (4 x 25GE SFP28 ports)  PCIe Gen4 x16, compatible with Gen3/Gen2/Gen1
1F3F:9020        3SNIC 3S930 (2 x 100GE QSFP28 ports) PCIe Gen4 x16, compatible with Gen3/Gen2/Gen1
================ ===== ============================== =============================================
Support
=======
If an issue is identified with the released source code on a supported kernel
with a supported adapter, report the specific information related to the issue
via https://www.3snic.com.
@@ -16684,6 +16684,12 @@ S: Maintained
F: Documentation/scsi/sssraid.rst
F: drivers/scsi/sssraid/
SSSNIC Ethernet Controller DRIVERS
M: Steven Song <steven.song@3snic.com>
S: Maintained
F: Documentation/networking/device_drivers/ethernet/3snic/sssnic/sssnic.rst
F: drivers/net/ethernet/3snic/sssnic
ST LSM6DSx IMU IIO DRIVER
M: Lorenzo Bianconi <lorenzo.bianconi83@gmail.com>
L: linux-iio@vger.kernel.org
......
@@ -2670,6 +2670,9 @@ CONFIG_VSOCKMON=m
CONFIG_ETHERNET=y
CONFIG_MDIO=m
# CONFIG_NET_VENDOR_3COM is not set
CONFIG_NET_VENDOR_3SNIC=y
CONFIG_SSSNIC=m
CONFIG_SSSNIC_HW=m
# CONFIG_NET_VENDOR_ADAPTEC is not set
# CONFIG_NET_VENDOR_AGERE is not set
CONFIG_NET_VENDOR_ALACRITECH=y
......
@@ -2641,6 +2641,9 @@ CONFIG_VSOCKMON=m
CONFIG_ETHERNET=y
CONFIG_MDIO=m
# CONFIG_NET_VENDOR_3COM is not set
CONFIG_NET_VENDOR_3SNIC=y
CONFIG_SSSNIC=m
CONFIG_SSSNIC_HW=m
# CONFIG_NET_VENDOR_ADAPTEC is not set
# CONFIG_NET_VENDOR_AGERE is not set
# CONFIG_NET_VENDOR_ALACRITECH is not set
......
# SPDX-License-Identifier: GPL-2.0
#
# 3SNIC network device configuration
#
config NET_VENDOR_3SNIC
	bool "3SNIC smart NIC devices"
	default y
	depends on PCI
	help
	  If you have a network (Ethernet) card belonging to this class, say Y.

	  Note that the answer to this question doesn't directly affect the
	  kernel: saying N will just cause the configurator to skip all
	  the questions about 3SNIC cards. If you say Y, you will be
	  asked for your specific card in the following questions.

if NET_VENDOR_3SNIC

source "drivers/net/ethernet/3snic/sssnic/Kconfig"

endif # NET_VENDOR_3SNIC
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for the 3SNIC network device drivers.
#
obj-$(CONFIG_SSSNIC) += sssnic/
# SPDX-License-Identifier: GPL-2.0
#
# 3SNIC network device configuration
#
config SSSNIC
	tristate "3SNIC Ethernet Controller SSSNIC Support"
	depends on PCI
	depends on ARM64 || X86_64
	select SSSNIC_HW
	default m
	help
	  This driver supports the 3SNIC Ethernet Controller SSSNIC device.
	  For more information about this product, see the smart NIC
	  product description at:

	  <http://www.3snic.com>

	  To compile this driver as a module, choose M here. The module
	  will be called sssnic.

config SSSNIC_HW
	tristate
	depends on PCI
	default n
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for the 3SNIC network device drivers.
#
obj-$(CONFIG_SSSNIC_HW) += hw/
obj-$(CONFIG_SSSNIC) += nic/
# SPDX-License-Identifier: GPL-2.0
# Copyright (c) 2023 3SNIC
#
ccflags-y += -I$(srctree)/drivers/net/ethernet/3snic/sssnic/hw
ccflags-y += -I$(srctree)/drivers/net/ethernet/3snic/sssnic/include
ccflags-y += -I$(srctree)/drivers/net/ethernet/3snic/sssnic/include/hw
ccflags-y += -I$(srctree)/drivers/net/ethernet/3snic/sssnic/include/kernel
ccflags-y += -I$(srctree)/drivers/net/ethernet/3snic/sssnic/hw/include
ccflags-y += -Werror
ccflags-y += -Wno-implicit-fallthrough
obj-$(CONFIG_SSSNIC_HW) += sssdk.o
sssdk-y := sss_hw_main.o \
	sss_pci_probe.o \
	sss_pci_remove.o \
	sss_pci_shutdown.o \
	sss_pci_error.o \
	sss_pci_sriov.o \
	sss_pci_global.o \
	sss_hwdev_api.o \
	sss_hwdev_cap.o \
	sss_hwdev_export.o \
	sss_hwdev_link.o \
	sss_hwdev_init.o \
	sss_hwdev_mgmt_info.o \
	sss_hwdev_mgmt_channel.o \
	sss_hwdev_io_flush.o \
	sss_hwif_ctrlq.o \
	sss_hwif_ctrlq_init.o \
	sss_hwif_ctrlq_export.o \
	sss_hwif_mbx.o \
	sss_hwif_mbx_init.o \
	sss_hwif_mbx_export.o \
	sss_hwif_adm.o \
	sss_hwif_adm_init.o \
	sss_hwif_init.o \
	sss_hwif_api.o \
	sss_hwif_export.o \
	sss_hwif_eq.o \
	sss_hwif_mgmt_init.o \
	sss_hwif_irq.o \
	sss_hwif_aeq.o \
	sss_common.o \
	sss_wq.o \
	sss_hwif_ceq.o \
	sss_adapter_mgmt.o
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_ADAPTER_H
#define SSS_ADAPTER_H
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/list.h>
#include <linux/atomic.h>
#include <linux/spinlock.h>
#include "sss_hw_common.h"
#include "sss_hw_uld_driver.h"
#include "sss_hw_svc_cap.h"
#include "sss_sriov_info.h"
#define SSS_MAX_FUNCTION_NUM 4096
struct sss_card_node {
	struct list_head node;
	struct list_head func_list;
	char chip_name[IFNAMSIZ];
	u8 bus_id;
	u8 resvd[7];
	atomic_t channel_timeout_cnt;
};

/* Private structure of the PCI device */
struct sss_pci_adapter {
	struct pci_dev *pcidev;
	void *hwdev;
	struct sss_hal_dev hal_dev;
	/* Address of the upper-layer driver object,
	 * such as nic_dev, toe_dev or fc_dev
	 */
	void *uld_dev[SSS_SERVICE_TYPE_MAX];
	/* Name of the upper-layer driver object */
	char uld_dev_name[SSS_SERVICE_TYPE_MAX][IFNAMSIZ];
	/* All function devices are managed in a linked list */
	struct list_head node;
	void __iomem *cfg_reg_bar;
	void __iomem *intr_reg_bar;
	void __iomem *mgmt_reg_bar;
	void __iomem *db_reg_bar;
	u64 db_dwqe_len;
	u64 db_base_paddr;
	struct sss_card_node *chip_node;
	int init_state;
	struct sss_sriov_info sriov_info;
	/* set when the uld driver is processing an event */
	unsigned long uld_run_state;
	unsigned long uld_attach_state;
	/* lock for attaching/detaching uld */
	struct mutex uld_attach_mutex;
	/* spinlock for uld_attach_state access */
	spinlock_t dettach_uld_lock;
};
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_ADM_INFO_H
#define SSS_ADM_INFO_H
#include <linux/types.h>
#include <linux/semaphore.h>
#include <linux/spinlock.h>
#include <linux/completion.h>
#include "sss_hw_common.h"
enum sss_adm_msg_type {
	/* write to mgmt cpu command with completion */
	SSS_ADM_MSG_WRITE_TO_MGMT_MODULE = 2,
	/* multi read command with completion notification */
	SSS_ADM_MSG_MULTI_READ = 3,
	/* write command without completion notification */
	SSS_ADM_MSG_POLL_WRITE = 4,
	/* read command without completion notification */
	SSS_ADM_MSG_POLL_READ = 5,
	/* asynchronous write to mgmt cpu command with completion */
	SSS_ADM_MSG_WRITE_ASYNC_TO_MGMT_MODULE = 6,
	SSS_ADM_MSG_MAX,
};

struct sss_adm_msg_state {
	u64 head;
	u32 desc_buf;
	u32 elem_hi;
	u32 elem_lo;
	u32 rsvd0;
	u64 rsvd1;
};

/* HW struct */
struct sss_adm_msg_elem {
	u64 control;
	u64 next_elem_paddr;
	u64 desc;
	/* HW struct */
	union {
		struct {
			u64 hw_msg_paddr;
		} write;
		struct {
			u64 hw_wb_reply_paddr;
			u64 hw_msg_paddr;
		} read;
	};
};

struct sss_adm_msg_reply_fmt {
	u64 head;
	u64 reply;
};

struct sss_adm_msg_elem_ctx {
	struct sss_adm_msg_elem *elem_vaddr;
	void *adm_msg_vaddr;
	struct sss_adm_msg_reply_fmt *reply_fmt;
	struct completion done;
	int state;
	u32 store_pi;
	void *hwdev;
};

struct sss_adm_msg {
	void *hwdev;
	enum sss_adm_msg_type msg_type;
	u32 elem_num;
	u16 elem_size;
	u16 reply_size;
	u32 pi;
	u32 ci;
	struct semaphore sem;
	dma_addr_t wb_state_paddr;
	dma_addr_t head_elem_paddr;
	struct sss_adm_msg_state *wb_state;
	struct sss_adm_msg_elem *head_node;
	struct sss_adm_msg_elem_ctx *elem_ctx;
	struct sss_adm_msg_elem *now_node;
	struct sss_dma_addr_align elem_addr;
	u8 *elem_vaddr_base;
	u8 *reply_vaddr_base;
	u8 *buf_vaddr_base;
	u64 elem_paddr_base;
	u64 reply_paddr_base;
	u64 buf_paddr_base;
	u64 elem_size_align;
	u64 reply_size_align;
	u64 buf_size_align;
};
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_AEQ_INFO_H
#define SSS_AEQ_INFO_H
#include <linux/types.h>
#include <linux/workqueue.h>
#include "sss_eq_info.h"
#include "sss_hw_aeq.h"
#define SSS_MAX_AEQ 4
typedef void (*sss_aeq_hw_event_handler_t)(void *pri_handle, u8 *data, u8 size);
typedef u8 (*sss_aeq_sw_event_handler_t)(void *pri_handle, u8 event, u8 *data);
struct sss_aeq_info {
	void *hwdev;
	sss_aeq_hw_event_handler_t hw_event_handler[SSS_AEQ_EVENT_MAX];
	void *hw_event_data[SSS_AEQ_EVENT_MAX];
	sss_aeq_sw_event_handler_t sw_event_handler[SSS_AEQ_SW_EVENT_MAX];
	void *sw_event_data[SSS_AEQ_SW_EVENT_MAX];
	unsigned long hw_event_handler_state[SSS_AEQ_EVENT_MAX];
	unsigned long sw_event_handler_state[SSS_AEQ_SW_EVENT_MAX];
	struct sss_eq aeq[SSS_MAX_AEQ];
	u16 num;
	u16 rsvd1;
	u32 rsvd2;
	struct workqueue_struct *workq;
};
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_BOARD_INFO_H
#define SSS_BOARD_INFO_H
enum sss_board_type_define {
	SSS_BOARD_TYPE_MPU_DEFAULT = 0,			/* Default config */
	SSS_BOARD_TYPE_TEST_EVB_4X25G = 1,		/* EVB Board */
	SSS_BOARD_TYPE_TEST_CEM_2X100G = 2,		/* 2X100G CEM Card */
	SSS_BOARD_TYPE_STRG_SMARTIO_4X32G_FC = 30,	/* 4X32G SmartIO FC Card */
	SSS_BOARD_TYPE_STRG_SMARTIO_4X25G_TIOE = 31,	/* 4X25GE SmartIO TIOE Card */
	SSS_BOARD_TYPE_STRG_SMARTIO_4X25G_ROCE = 32,	/* 4X25GE SmartIO ROCE Card */
	SSS_BOARD_TYPE_STRG_SMARTIO_4X25G_ROCE_AA = 33,	/* 4X25GE SmartIO ROCE_AA Card */
	SSS_BOARD_TYPE_STRG_SMARTIO_4X25G_SRIOV = 34,	/* 4X25GE SmartIO container Card */
	SSS_BOARD_TYPE_STRG_SMARTIO_4X25G_SRIOV_SW = 35, /* 4X25GE SmartIO container switch Card */
	SSS_BOARD_TYPE_STRG_2X100G_TIOE = 40,		/* 2X100G SmartIO TIOE Card */
	SSS_BOARD_TYPE_STRG_2X100G_ROCE = 41,		/* 2X100G SmartIO ROCE Card */
	SSS_BOARD_TYPE_STRG_2X100G_ROCE_AA = 42,	/* 2X100G SmartIO ROCE_AA Card */
	SSS_BOARD_TYPE_CAL_2X25G_NIC_75MPPS = 100,	/* 2X25G ETH Standard card 75MPPS */
	SSS_BOARD_TYPE_CAL_2X25G_NIC_40MPPS = 101,	/* 2X25G ETH Standard card 40MPPS */
	SSS_BOARD_TYPE_CAL_4X25G_NIC_120MPPS = 105,	/* 4X25G ETH Standard card 120MPPS */
	SSS_BOARD_TYPE_CAL_2X32G_FC_HBA = 110,		/* 2X32G FC HBA card */
	SSS_BOARD_TYPE_CAL_2X16G_FC_HBA = 111,		/* 2X16G FC HBA card */
	SSS_BOARD_TYPE_CAL_2X100G_NIC_120MPPS = 115,	/* 2X100G ETH Standard card 120MPPS */
	SSS_BOARD_TYPE_CLD_2X100G_SDI5_1 = 170,		/* 2X100G SDI 5.1 Card */
	SSS_BOARD_TYPE_CLD_2X25G_SDI5_0_LITE = 171,	/* 2X25G SDI 5.0 Lite Card */
	SSS_BOARD_TYPE_CLD_2X100G_SDI5_0 = 172,		/* 2X100G SDI 5.0 Card */
	SSS_BOARD_MAX_TYPE = 0xFF
};
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_CEQ_INFO_H
#define SSS_CEQ_INFO_H
#include <linux/types.h>
#include "sss_hw_ceq.h"
#include "sss_eq_info.h"
#define SSS_MAX_CEQ 32
typedef void (*sss_ceq_event_handler_t)(void *dev, u32 data);
struct sss_ceq_info {
	void *hwdev;
	sss_ceq_event_handler_t event_handler[SSS_CEQ_EVENT_MAX];
	void *event_handler_data[SSS_CEQ_EVENT_MAX];
	void *ceq_data[SSS_CEQ_EVENT_MAX];
	unsigned long event_handler_state[SSS_CEQ_EVENT_MAX];
	struct sss_eq ceq[SSS_MAX_CEQ];
	u16 num;
	u16 rsvd1;
	u32 rsvd2;
};
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_CSR_H
#define SSS_CSR_H
#define SSS_CSR_CFG_FLAG 0x40000000
#define SSS_MGMT_FLAG 0xC0000000
#define SSS_CSR_FLAG_MASK 0x3FFFFFFF
#define SSS_VF_CFG_REG_OFFSET 0x2000
#define SSS_HOST_CSR_BASE_ADDR (SSS_MGMT_FLAG + 0x6000)
#define SSS_CSR_GLOBAL_BASE_ADDR (SSS_MGMT_FLAG + 0x6400)
/* HW interface registers */
#define SSS_CSR_HW_ATTR0_ADDR (SSS_CSR_CFG_FLAG + 0x0)
#define SSS_CSR_HW_ATTR1_ADDR (SSS_CSR_CFG_FLAG + 0x4)
#define SSS_CSR_HW_ATTR2_ADDR (SSS_CSR_CFG_FLAG + 0x8)
#define SSS_CSR_HW_ATTR3_ADDR (SSS_CSR_CFG_FLAG + 0xC)
#define SSS_CSR_HW_ATTR4_ADDR (SSS_CSR_CFG_FLAG + 0x10)
#define SSS_CSR_HW_ATTR5_ADDR (SSS_CSR_CFG_FLAG + 0x14)
#define SSS_CSR_HW_ATTR6_ADDR (SSS_CSR_CFG_FLAG + 0x18)
#define SSS_HW_CSR_MBX_DATA_OFF 0x80
#define SSS_HW_CSR_MBX_CTRL_OFF (SSS_CSR_CFG_FLAG + 0x0100)
#define SSS_HW_CSR_MBX_INT_OFFSET_OFF (SSS_CSR_CFG_FLAG + 0x0104)
#define SSS_HW_CSR_MBX_RES_H_OFF (SSS_CSR_CFG_FLAG + 0x0108)
#define SSS_HW_CSR_MBX_RES_L_OFF (SSS_CSR_CFG_FLAG + 0x010C)
#define SSS_PPF_ELECT_OFF 0x0
#define SSS_MPF_ELECT_OFF 0x20
#define SSS_CSR_PPF_ELECT_ADDR \
(SSS_HOST_CSR_BASE_ADDR + SSS_PPF_ELECT_OFF)
#define SSS_CSR_GLOBAL_MPF_ELECT_ADDR \
(SSS_HOST_CSR_BASE_ADDR + SSS_MPF_ELECT_OFF)
#define SSS_CSR_HW_PPF_ELECT_BASE_ADDR (SSS_CSR_CFG_FLAG + 0x60)
#define SSS_CSR_HW_PPF_ELECT_PORT_STRIDE 0x4
#define SSS_CSR_FUNC_PPF_ELECT(host_id) \
(SSS_CSR_HW_PPF_ELECT_BASE_ADDR + \
(host_id) * SSS_CSR_HW_PPF_ELECT_PORT_STRIDE)
#define SSS_CSR_DMA_ATTR_TBL_ADDR (SSS_CSR_CFG_FLAG + 0x380)
#define SSS_CSR_DMA_ATTR_INDIR_ID_ADDR (SSS_CSR_CFG_FLAG + 0x390)
/* MSI-X registers */
#define SSS_CSR_MSIX_INDIR_ID_ADDR (SSS_CSR_CFG_FLAG + 0x310)
#define SSS_CSR_MSIX_CTRL_ADDR (SSS_CSR_CFG_FLAG + 0x300)
#define SSS_CSR_MSIX_CNT_ADDR (SSS_CSR_CFG_FLAG + 0x304)
#define SSS_CSR_FUNC_MSI_CLR_WR_ADDR (SSS_CSR_CFG_FLAG + 0x58)
#define SSS_MSI_CLR_INDIR_RESEND_TIMER_CLR_SHIFT 0
#define SSS_MSI_CLR_INDIR_INT_MSK_SET_SHIFT 1
#define SSS_MSI_CLR_INDIR_INT_MSK_CLR_SHIFT 2
#define SSS_MSI_CLR_INDIR_AUTO_MSK_SET_SHIFT 3
#define SSS_MSI_CLR_INDIR_AUTO_MSK_CLR_SHIFT 4
#define SSS_MSI_CLR_INDIR_SIMPLE_INDIR_ID_SHIFT 22
#define SSS_MSI_CLR_INDIR_RESEND_TIMER_CLR_MASK 0x1U
#define SSS_MSI_CLR_INDIR_INT_MSK_SET_MASK 0x1U
#define SSS_MSI_CLR_INDIR_INT_MSK_CLR_MASK 0x1U
#define SSS_MSI_CLR_INDIR_AUTO_MSK_SET_MASK 0x1U
#define SSS_MSI_CLR_INDIR_AUTO_MSK_CLR_MASK 0x1U
#define SSS_MSI_CLR_INDIR_SIMPLE_INDIR_ID_MASK 0x3FFU
#define SSS_SET_MSI_CLR_INDIR(val, member) \
(((val) & SSS_MSI_CLR_INDIR_##member##_MASK) << \
SSS_MSI_CLR_INDIR_##member##_SHIFT)
/* EQ registers */
#define SSS_AEQ_INDIR_ID_ADDR (SSS_CSR_CFG_FLAG + 0x210)
#define SSS_CEQ_INDIR_ID_ADDR (SSS_CSR_CFG_FLAG + 0x290)
#define SSS_EQ_INDIR_ID_ADDR(type) \
((type == SSS_AEQ) ? SSS_AEQ_INDIR_ID_ADDR : SSS_CEQ_INDIR_ID_ADDR)
#define SSS_AEQ_MTT_OFF_BASE_ADDR (SSS_CSR_CFG_FLAG + 0x240)
#define SSS_CEQ_MTT_OFF_BASE_ADDR (SSS_CSR_CFG_FLAG + 0x2C0)
#define SSS_CSR_EQ_PAGE_OFF_STRIDE 8
#define SSS_AEQ_PHY_HI_ADDR_REG(pg_num) \
(SSS_AEQ_MTT_OFF_BASE_ADDR + (pg_num) * SSS_CSR_EQ_PAGE_OFF_STRIDE)
#define SSS_AEQ_PHY_LO_ADDR_REG(pg_num) \
(SSS_AEQ_MTT_OFF_BASE_ADDR + (pg_num) * SSS_CSR_EQ_PAGE_OFF_STRIDE + 4)
#define SSS_CEQ_PHY_HI_ADDR_REG(pg_num) \
(SSS_CEQ_MTT_OFF_BASE_ADDR + (pg_num) * SSS_CSR_EQ_PAGE_OFF_STRIDE)
#define SSS_CEQ_PHY_LO_ADDR_REG(pg_num) \
(SSS_CEQ_MTT_OFF_BASE_ADDR + \
(pg_num) * SSS_CSR_EQ_PAGE_OFF_STRIDE + 4)
#define SSS_CSR_AEQ_CTRL_0_ADDR (SSS_CSR_CFG_FLAG + 0x200)
#define SSS_CSR_AEQ_CTRL_1_ADDR (SSS_CSR_CFG_FLAG + 0x204)
#define SSS_CSR_AEQ_CI_ADDR (SSS_CSR_CFG_FLAG + 0x208)
#define SSS_CSR_AEQ_PI_ADDR (SSS_CSR_CFG_FLAG + 0x20C)
#define SSS_CSR_AEQ_CI_SIMPLE_INDIR_ADDR (SSS_CSR_CFG_FLAG + 0x50)
#define SSS_CSR_CEQ_CTRL_0_ADDR (SSS_CSR_CFG_FLAG + 0x280)
#define SSS_CSR_CEQ_CTRL_1_ADDR (SSS_CSR_CFG_FLAG + 0x284)
#define SSS_CSR_CEQ_CI_ADDR (SSS_CSR_CFG_FLAG + 0x288)
#define SSS_CSR_CEQ_PI_ADDR (SSS_CSR_CFG_FLAG + 0x28c)
#define SSS_CSR_CEQ_CI_SIMPLE_INDIR_ADDR (SSS_CSR_CFG_FLAG + 0x54)
/* ADM MSG registers */
#define SSS_CSR_ADM_MSG_BASE (SSS_MGMT_FLAG + 0x2000)
#define SSS_CSR_ADM_MSG_STRIDE 0x80
#define SSS_CSR_ADM_MSG_HEAD_HI_ADDR(id) \
(SSS_CSR_ADM_MSG_BASE + 0x0 + (id) * SSS_CSR_ADM_MSG_STRIDE)
#define SSS_CSR_ADM_MSG_HEAD_LO_ADDR(id) \
(SSS_CSR_ADM_MSG_BASE + 0x4 + (id) * SSS_CSR_ADM_MSG_STRIDE)
#define SSS_CSR_ADM_MSG_STATE_HI_ADDR(id) \
(SSS_CSR_ADM_MSG_BASE + 0x8 + (id) * SSS_CSR_ADM_MSG_STRIDE)
#define SSS_CSR_ADM_MSG_STATE_LO_ADDR(id) \
(SSS_CSR_ADM_MSG_BASE + 0xC + (id) * SSS_CSR_ADM_MSG_STRIDE)
#define SSS_CSR_ADM_MSG_NUM_ELEM_ADDR(id) \
(SSS_CSR_ADM_MSG_BASE + 0x10 + (id) * SSS_CSR_ADM_MSG_STRIDE)
#define SSS_CSR_ADM_MSG_CTRL_ADDR(id) \
(SSS_CSR_ADM_MSG_BASE + 0x14 + (id) * SSS_CSR_ADM_MSG_STRIDE)
#define SSS_CSR_ADM_MSG_PI_ADDR(id) \
(SSS_CSR_ADM_MSG_BASE + 0x1C + (id) * SSS_CSR_ADM_MSG_STRIDE)
#define SSS_CSR_ADM_MSG_REQ_ADDR(id) \
(SSS_CSR_ADM_MSG_BASE + 0x20 + (id) * SSS_CSR_ADM_MSG_STRIDE)
#define SSS_CSR_ADM_MSG_STATE_0_ADDR(id) \
(SSS_CSR_ADM_MSG_BASE + 0x30 + (id) * SSS_CSR_ADM_MSG_STRIDE)
/* self test register */
#define SSS_MGMT_HEALTH_STATUS_ADDR (SSS_MGMT_FLAG + 0x983c)
#define SSS_CHIP_BASE_INFO_ADDR (SSS_MGMT_FLAG + 0xB02C)
#define SSS_CHIP_ERR_STATUS0_ADDR (SSS_MGMT_FLAG + 0xC0EC)
#define SSS_CHIP_ERR_STATUS1_ADDR (SSS_MGMT_FLAG + 0xC0F0)
#define SSS_ERR_INFO0_ADDR (SSS_MGMT_FLAG + 0xC0F4)
#define SSS_ERR_INFO1_ADDR (SSS_MGMT_FLAG + 0xC0F8)
#define SSS_ERR_INFO2_ADDR (SSS_MGMT_FLAG + 0xC0FC)
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_CTRLQ_INFO_H
#define SSS_CTRLQ_INFO_H
#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/completion.h>
#include <linux/pci.h>
#include "sss_hw_mbx_msg.h"
#include "sss_hw_wq.h"
#include "sss_hw_ctrlq.h"
#define SSS_DEFAULT_WQ_PAGE_SIZE 0x100000
#define SSS_HW_WQ_PAGE_SIZE 0x1000
#define SSS_MAX_WQ_PAGE_NUM 8
/* ctrlq ack type */
enum sss_ack_type {
	SSS_ACK_TYPE_CTRLQ,
	SSS_ACK_TYPE_SHARE_CQN,
	SSS_ACK_TYPE_APP_CQN,
	SSS_MOD_ACK_MAX = 15,
};

enum sss_ctrlq_type {
	SSS_CTRLQ_SYNC,
	SSS_CTRLQ_ASYNC,
	SSS_MAX_CTRLQ_TYPE = 4
};

enum sss_ctrlq_msg_type {
	SSS_MSG_TYPE_NONE,
	SSS_MSG_TYPE_SET_ARM,
	SSS_MSG_TYPE_DIRECT_RESP,
	SSS_MSG_TYPE_SGE_RESP,
	SSS_MSG_TYPE_ASYNC,
	SSS_MSG_TYPE_PSEUDO_TIMEOUT,
	SSS_MSG_TYPE_TIMEOUT,
	SSS_MSG_TYPE_FORCE_STOP,
	SSS_MSG_TYPE_MAX
};

struct sss_ctrlq_cmd_info {
	enum sss_ctrlq_msg_type msg_type;
	u16 channel;
	struct completion *done;
	int *err_code;
	int *cmpt_code;
	u64 *direct_resp;
	u64 msg_id;
	struct sss_ctrl_msg_buf *in_buf;
	struct sss_ctrl_msg_buf *out_buf;
};

struct sss_ctrlq {
	struct sss_wq wq;
	enum sss_ctrlq_type ctrlq_type;
	int wrapped;
	/* spinlock for sending ctrlq commands */
	spinlock_t ctrlq_lock;
	struct sss_ctrlq_ctxt_info ctrlq_ctxt;
	struct sss_ctrlq_cmd_info *cmd_info;
	void *hwdev;
};

struct sss_ctrlq_info {
	void *hwdev;
	struct pci_pool *msg_buf_pool;
	/* doorbell area */
	u8 __iomem *db_base;
	/* All ctrlqs' CLAs of a VF occupy one PAGE when the ctrlq wq is 1-level CLA */
	void *wq_block_vaddr;
	dma_addr_t wq_block_paddr;
	struct sss_ctrlq ctrlq[SSS_MAX_CTRLQ_TYPE];
	u32 state;
	u32 disable_flag;
	u8 lock_channel_en;
	u8 num;
	u8 rsvd[6];
	unsigned long channel_stop;
};
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_EQ_INFO_H
#define SSS_EQ_INFO_H
#include <linux/types.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>
#include <linux/interrupt.h>
#include "sss_hw_common.h"
#include "sss_hw_irq.h"
#include "sss_hw_svc_cap.h"
#define SSS_EQ_IRQ_NAME_LEN 64
enum sss_eq_type {
	SSS_AEQ,
	SSS_CEQ
};

typedef void (*sss_init_desc_handler_t)(void *eq);
typedef u32 (*sss_chip_init_attr_handler_t)(void *eq);

struct sss_eq {
	char *name;
	void *hwdev;
	enum sss_eq_type type;
	u32 page_size;
	u32 old_page_size;
	u32 len;
	u32 ci;
	u16 wrap;
	u16 qid;
	u16 entry_size;
	u16 page_num;
	u32 num_entry_per_pg;
	struct sss_irq_desc irq_desc;
	char irq_name[SSS_EQ_IRQ_NAME_LEN];
	struct sss_dma_addr_align *page_array;
	struct work_struct aeq_work;
	struct tasklet_struct ceq_tasklet;
	u64 hw_intr_jiffies;
	u64 sw_intr_jiffies;
	sss_init_desc_handler_t init_desc_handler;
	sss_chip_init_attr_handler_t init_attr_handler;
	irq_handler_t irq_handler;
};

struct sss_eq_cfg {
	enum sss_service_type type;
	int id;
	int free; /* 1 - allocated, 0 - freed */
};

struct sss_eq_info {
	struct sss_eq_cfg *eq;
	u8 ceq_num;
	u8 remain_ceq_num;
	/* mutex used for allocating EQs */
	struct mutex eq_mutex;
};
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_HWDEV_H
#define SSS_HWDEV_H
#include <linux/types.h>
#include <linux/mutex.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <linux/timer.h>
#include "sss_hw_common.h"
#include "sss_hw_svc_cap.h"
#include "sss_hw_mbx_msg.h"
#include "sss_hw_statistics.h"
#include "sss_hw_event.h"
#include "sss_hwif.h"
#include "sss_mgmt_info.h"
#include "sss_ctrlq_info.h"
#include "sss_aeq_info.h"
#include "sss_ceq_info.h"
#include "sss_mbx_info.h"
#include "sss_mgmt_channel.h"
enum sss_func_mode {
	SSS_FUNC_MOD_MIN,
	/* single host */
	SSS_FUNC_MOD_NORMAL_HOST = SSS_FUNC_MOD_MIN,
	/* multi host, bare-metal, sdi side */
	SSS_FUNC_MOD_MULTI_BM_MASTER,
	/* multi host, bare-metal, host side */
	SSS_FUNC_MOD_MULTI_BM_SLAVE,
	/* multi host, vm mode, sdi side */
	SSS_FUNC_MOD_MULTI_VM_MASTER,
	/* multi host, vm mode, host side */
	SSS_FUNC_MOD_MULTI_VM_SLAVE,
	SSS_FUNC_MOD_MAX = SSS_FUNC_MOD_MULTI_VM_SLAVE,
};

struct sss_page_addr {
	void *virt_addr;
	u64 phys_addr;
};

struct sss_mqm_addr_trans_tbl_info {
	u32 chunk_num;
	u32 search_gpa_num;
	u32 page_size;
	u32 page_num;
	struct sss_page_addr *brm_srch_page_addr;
};

struct sss_devlink {
	void *hwdev;
	u8 active_cfg_id; /* 1 ~ 8 */
	u8 switch_cfg_id; /* 1 ~ 8 */
};

struct sss_heartbeat {
	u8 pcie_link_down;
	u8 heartbeat_lost;
	u16 rsvd;
	u32 pcie_link_down_cnt;
	struct timer_list heartbeat_timer;
	struct work_struct lost_work;
};

struct sss_aeq_stat {
	u16 busy_cnt;
	u16 rsvd;
	u64 cur_recv_cnt;
	u64 last_recv_cnt;
};
struct sss_hwdev {
	void *adapter_hdl; /* pointer to sss_pci_adapter or NDIS_Adapter */
	void *pcidev_hdl; /* pointer to pcidev or Handler */
	/* pointer to pcidev->dev or Handler, for
	 * sdk_err() or dma_alloc()
	 */
	void *dev_hdl;
	void *chip_node;
	void *service_adapter[SSS_SERVICE_TYPE_MAX];
	u32 wq_page_size;
	int chip_present_flag;
	u8 poll; /* use polling mode or int mode */
	u8 rsvd[3];
	struct sss_hwif *hwif; /* include void __iomem *bar */
	struct sss_comm_global_attr glb_attr;
	u64 features[SSS_MAX_FEATURE_QWORD];
	struct sss_mgmt_info *mgmt_info;
	struct sss_ctrlq_info *ctrlq_info;
	struct sss_aeq_info *aeq_info;
	struct sss_ceq_info *ceq_info;
	struct sss_mbx *mbx; /* mbx */
	struct sss_msg_pf_to_mgmt *pf_to_mgmt; /* adm */
	struct sss_hw_stats hw_stats;
	u8 *chip_fault_stats;
	sss_event_handler_t event_handler;
	void *event_handler_data;
	struct sss_board_info board_info;
	struct delayed_work sync_time_task;
	struct delayed_work channel_detect_task;
	struct workqueue_struct *workq;
	struct sss_heartbeat heartbeat;
	ulong func_state;
	spinlock_t channel_lock; /* protects channel init and deinit */
	struct sss_devlink *devlink_dev;
	enum sss_func_mode func_mode;
	struct sss_aeq_stat aeq_stat;
};
#define SSS_TO_HWDEV(ptr) ((struct sss_hwdev *)(ptr)->hwdev)
#define SSS_TO_DEV(hwdev) (((struct sss_hwdev *)hwdev)->dev_hdl)
#define SSS_TO_HWIF(hwdev) (((struct sss_hwdev *)hwdev)->hwif)
#define SSS_TO_MGMT_INFO(hwdev) (((struct sss_hwdev *)hwdev)->mgmt_info)
#define SSS_TO_AEQ_INFO(hwdev) (((struct sss_hwdev *)hwdev)->aeq_info)
#define SSS_TO_CEQ_INFO(hwdev) (((struct sss_hwdev *)hwdev)->ceq_info)
#define SSS_TO_CTRLQ_INFO(hwdev) (((struct sss_hwdev *)hwdev)->ctrlq_info)
#define SSS_TO_IRQ_INFO(hwdev) (&((struct sss_hwdev *)hwdev)->mgmt_info->irq_info)
#define SSS_TO_SVC_CAP(hwdev) (&(((struct sss_hwdev *)hwdev)->mgmt_info->svc_cap))
#define SSS_TO_NIC_CAP(hwdev) (&(((struct sss_hwdev *)hwdev)->mgmt_info->svc_cap.nic_cap))
#define SSS_TO_MAX_SQ_NUM(hwdev) \
(((struct sss_hwdev *)hwdev)->mgmt_info->svc_cap.nic_cap.max_sq)
#define SSS_TO_PHY_PORT_ID(hwdev) (((struct sss_hwdev *)hwdev)->mgmt_info->svc_cap.port_id)
#define SSS_TO_MAX_VF_NUM(hwdev) (((struct sss_hwdev *)hwdev)->mgmt_info->svc_cap.max_vf)
#define SSS_TO_FUNC_COS_BITMAP(hwdev) \
(((struct sss_hwdev *)hwdev)->mgmt_info->svc_cap.cos_valid_bitmap)
#define SSS_TO_PORT_COS_BITMAP(hwdev) \
(((struct sss_hwdev *)hwdev)->mgmt_info->svc_cap.port_cos_valid_bitmap)
enum sss_servic_bit_define {
	SSS_SERVICE_BIT_NIC = 0,
	SSS_SERVICE_BIT_ROCE = 1,
	SSS_SERVICE_BIT_VBS = 2,
	SSS_SERVICE_BIT_TOE = 3,
	SSS_SERVICE_BIT_IPSEC = 4,
	SSS_SERVICE_BIT_FC = 5,
	SSS_SERVICE_BIT_VIRTIO = 6,
	SSS_SERVICE_BIT_OVS = 7,
	SSS_SERVICE_BIT_NVME = 8,
	SSS_SERVICE_BIT_ROCEAA = 9,
	SSS_SERVICE_BIT_CURRENET = 10,
	SSS_SERVICE_BIT_PPA = 11,
	SSS_SERVICE_BIT_MIGRATE = 12,
	SSS_MAX_SERVICE_BIT
};
#define SSS_CFG_SERVICE_MASK_NIC (0x1 << SSS_SERVICE_BIT_NIC)
#define SSS_CFG_SERVICE_MASK_ROCE (0x1 << SSS_SERVICE_BIT_ROCE)
#define SSS_CFG_SERVICE_MASK_VBS (0x1 << SSS_SERVICE_BIT_VBS)
#define SSS_CFG_SERVICE_MASK_TOE (0x1 << SSS_SERVICE_BIT_TOE)
#define SSS_CFG_SERVICE_MASK_IPSEC (0x1 << SSS_SERVICE_BIT_IPSEC)
#define SSS_CFG_SERVICE_MASK_FC (0x1 << SSS_SERVICE_BIT_FC)
#define SSS_CFG_SERVICE_MASK_VIRTIO (0x1 << SSS_SERVICE_BIT_VIRTIO)
#define SSS_CFG_SERVICE_MASK_OVS (0x1 << SSS_SERVICE_BIT_OVS)
#define SSS_CFG_SERVICE_MASK_NVME (0x1 << SSS_SERVICE_BIT_NVME)
#define SSS_CFG_SERVICE_MASK_ROCEAA (0x1 << SSS_SERVICE_BIT_ROCEAA)
#define SSS_CFG_SERVICE_MASK_CURRENET (0x1 << SSS_SERVICE_BIT_CURRENET)
#define SSS_CFG_SERVICE_MASK_PPA (0x1 << SSS_SERVICE_BIT_PPA)
#define SSS_CFG_SERVICE_MASK_MIGRATE (0x1 << SSS_SERVICE_BIT_MIGRATE)
#define SSS_CFG_SERVICE_RDMA_EN SSS_CFG_SERVICE_MASK_ROCE
#define SSS_IS_NIC_TYPE(dev) \
(((u32)(dev)->mgmt_info->svc_cap.chip_svc_type) & SSS_CFG_SERVICE_MASK_NIC)
#define SSS_IS_ROCE_TYPE(dev) \
(((u32)(dev)->mgmt_info->svc_cap.chip_svc_type) & SSS_CFG_SERVICE_MASK_ROCE)
#define SSS_IS_VBS_TYPE(dev) \
(((u32)(dev)->mgmt_info->svc_cap.chip_svc_type) & SSS_CFG_SERVICE_MASK_VBS)
#define SSS_IS_TOE_TYPE(dev) \
(((u32)(dev)->mgmt_info->svc_cap.chip_svc_type) & SSS_CFG_SERVICE_MASK_TOE)
#define SSS_IS_IPSEC_TYPE(dev) \
(((u32)(dev)->mgmt_info->svc_cap.chip_svc_type) & SSS_CFG_SERVICE_MASK_IPSEC)
#define SSS_IS_FC_TYPE(dev) \
(((u32)(dev)->mgmt_info->svc_cap.chip_svc_type) & SSS_CFG_SERVICE_MASK_FC)
#define SSS_IS_OVS_TYPE(dev) \
(((u32)(dev)->mgmt_info->svc_cap.chip_svc_type) & SSS_CFG_SERVICE_MASK_OVS)
#define SSS_IS_RDMA_TYPE(dev) \
(((u32)(dev)->mgmt_info->svc_cap.chip_svc_type) & SSS_CFG_SERVICE_RDMA_EN)
#define SSS_IS_RDMA_ENABLE(dev) \
((dev)->mgmt_info->svc_cap.sf_svc_attr.rdma_en)
#define SSS_IS_PPA_TYPE(dev) \
(((u32)(dev)->mgmt_info->svc_cap.chip_svc_type) & SSS_CFG_SERVICE_MASK_PPA)
#define SSS_IS_MIGR_TYPE(dev) \
(((u32)(dev)->mgmt_info->svc_cap.chip_svc_type) & SSS_CFG_SERVICE_MASK_MIGRATE)
#define SSS_MAX_HOST_NUM(hwdev) ((hwdev)->glb_attr.max_host_num)
#define SSS_MAX_PF_NUM(hwdev) ((hwdev)->glb_attr.max_pf_num)
#define SSS_MGMT_CPU_NODE_ID(hwdev) \
((hwdev)->glb_attr.mgmt_host_node_id)
#define SSS_GET_FUNC_TYPE(hwdev) ((hwdev)->hwif->attr.func_type)
#define SSS_IS_PF(dev) (SSS_GET_FUNC_TYPE(dev) == SSS_FUNC_TYPE_PF)
#define SSS_IS_VF(dev) (SSS_GET_FUNC_TYPE(dev) == SSS_FUNC_TYPE_VF)
#define SSS_IS_PPF(dev) \
(SSS_GET_FUNC_TYPE(dev) == SSS_FUNC_TYPE_PPF)
#define SSS_IS_BMGW_MASTER_HOST(hwdev) \
((hwdev)->func_mode == SSS_FUNC_MOD_MULTI_BM_MASTER)
#define SSS_IS_BMGW_SLAVE_HOST(hwdev) \
((hwdev)->func_mode == SSS_FUNC_MOD_MULTI_BM_SLAVE)
#define SSS_IS_VM_MASTER_HOST(hwdev) \
((hwdev)->func_mode == SSS_FUNC_MOD_MULTI_VM_MASTER)
#define SSS_IS_VM_SLAVE_HOST(hwdev) \
((hwdev)->func_mode == SSS_FUNC_MOD_MULTI_VM_SLAVE)
#define SSS_IS_MASTER_HOST(hwdev) \
(SSS_IS_BMGW_MASTER_HOST(hwdev) || SSS_IS_VM_MASTER_HOST(hwdev))
#define SSS_IS_SLAVE_HOST(hwdev) \
(SSS_IS_BMGW_SLAVE_HOST(hwdev) || SSS_IS_VM_SLAVE_HOST(hwdev))
#define SSS_IS_MULTI_HOST(hwdev) \
(SSS_IS_BMGW_MASTER_HOST(hwdev) || SSS_IS_BMGW_SLAVE_HOST(hwdev) || \
SSS_IS_VM_MASTER_HOST(hwdev) || SSS_IS_VM_SLAVE_HOST(hwdev))
#define SSS_SPU_HOST_ID 4
#define SSS_SUPPORT_ADM_MSG(hwdev) ((hwdev)->features[0] & SSS_COMM_F_ADM)
#define SSS_SUPPORT_MBX_SEGMENT(hwdev) \
(SSS_GET_HWIF_PCI_INTF_ID(hwdev->hwif) == SSS_SPU_HOST_ID)
#define SSS_SUPPORT_CTRLQ_NUM(hwdev) \
((hwdev)->features[0] & SSS_COMM_F_CTRLQ_NUM)
#define SSS_SUPPORT_VIRTIO_VQ_SIZE(hwdev) \
((hwdev)->features[0] & SSS_COMM_F_VIRTIO_VQ_SIZE)
enum {
	SSS_CFG_FREE = 0,
	SSS_CFG_BUSY = 1
};
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_HWIF_H
#define SSS_HWIF_H
#include <linux/types.h>
#include <linux/spinlock.h>
struct sss_db_pool {
	unsigned long *bitmap;
	u32 bit_size;
	/* spinlock for allocating doorbell area */
	spinlock_t id_lock;
};

struct sss_func_attr {
	enum sss_func_type func_type;
	u16 func_id;
	u8 pf_id;
	u8 pci_intf_id;
	u16 global_vf_off;
	u8 mpf_id;
	u8 ppf_id;
	u16 irq_num; /* max: 2 ^ 15 */
	u8 aeq_num; /* max: 2 ^ 3 */
	u8 ceq_num; /* max: 2 ^ 7 */
	u16 sq_num; /* max: 2 ^ 8 */
	u8 dma_attr_num; /* max: 2 ^ 6 */
	u8 msix_flex_en;
};

struct sss_hwif {
	u8 __iomem *cfg_reg_base;
	u8 __iomem *mgmt_reg_base;
	u64 db_base_paddr;
	u64 db_dwqe_len;
	u8 __iomem *db_base_vaddr;
	void *pdev;
	struct sss_db_pool db_pool;
	struct sss_func_attr attr;
};
#define SSS_GET_HWIF_AEQ_NUM(hwif) ((hwif)->attr.aeq_num)
#define SSS_GET_HWIF_CEQ_NUM(hwif) ((hwif)->attr.ceq_num)
#define SSS_GET_HWIF_IRQ_NUM(hwif) ((hwif)->attr.irq_num)
#define SSS_GET_HWIF_GLOBAL_ID(hwif) ((hwif)->attr.func_id)
#define SSS_GET_HWIF_PF_ID(hwif) ((hwif)->attr.pf_id)
#define SSS_GET_HWIF_GLOBAL_VF_OFFSET(hwif) ((hwif)->attr.global_vf_off)
#define SSS_GET_HWIF_PPF_ID(hwif) ((hwif)->attr.ppf_id)
#define SSS_GET_HWIF_MPF_ID(hwif) ((hwif)->attr.mpf_id)
#define SSS_GET_HWIF_PCI_INTF_ID(hwif) ((hwif)->attr.pci_intf_id)
#define SSS_GET_HWIF_FUNC_TYPE(hwif) ((hwif)->attr.func_type)
#define SSS_GET_HWIF_MSIX_EN(hwif) ((hwif)->attr.msix_flex_en)
#define SSS_SET_HWIF_AEQ_NUM(hwif, val) \
((hwif)->attr.aeq_num = (val))
#define SSS_SET_HWIF_CEQ_NUM(hwif, val) \
((hwif)->attr.ceq_num = (val))
#define SSS_SET_HWIF_IRQ_NUM(hwif, val) \
((hwif)->attr.irq_num = (val))
#define SSS_SET_HWIF_GLOBAL_ID(hwif, val) \
((hwif)->attr.func_id = (val))
#define SSS_SET_HWIF_PF_ID(hwif, val) \
((hwif)->attr.pf_id = (val))
#define SSS_SET_HWIF_GLOBAL_VF_OFFSET(hwif, val) \
((hwif)->attr.global_vf_off = (val))
#define SSS_SET_HWIF_PPF_ID(hwif, val) \
((hwif)->attr.ppf_id = (val))
#define SSS_SET_HWIF_MPF_ID(hwif, val) \
((hwif)->attr.mpf_id = (val))
#define SSS_SET_HWIF_PCI_INTF_ID(hwif, val) \
((hwif)->attr.pci_intf_id = (val))
#define SSS_SET_HWIF_FUNC_TYPE(hwif, val) \
((hwif)->attr.func_type = (val))
#define SSS_SET_HWIF_DMA_ATTR_NUM(hwif, val) \
((hwif)->attr.dma_attr_num = (val))
#define SSS_SET_HWIF_MSIX_EN(hwif, val) \
((hwif)->attr.msix_flex_en = (val))
#define SSS_SET_HWIF_SQ_NUM(hwif, val) \
((hwif)->attr.sq_num = (val))
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_IRQ_INFO_H
#define SSS_IRQ_INFO_H
#include <linux/types.h>
#include <linux/mutex.h>
#include "sss_hw_svc_cap.h"
#include "sss_hw_irq.h"
struct sss_irq {
enum sss_service_type type;
int busy; /* 1 - allocated, 0 - freed */
struct sss_irq_desc desc;
};
struct sss_irq_info {
struct sss_irq *irq;
u16 total_num;
u16 free_num;
u16 max_num; /* device max irq number */
struct mutex irq_mutex; /* mutex is used to allocate eq */
};
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_MBX_INFO_H
#define SSS_MBX_INFO_H
#include <linux/types.h>
#include <linux/atomic.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>
#include <linux/spinlock.h>
#include "sss_hw_mbx.h"
enum sss_mbx_event_state {
SSS_EVENT_START = 0,
SSS_EVENT_FAIL,
SSS_EVENT_SUCCESS,
SSS_EVENT_TIMEOUT,
SSS_EVENT_END,
};
struct sss_mbx_send {
u8 *data;
u64 *wb_state; /* write back status */
void *wb_vaddr;
dma_addr_t wb_paddr;
};
struct sss_mbx_dma_queue {
void *dma_buff_vaddr;
dma_addr_t dma_buff_paddr;
u16 depth;
u16 pi;
u16 ci;
};
struct sss_mbx_msg_info {
u8 msg_id;
u8 state; /* can only use 1 bit */
};
struct sss_msg_desc {
void *msg;
u16 msg_len;
u8 seq_id;
u8 mod;
u16 cmd;
struct sss_mbx_msg_info msg_info;
};
struct sss_msg_buffer {
struct sss_msg_desc resp_msg;
struct sss_msg_desc recv_msg;
atomic_t recv_msg_cnt;
};
struct sss_mbx {
void *hwdev;
u8 lock_channel_en;
u8 rsvd0[3];
unsigned long channel_stop;
/* lock for send mbx message and ack message */
struct mutex mbx_send_lock;
/* lock for send mbx message */
struct mutex msg_send_lock;
struct sss_mbx_send mbx_send;
struct sss_mbx_dma_queue sync_msg_queue;
struct sss_mbx_dma_queue async_msg_queue;
struct workqueue_struct *workq;
struct sss_msg_buffer mgmt_msg; /* driver and MGMT CPU */
struct sss_msg_buffer *host_msg; /* PPF message between hosts */
struct sss_msg_buffer *func_msg; /* PF to VF or VF to PF */
u16 num_func_msg;
u16 cur_msg_channel;
u8 support_h2h_msg; /* host to host */
u8 rsvd1[3];
/* vf receive pf/ppf callback */
sss_vf_mbx_handler_t vf_mbx_cb[SSS_MOD_TYPE_MAX];
void *vf_mbx_data[SSS_MOD_TYPE_MAX];
/* pf/ppf receive vf callback */
sss_pf_mbx_handler_t pf_mbx_cb[SSS_MOD_TYPE_MAX];
void *pf_mbx_data[SSS_MOD_TYPE_MAX];
/* ppf receive pf/ppf callback */
sss_ppf_mbx_handler_t ppf_mbx_cb[SSS_MOD_TYPE_MAX];
void *ppf_mbx_data[SSS_MOD_TYPE_MAX];
/* pf receive ppf callback */
sss_pf_from_ppf_mbx_handler_t pf_recv_ppf_mbx_cb[SSS_MOD_TYPE_MAX];
void *pf_recv_ppf_mbx_data[SSS_MOD_TYPE_MAX];
unsigned long ppf_to_pf_mbx_cb_state[SSS_MOD_TYPE_MAX];
unsigned long ppf_mbx_cb_state[SSS_MOD_TYPE_MAX];
unsigned long pf_mbx_cb_state[SSS_MOD_TYPE_MAX];
unsigned long vf_mbx_cb_state[SSS_MOD_TYPE_MAX];
enum sss_mbx_event_state event_flag;
/* lock for mbx event flag */
spinlock_t mbx_lock;
u8 send_msg_id;
u8 rsvd2[3];
};
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_MGMT_CHANNEL_H
#define SSS_MGMT_CHANNEL_H
#include <linux/types.h>
#include <linux/semaphore.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <linux/completion.h>
#include "sss_hw_mbx.h"
#include "sss_hw_mgmt.h"
#include "sss_adm_info.h"
/* message header define */
#define SSS_MSG_HEADER_SRC_GLB_FUNC_ID_SHIFT 0
#define SSS_MSG_HEADER_STATUS_SHIFT 13
#define SSS_MSG_HEADER_SOURCE_SHIFT 15
#define SSS_MSG_HEADER_AEQ_ID_SHIFT 16
#define SSS_MSG_HEADER_MSG_ID_SHIFT 18
#define SSS_MSG_HEADER_CMD_SHIFT 22
#define SSS_MSG_HEADER_MSG_LEN_SHIFT 32
#define SSS_MSG_HEADER_MODULE_SHIFT 43
#define SSS_MSG_HEADER_SEG_LEN_SHIFT 48
#define SSS_MSG_HEADER_NO_ACK_SHIFT 54
#define SSS_MSG_HEADER_DATA_TYPE_SHIFT 55
#define SSS_MSG_HEADER_SEQID_SHIFT 56
#define SSS_MSG_HEADER_LAST_SHIFT 62
#define SSS_MSG_HEADER_DIRECTION_SHIFT 63
#define SSS_MSG_HEADER_SRC_GLB_FUNC_ID_MASK 0x1FFF
#define SSS_MSG_HEADER_STATUS_MASK 0x1
#define SSS_MSG_HEADER_SOURCE_MASK 0x1
#define SSS_MSG_HEADER_AEQ_ID_MASK 0x3
#define SSS_MSG_HEADER_MSG_ID_MASK 0xF
#define SSS_MSG_HEADER_CMD_MASK 0x3FF
#define SSS_MSG_HEADER_MSG_LEN_MASK 0x7FF
#define SSS_MSG_HEADER_MODULE_MASK 0x1F
#define SSS_MSG_HEADER_SEG_LEN_MASK 0x3F
#define SSS_MSG_HEADER_NO_ACK_MASK 0x1
#define SSS_MSG_HEADER_DATA_TYPE_MASK 0x1
#define SSS_MSG_HEADER_SEQID_MASK 0x3F
#define SSS_MSG_HEADER_LAST_MASK 0x1
#define SSS_MSG_HEADER_DIRECTION_MASK 0x1
#define SSS_GET_MSG_HEADER(val, field) \
(((val) >> SSS_MSG_HEADER_##field##_SHIFT) & \
SSS_MSG_HEADER_##field##_MASK)
#define SSS_SET_MSG_HEADER(val, field) \
((u64)(((u64)(val)) & SSS_MSG_HEADER_##field##_MASK) << \
SSS_MSG_HEADER_##field##_SHIFT)
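The SHIFT/MASK pairs above pack every management-message header field into a single 64-bit word. A standalone sketch of the same encode/decode pattern follows; the DEMO_* field names and widths are illustrative, not the driver's, though MSG_ID/CMD mirror the real positions:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative fields mirroring SSS_MSG_HEADER_MSG_ID / _CMD above. */
#define DEMO_MSG_ID_SHIFT 18
#define DEMO_MSG_ID_MASK  0xFULL
#define DEMO_CMD_SHIFT    22
#define DEMO_CMD_MASK     0x3FFULL

/* Same shape as SSS_SET_MSG_HEADER / SSS_GET_MSG_HEADER. */
#define DEMO_SET(val, field) \
	(((uint64_t)(val) & DEMO_##field##_MASK) << DEMO_##field##_SHIFT)
#define DEMO_GET(hdr, field) \
	(((hdr) >> DEMO_##field##_SHIFT) & DEMO_##field##_MASK)

static uint64_t demo_build_header(unsigned int msg_id, unsigned int cmd)
{
	/* OR the masked, shifted fields into one 64-bit header word */
	return DEMO_SET(msg_id, MSG_ID) | DEMO_SET(cmd, CMD);
}
```

Note that the mask is applied before the shift on encode, so an oversized value is silently truncated to the field width rather than corrupting neighboring fields.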
enum sss_msg_ack_type {
SSS_MSG_ACK,
SSS_MSG_NO_ACK,
};
enum sss_data_type {
SSS_INLINE_DATA = 0,
SSS_DMA_DATA = 1,
};
enum sss_msg_seg_type {
SSS_NOT_LAST_SEG = 0,
SSS_LAST_SEG = 1,
};
enum sss_msg_direction_type {
SSS_DIRECT_SEND_MSG = 0,
SSS_RESP_MSG = 1,
};
enum sss_msg_src_type {
SSS_MSG_SRC_MGMT = 0,
SSS_MSG_SRC_MBX = 1,
};
enum sss_mgmt_msg_cb_t_state {
SSS_CALLBACK_REG = 0,
SSS_CALLBACK_RUNNING,
};
enum sss_pf_to_mgmt_event_state {
SSS_ADM_EVENT_UNINIT = 0,
SSS_ADM_EVENT_START,
SSS_ADM_EVENT_SUCCESS,
SSS_ADM_EVENT_FAIL,
SSS_ADM_EVENT_TIMEOUT,
SSS_ADM_EVENT_END,
};
struct sss_recv_msg {
void *buf;
u16 buf_len;
u16 cmd;
u16 msg_id;
u8 seq_id;
u8 no_ack;
enum sss_mod_type mod;
struct completion done;
};
struct sss_msg_pf_to_mgmt {
void *hwdev;
struct semaphore sync_lock;
struct workqueue_struct *workq;
void *sync_buf;
void *ack_buf;
struct sss_recv_msg recv_msg;
struct sss_recv_msg recv_resp_msg;
u16 rsvd;
u16 sync_msg_id;
struct sss_adm_msg adm_msg;
sss_mgmt_msg_handler_t recv_handler[SSS_MOD_TYPE_HW_MAX];
void *recv_data[SSS_MOD_TYPE_HW_MAX];
unsigned long recv_handler_state[SSS_MOD_TYPE_HW_MAX];
/* lock when sending msg */
spinlock_t sync_event_lock;
enum sss_pf_to_mgmt_event_state event_state;
};
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_MGMT_INFO_H
#define SSS_MGMT_INFO_H
#include <linux/types.h>
#include "sss_hw_svc_cap.h"
#include "sss_eq_info.h"
#include "sss_irq_info.h"
struct sss_dev_sf_svc_attr {
u8 rdma_en;
u8 rsvd[3];
};
enum sss_intr_type {
SSS_INTR_TYPE_MSIX,
SSS_INTR_TYPE_MSI,
SSS_INTR_TYPE_INT,
SSS_INTR_TYPE_NONE,
	/* PXE and OVS require single-threaded processing;
	 * synchronous messages must use the poll-wait interface.
	 */
};
/* device service capability */
struct sss_service_cap {
struct sss_dev_sf_svc_attr sf_svc_attr;
u16 svc_type; /* user input service type */
u16 chip_svc_type; /* HW supported service type, reference to sss_servic_bit_define */
u8 host_id;
u8 ep_id;
u8 er_id; /* PF/VF's ER */
u8 port_id; /* PF/VF's physical port */
/* Host global resources */
u16 host_total_function;
u8 pf_num;
u8 pf_id_start;
u16 vf_num; /* max numbers of vf in current host */
u16 vf_id_start;
u8 host_oq_id_mask_val;
u8 host_valid_bitmap;
u8 master_host_id;
u8 srv_multi_host_mode;
u8 timer_pf_num;
u8 timer_pf_id_start;
u16 timer_vf_num;
u16 timer_vf_id_start;
u8 flexq_en;
u8 resvd;
u8 cos_valid_bitmap;
u8 port_cos_valid_bitmap;
u16 max_vf; /* max VF number that PF supported */
u16 pseudo_vf_start_id;
u16 pseudo_vf_num;
u32 pseudo_vf_max_pctx;
u16 pseudo_vf_bfilter_start_addr;
u16 pseudo_vf_bfilter_len;
u16 pseudo_vf_cfg_num;
u16 virtio_vq_size;
/* DO NOT get interrupt_type from firmware */
enum sss_intr_type intr_type;
u8 sf_en; /* stateful business status */
u8 timer_en; /* 0:disable, 1:enable */
u8 bloomfilter_en; /* 0:disable, 1:enable */
u8 lb_mode;
u8 smf_pg;
u8 rsvd[3];
u32 max_connect_num; /* PF/VF maximum connection number(1M) */
/* The maximum connections which can be stick to cache memory, max 1K */
u16 max_stick2cache_num;
/* Starting address in cache memory for bloom filter, 64Bytes aligned */
u16 bfilter_start_addr;
/* Length for bloom filter, aligned on 64Bytes. The size is length*64B.
* Bloom filter memory size + 1 must be power of 2.
* The maximum memory size of bloom filter is 4M
*/
u16 bfilter_len;
/* The size of hash bucket tables, align on 64 entries.
* Be used to AND (&) the hash value. Bucket Size +1 must be power of 2.
* The maximum number of hash bucket is 4M
*/
u16 hash_bucket_num;
struct sss_nic_service_cap nic_cap; /* NIC capability */
struct sss_rdma_service_cap rdma_cap; /* RDMA capability */
struct sss_fc_service_cap fc_cap; /* FC capability */
struct sss_toe_service_cap toe_cap; /* ToE capability */
struct sss_ovs_service_cap ovs_cap; /* OVS capability */
struct sss_ipsec_service_cap ipsec_cap; /* IPsec capability */
struct sss_ppa_service_cap ppa_cap; /* PPA capability */
struct sss_vbs_service_cap vbs_cap; /* VBS capability */
};
struct sss_mgmt_info {
void *hwdev;
struct sss_service_cap svc_cap;
struct sss_eq_info eq_info; /* CEQ */
struct sss_irq_info irq_info; /* IRQ */
u32 func_seq_num; /* temporary */
};
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_SRIOV_INFO_H
#define SSS_SRIOV_INFO_H
#include <linux/types.h>
enum sss_sriov_state {
SSS_SRIOV_DISABLE,
SSS_SRIOV_ENABLE,
SSS_SRIOV_PRESENT,
};
struct sss_sriov_info {
u8 enabled;
u8 rsvd[3];
unsigned int vf_num;
unsigned long state;
};
#endif
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#define pr_fmt(fmt) KBUILD_MODNAME ": [BASE]" fmt
#include <net/addrconf.h>
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/module.h>
#include <linux/io-mapping.h>
#include <linux/interrupt.h>
#include <linux/time.h>
#include <linux/timex.h>
#include <linux/rtc.h>
#include <linux/debugfs.h>
#include "sss_kernel.h"
#include "sss_hw.h"
#include "sss_pci_sriov.h"
#include "sss_pci_id_tbl.h"
#include "sss_adapter_mgmt.h"
#include "sss_pci_global.h"
#define SSS_ADAPTER_CNT_TIMEOUT 10000
#define SSS_WAIT_ADAPTER_USLEEP_MIN 9900
#define SSS_WAIT_ADAPTER_USLEEP_MAX 10000
#define SSS_CHIP_NODE_HOLD_TIMEOUT (10 * 60 * 1000)
#define SSS_WAIT_CHIP_NODE_CHANGED (10 * 60 * 1000)
#define SSS_PRINT_TIMEOUT_INTERVAL 10000
#define SSS_MICRO_SECOND 1000
#define SSS_CHIP_NODE_USLEEP_MIN 900
#define SSS_CHIP_NODE_USLEEP_MAX 1000
#define SSS_CARD_CNT_MAX 64
#define SSS_IS_SPU_DEV(pdev) ((pdev)->device == SSS_DEV_ID_SPU)
enum sss_node_state {
SSS_NODE_CHANGE = BIT(0),
};
struct sss_chip_node_lock {
/* lock for chip list */
struct mutex chip_mutex;
unsigned long state;
atomic_t ref_cnt;
};
static struct sss_chip_node_lock g_chip_node_lock;
static unsigned long g_index_bit_map;
LIST_HEAD(g_chip_list);
struct list_head *sss_get_chip_list(void)
{
return &g_chip_list;
}
static void sss_chip_node_lock(void)
{
unsigned long end;
bool timeout = true;
u32 loop_cnt;
mutex_lock(&g_chip_node_lock.chip_mutex);
loop_cnt = 0;
end = jiffies + msecs_to_jiffies(SSS_WAIT_CHIP_NODE_CHANGED);
do {
if (!test_and_set_bit(SSS_NODE_CHANGE, &g_chip_node_lock.state)) {
timeout = false;
break;
}
loop_cnt++;
if (loop_cnt % SSS_PRINT_TIMEOUT_INTERVAL == 0)
pr_warn("Wait for adapter change complete for %us\n",
loop_cnt / SSS_MICRO_SECOND);
/* sleep is about 1ms, so use usleep_range for better precision */
usleep_range(SSS_CHIP_NODE_USLEEP_MIN,
SSS_CHIP_NODE_USLEEP_MAX);
} while (time_before(jiffies, end));
if (timeout && test_and_set_bit(SSS_NODE_CHANGE, &g_chip_node_lock.state))
pr_warn("Wait for adapter change complete timeout when trying to get adapter lock\n");
loop_cnt = 0;
timeout = true;
end = jiffies + msecs_to_jiffies(SSS_WAIT_CHIP_NODE_CHANGED);
do {
if (!atomic_read(&g_chip_node_lock.ref_cnt)) {
timeout = false;
break;
}
loop_cnt++;
if (loop_cnt % SSS_PRINT_TIMEOUT_INTERVAL == 0)
pr_warn("Wait for adapter unused for %us, reference count: %d\n",
loop_cnt / SSS_MICRO_SECOND,
atomic_read(&g_chip_node_lock.ref_cnt));
usleep_range(SSS_CHIP_NODE_USLEEP_MIN,
SSS_CHIP_NODE_USLEEP_MAX);
} while (time_before(jiffies, end));
if (timeout && atomic_read(&g_chip_node_lock.ref_cnt))
pr_warn("Wait for adapter unused timeout\n");
mutex_unlock(&g_chip_node_lock.chip_mutex);
}
static void sss_chip_node_unlock(void)
{
clear_bit(SSS_NODE_CHANGE, &g_chip_node_lock.state);
}
void sss_hold_chip_node(void)
{
unsigned long end;
u32 loop_cnt = 0;
mutex_lock(&g_chip_node_lock.chip_mutex);
end = jiffies + msecs_to_jiffies(SSS_CHIP_NODE_HOLD_TIMEOUT);
do {
if (!test_bit(SSS_NODE_CHANGE, &g_chip_node_lock.state))
break;
loop_cnt++;
if (loop_cnt % SSS_PRINT_TIMEOUT_INTERVAL == 0)
pr_warn("Wait adapter change complete for %us\n",
loop_cnt / SSS_MICRO_SECOND);
/* sleep is about 1ms, so use usleep_range for better precision */
usleep_range(SSS_CHIP_NODE_USLEEP_MIN,
SSS_CHIP_NODE_USLEEP_MAX);
} while (time_before(jiffies, end));
if (test_bit(SSS_NODE_CHANGE, &g_chip_node_lock.state))
pr_warn("Wait adapter change complete timeout when trying to hold adapter\n");
atomic_inc(&g_chip_node_lock.ref_cnt);
mutex_unlock(&g_chip_node_lock.chip_mutex);
}
void sss_put_chip_node(void)
{
atomic_dec(&g_chip_node_lock.ref_cnt);
}
void sss_pre_init(void)
{
mutex_init(&g_chip_node_lock.chip_mutex);
atomic_set(&g_chip_node_lock.ref_cnt, 0);
sss_init_uld_lock();
}
struct sss_pci_adapter *sss_get_adapter_by_pcidev(struct pci_dev *pdev)
{
	/* check pdev before dereferencing it */
	if (!pdev)
		return NULL;
	return pci_get_drvdata(pdev);
}
static bool sss_chip_node_exist(struct sss_pci_adapter *adapter,
unsigned char bus_id)
{
struct sss_card_node *chip_node = NULL;
sss_chip_node_lock();
if (bus_id != 0) {
list_for_each_entry(chip_node, &g_chip_list, node) {
if (chip_node->bus_id == bus_id) {
adapter->chip_node = chip_node;
sss_chip_node_unlock();
return true;
}
}
	} else if (SSS_IS_VF_DEV(adapter->pcidev) ||
		   SSS_IS_SPU_DEV(adapter->pcidev)) {
		/* VF/SPU with no parent bus id: attach to the first chip node */
		list_for_each_entry(chip_node, &g_chip_list, node) {
			adapter->chip_node = chip_node;
			sss_chip_node_unlock();
			return true;
		}
	}
sss_chip_node_unlock();
return false;
}
static unsigned char sss_get_pci_bus_id(struct sss_pci_adapter *adapter)
{
struct pci_dev *pf_pdev = NULL;
unsigned char bus_id = 0;
if (!pci_is_root_bus(adapter->pcidev->bus))
bus_id = adapter->pcidev->bus->number;
if (bus_id == 0)
return bus_id;
if (adapter->pcidev->is_virtfn) {
pf_pdev = adapter->pcidev->physfn;
bus_id = pf_pdev->bus->number;
}
return bus_id;
}
static int sss_alloc_card_id(void)
{
	unsigned char i;
	sss_chip_node_lock();
	for (i = 0; i < SSS_CARD_CNT_MAX; i++) {
		if (test_and_set_bit(i, &g_index_bit_map) == 0) {
			sss_chip_node_unlock();
			return i;
		}
	}
	sss_chip_node_unlock();
	return -1;
}
int sss_alloc_chip_node(struct sss_pci_adapter *adapter)
{
	struct sss_card_node *chip_node = NULL;
	int card_id;
	unsigned char bus_id;
	bus_id = sss_get_pci_bus_id(adapter);
	if (sss_chip_node_exist(adapter, bus_id))
		return 0;
	chip_node = kzalloc(sizeof(*chip_node), GFP_KERNEL);
	if (!chip_node)
		return -ENOMEM;
	chip_node->bus_id = bus_id;
	/* allocate the card id first; it names the chip node */
	card_id = sss_alloc_card_id();
	if (card_id < 0) {
		kfree(chip_node);
		sdk_err(&adapter->pcidev->dev, "Fail to alloc card id: chip node count exceeds %d\n",
			SSS_CARD_CNT_MAX);
		return -EINVAL;
	}
	if (snprintf(chip_node->chip_name, IFNAMSIZ, "%s%d", SSS_CHIP_NAME, card_id) < 0) {
		clear_bit(card_id, &g_index_bit_map);
		kfree(chip_node);
		return -EINVAL;
	}
INIT_LIST_HEAD(&chip_node->func_list);
sss_chip_node_lock();
list_add_tail(&chip_node->node, &g_chip_list);
sss_chip_node_unlock();
adapter->chip_node = chip_node;
sdk_info(&adapter->pcidev->dev,
"Success to add new chip %s to global list\n", chip_node->chip_name);
return 0;
}
void sss_free_chip_node(struct sss_pci_adapter *adapter)
{
struct sss_card_node *chip_node = adapter->chip_node;
int id;
int ret;
sss_chip_node_lock();
if (list_empty(&chip_node->func_list)) {
list_del(&chip_node->node);
sdk_info(&adapter->pcidev->dev,
"Success to delete chip %s from global list\n",
chip_node->chip_name);
		ret = sscanf(chip_node->chip_name, SSS_CHIP_NAME "%d", &id);
		if (ret == 1)
			clear_bit(id, &g_index_bit_map);
		else
			sdk_err(&adapter->pcidev->dev, "Fail to get nic id\n");
kfree(chip_node);
}
sss_chip_node_unlock();
}
void sss_add_func_list(struct sss_pci_adapter *adapter)
{
sss_chip_node_lock();
list_add_tail(&adapter->node, &adapter->chip_node->func_list);
sss_chip_node_unlock();
}
void sss_del_func_list(struct sss_pci_adapter *adapter)
{
sss_chip_node_lock();
list_del(&adapter->node);
sss_chip_node_unlock();
}
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_ADAPTER_MGMT_H
#define SSS_ADAPTER_MGMT_H
#include <linux/types.h>
#include <linux/bitops.h>
#include "sss_adapter.h"
#define SSS_DRV_NAME "sssdk"
#define SSS_CHIP_NAME "sssnic"
#define SSS_VF_PCI_CFG_REG_BAR 0
#define SSS_PF_PCI_CFG_REG_BAR 1
#define SSS_PCI_INTR_REG_BAR 2
#define SSS_PCI_MGMT_REG_BAR 3 /* Only PF have mgmt bar */
#define SSS_PCI_DB_BAR 4
#define SSS_IS_VF_DEV(pdev) ((pdev)->device == SSS_DEV_ID_VF)
enum {
SSS_NO_PROBE = 1,
SSS_PROBE_START = 2,
SSS_PROBE_OK = 3,
SSS_IN_REMOVE = 4,
};
struct list_head *sss_get_chip_list(void);
int sss_alloc_chip_node(struct sss_pci_adapter *adapter);
void sss_free_chip_node(struct sss_pci_adapter *adapter);
void sss_pre_init(void);
struct sss_pci_adapter *sss_get_adapter_by_pcidev(struct pci_dev *pdev);
void sss_add_func_list(struct sss_pci_adapter *adapter);
void sss_del_func_list(struct sss_pci_adapter *adapter);
void sss_hold_chip_node(void);
void sss_put_chip_node(void);
void sss_set_adapter_probe_state(struct sss_pci_adapter *adapter, int state);
#endif
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#include <linux/kernel.h>
#include <linux/io-mapping.h>
#include <linux/delay.h>
#include "sss_kernel.h"
#include "sss_common.h"
#define SSS_MIN_SLEEP_TIME(us) ((us) - (us) / 10)
/* msleep is only accurate for sleeps of 20ms or more */
#define SSS_HANDLER_SLEEP(usleep_min, wait_once_us) \
do { \
if ((wait_once_us) >= 20 * USEC_PER_MSEC) \
msleep((wait_once_us) / USEC_PER_MSEC); \
else \
usleep_range((usleep_min), (wait_once_us)); \
} while (0)
int sss_dma_zalloc_coherent_align(void *dev_hdl, u64 size, u64 align,
unsigned int flag, struct sss_dma_addr_align *addr)
{
dma_addr_t pa;
dma_addr_t pa_align;
void *va = NULL;
void *va_align = NULL;
va = dma_zalloc_coherent(dev_hdl, size, &pa, flag);
if (!va)
return -ENOMEM;
pa_align = ALIGN(pa, align);
if (pa_align == pa) {
va_align = va;
goto same_addr_after_align;
}
dma_free_coherent(dev_hdl, size, va, pa);
va = dma_zalloc_coherent(dev_hdl, size + align, &pa, flag);
if (!va)
return -ENOMEM;
pa_align = ALIGN(pa, align);
	va_align = (void *)((uintptr_t)va + (pa_align - pa));
same_addr_after_align:
addr->origin_paddr = pa;
addr->align_paddr = pa_align;
addr->origin_vaddr = va;
addr->align_vaddr = va_align;
addr->real_size = (u32)size;
return 0;
}
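sss_dma_zalloc_coherent_align() above first tries a natural allocation and keeps it when the returned bus address is already aligned; otherwise it re-allocates with `align` bytes of slack and shifts the virtual address by the same offset it applies to the physical one. The offset arithmetic in isolation (a userspace sketch with plain integers standing in for the DMA/virtual addresses, not kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* Same rounding the kernel's ALIGN() macro performs (align is a power of 2). */
static uint64_t demo_align_up(uint64_t addr, uint64_t align)
{
	return (addr + align - 1) & ~(align - 1);
}

/* Given a raw (va, pa) pair, derive the aligned pair the driver stores. */
static void demo_align_pair(uint64_t va, uint64_t pa, uint64_t align,
			    uint64_t *va_out, uint64_t *pa_out)
{
	uint64_t pa_align = demo_align_up(pa, align);

	*pa_out = pa_align;
	*va_out = va + (pa_align - pa); /* apply the same offset to the VA */
}
```

Both the original and the aligned addresses are kept in `struct sss_dma_addr_align` because the free path must pass the original pair back to dma_free_coherent().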
void sss_dma_free_coherent_align(void *dev_hdl, struct sss_dma_addr_align *addr)
{
dma_free_coherent(dev_hdl, addr->real_size, addr->origin_vaddr, addr->origin_paddr);
}
int sss_check_handler_timeout(void *priv_data, sss_wait_handler_t handler,
u32 wait_total_ms, u32 wait_once_us)
{
enum sss_process_ret ret;
unsigned long end;
u32 usleep_min = SSS_MIN_SLEEP_TIME(wait_once_us);
if (!handler)
return -EINVAL;
end = jiffies + msecs_to_jiffies(wait_total_ms);
do {
ret = handler(priv_data);
if (ret == SSS_PROCESS_OK)
return 0;
else if (ret == SSS_PROCESS_ERR)
return -EIO;
SSS_HANDLER_SLEEP(usleep_min, wait_once_us);
} while (time_before(jiffies, end));
ret = handler(priv_data);
if (ret == SSS_PROCESS_OK)
return 0;
else if (ret == SSS_PROCESS_ERR)
return -EIO;
return -ETIMEDOUT;
}
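sss_check_handler_timeout() polls the handler until the deadline and then re-checks one final time, so a condition that becomes true just as `time_before()` fails is still reported as success rather than -ETIMEDOUT. The pattern, sketched with a try counter standing in for jiffies (all names here are illustrative):

```c
#include <assert.h>

enum demo_ret { DEMO_OK, DEMO_TIMEOUT };

/* Poll handler() up to max_tries times, then re-check once more,
 * mirroring the final handler() call after the time_before() loop. */
static enum demo_ret demo_wait(int (*handler)(void *), void *data, int max_tries)
{
	int i;

	for (i = 0; i < max_tries; i++) {
		if (handler(data))
			return DEMO_OK;
	}
	/* final re-check closes the race at the deadline boundary */
	return handler(data) ? DEMO_OK : DEMO_TIMEOUT;
}

/* Test handler: becomes ready on its third invocation. */
static int demo_ready_on_third_call(void *data)
{
	int *calls = data;

	return ++(*calls) >= 3;
}
```

With two in-loop tries the condition is still false when the loop exits, and only the final re-check observes it as true.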
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_COMMON_H
#define SSS_COMMON_H
#include <linux/types.h>
#include "sss_hw_common.h"
int sss_dma_zalloc_coherent_align(void *dev_hdl, u64 size, u64 align,
unsigned int flag, struct sss_dma_addr_align *mem_align);
void sss_dma_free_coherent_align(void *dev_hdl, struct sss_dma_addr_align *mem_align);
int sss_check_handler_timeout(void *priv_data, sss_wait_handler_t handler,
u32 wait_total_ms, u32 wait_once_us);
#endif
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#define pr_fmt(fmt) KBUILD_MODNAME ": [BASE]" fmt
#include <net/addrconf.h>
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/device.h>
#include <linux/module.h>
#include <linux/io-mapping.h>
#include <linux/interrupt.h>
#include <linux/inetdevice.h>
#include <linux/time.h>
#include <linux/timex.h>
#include <linux/rtc.h>
#include <linux/aer.h>
#include <linux/debugfs.h>
#include "sss_kernel.h"
#include "sss_version.h"
#include "sss_adapter_mgmt.h"
#include "sss_pci_id_tbl.h"
#include "sss_pci_sriov.h"
#include "sss_pci_probe.h"
#include "sss_pci_remove.h"
#include "sss_pci_shutdown.h"
#include "sss_pci_error.h"
#define SSS_DRV_VERSION SSS_VERSION_STR
#define SSS_DRV_DESC "Intelligent Network Interface Card Driver"
MODULE_AUTHOR("steven.song@3snic.com");
MODULE_DESCRIPTION("3SNIC Network Interface Card Driver");
MODULE_VERSION(SSS_DRV_VERSION);
MODULE_LICENSE("GPL");
static const struct pci_device_id g_pci_table[] = {
{PCI_VDEVICE(SSSNIC, SSS_DEV_ID_STANDARD), 0},
{PCI_VDEVICE(SSSNIC, SSS_DEV_ID_SPN120), 0},
{PCI_VDEVICE(SSSNIC, SSS_DEV_ID_VF), 0},
{0, 0}
};
MODULE_DEVICE_TABLE(pci, g_pci_table);
#ifdef HAVE_RHEL6_SRIOV_CONFIGURE
static struct pci_driver_rh g_pci_driver_rh = {
.sriov_configure = sss_pci_configure_sriov,
};
#endif
static struct pci_error_handlers g_pci_err_handler = {
.error_detected = sss_detect_pci_error,
};
static struct pci_driver g_pci_driver = {
.name = SSS_DRV_NAME,
.id_table = g_pci_table,
.probe = sss_pci_probe,
.remove = sss_pci_remove,
.shutdown = sss_pci_shutdown,
#if defined(HAVE_SRIOV_CONFIGURE)
.sriov_configure = sss_pci_configure_sriov,
#elif defined(HAVE_RHEL6_SRIOV_CONFIGURE)
.rh_reserved = &g_pci_driver_rh,
#endif
.err_handler = &g_pci_err_handler
};
static __init int sss_init_pci(void)
{
	pr_info("%s - version %s\n", SSS_DRV_DESC, SSS_DRV_VERSION);
	sss_pre_init();
	return pci_register_driver(&g_pci_driver);
}
static __exit void sss_exit_pci(void)
{
pci_unregister_driver(&g_pci_driver);
}
module_init(sss_init_pci);
module_exit(sss_exit_pci);
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#define pr_fmt(fmt) KBUILD_MODNAME ": [BASE]" fmt
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/msi.h>
#include <linux/types.h>
#include <linux/delay.h>
#include <linux/module.h>
#include <linux/semaphore.h>
#include <linux/interrupt.h>
#include "sss_kernel.h"
#include "sss_hw.h"
#include "sss_csr.h"
#include "sss_hwdev.h"
#include "sss_hwdev_api.h"
#include "sss_hwif_api.h"
int sss_chip_sync_time(void *hwdev, u64 mstime)
{
int ret;
struct sss_cmd_sync_time cmd_time = {0};
u16 out_len = sizeof(cmd_time);
cmd_time.mstime = mstime;
ret = sss_sync_send_msg(hwdev, SSS_COMM_MGMT_CMD_SYNC_TIME, &cmd_time,
sizeof(cmd_time), &cmd_time, &out_len);
if (SSS_ASSERT_SEND_MSG_RETURN(ret, out_len, &cmd_time)) {
sdk_err(SSS_TO_DEV(hwdev),
"Fail to sync time, ret: %d, status: 0x%x, out_len: 0x%x\n",
ret, cmd_time.head.state, out_len);
return -EIO;
}
return 0;
}
void sss_chip_disable_mgmt_channel(void *hwdev)
{
sss_chip_set_pf_status(SSS_TO_HWIF(hwdev), SSS_PF_STATUS_INIT);
}
int sss_chip_get_board_info(void *hwdev, struct sss_board_info *board_info)
{
int ret;
struct sss_cmd_board_info cmd_info = {0};
u16 out_len = sizeof(cmd_info);
ret = sss_sync_send_msg(hwdev, SSS_COMM_MGMT_CMD_GET_BOARD_INFO,
&cmd_info, sizeof(cmd_info), &cmd_info, &out_len);
if (SSS_ASSERT_SEND_MSG_RETURN(ret, out_len, &cmd_info)) {
sdk_err(SSS_TO_DEV(hwdev),
"Fail to get board info, ret: %d, status: 0x%x, out_len: 0x%x\n",
ret, cmd_info.head.state, out_len);
return -EIO;
}
memcpy(board_info, &cmd_info.info, sizeof(*board_info));
return 0;
}
int sss_chip_do_nego_feature(void *hwdev, u8 opcode, u64 *feature, u16 feature_num)
{
int ret;
struct sss_cmd_feature_nego cmd_feature = {0};
u16 out_len = sizeof(cmd_feature);
cmd_feature.func_id = sss_get_global_func_id(hwdev);
cmd_feature.opcode = opcode;
if (opcode == SSS_MGMT_MSG_SET_CMD)
memcpy(cmd_feature.feature, feature, (feature_num * sizeof(u64)));
ret = sss_sync_send_msg(hwdev, SSS_COMM_MGMT_CMD_FEATURE_NEGO,
&cmd_feature, sizeof(cmd_feature), &cmd_feature, &out_len);
if (SSS_ASSERT_SEND_MSG_RETURN(ret, out_len, &cmd_feature)) {
sdk_err(SSS_TO_DEV(hwdev),
"Fail to nego feature, opcode: %d, ret: %d, status: 0x%x, out_len: 0x%x\n",
opcode, ret, cmd_feature.head.state, out_len);
return -EINVAL;
}
if (opcode == SSS_MGMT_MSG_GET_CMD)
memcpy(feature, cmd_feature.feature, (feature_num * sizeof(u64)));
return 0;
}
int sss_chip_set_pci_bdf_num(void *hwdev, u8 bus_id, u8 device_id, u8 func_id)
{
int ret;
struct sss_cmd_bdf_info cmd_bdf = {0};
u16 out_len = sizeof(cmd_bdf);
cmd_bdf.bus = bus_id;
cmd_bdf.device = device_id;
cmd_bdf.function = func_id;
cmd_bdf.function_id = sss_get_global_func_id(hwdev);
ret = sss_sync_send_msg(hwdev, SSS_COMM_MGMT_CMD_SEND_BDF_INFO,
&cmd_bdf, sizeof(cmd_bdf), &cmd_bdf, &out_len);
if (SSS_ASSERT_SEND_MSG_RETURN(ret, out_len, &cmd_bdf)) {
sdk_err(SSS_TO_DEV(hwdev),
"Fail to set bdf info, ret: %d, status: 0x%x, out_len: 0x%x\n",
ret, cmd_bdf.head.state, out_len);
return -EIO;
}
return 0;
}
int sss_chip_comm_channel_detect(struct sss_hwdev *hwdev)
{
int ret;
struct sss_cmd_channel_detect cmd_detect = {0};
u16 out_len = sizeof(cmd_detect);
if (!hwdev)
return -EINVAL;
cmd_detect.func_id = sss_get_global_func_id(hwdev);
ret = sss_sync_send_msg(hwdev, SSS_COMM_MGMT_CMD_CHANNEL_DETECT,
&cmd_detect, sizeof(cmd_detect), &cmd_detect, &out_len);
if (SSS_ASSERT_SEND_MSG_RETURN(ret, out_len, &cmd_detect)) {
sdk_err(hwdev->dev_hdl,
"Fail to send channel detect, ret: %d, status: 0x%x, out_size: 0x%x\n",
ret, cmd_detect.head.state, out_len);
return -EINVAL;
}
return 0;
}
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_HWDEV_API_H
#define SSS_HWDEV_API_H
#include <linux/types.h>
#include "sss_hw_mbx_msg.h"
#include "sss_hwdev.h"
int sss_chip_sync_time(void *hwdev, u64 mstime);
int sss_chip_get_board_info(void *hwdev, struct sss_board_info *board_info);
void sss_chip_disable_mgmt_channel(void *hwdev);
int sss_chip_do_nego_feature(void *hwdev, u8 opcode, u64 *feature, u16 feature_num);
int sss_chip_set_pci_bdf_num(void *hwdev, u8 bus_id, u8 device_id, u8 func_id);
int sss_chip_comm_channel_detect(struct sss_hwdev *hwdev);
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_HWDEV_CAP_H
#define SSS_HWDEV_CAP_H
#include "sss_hwdev.h"
int sss_init_capability(struct sss_hwdev *dev);
void sss_deinit_capability(struct sss_hwdev *dev);
#endif
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#define pr_fmt(fmt) KBUILD_MODNAME ": [BASE]" fmt
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/module.h>
#include "sss_kernel.h"
#include "sss_hw.h"
#include "sss_hwdev.h"
#include "sss_csr.h"
#include "sss_hwif_api.h"
#define SSS_DEFAULT_RX_BUF_SIZE_LEVEL ((u16)0xB)
enum sss_rx_buf_size {
SSS_RX_BUF_SIZE_32B = 0x20,
SSS_RX_BUF_SIZE_64B = 0x40,
SSS_RX_BUF_SIZE_96B = 0x60,
SSS_RX_BUF_SIZE_128B = 0x80,
SSS_RX_BUF_SIZE_192B = 0xC0,
SSS_RX_BUF_SIZE_256B = 0x100,
SSS_RX_BUF_SIZE_384B = 0x180,
SSS_RX_BUF_SIZE_512B = 0x200,
SSS_RX_BUF_SIZE_768B = 0x300,
SSS_RX_BUF_SIZE_1K = 0x400,
SSS_RX_BUF_SIZE_1_5K = 0x600,
SSS_RX_BUF_SIZE_2K = 0x800,
SSS_RX_BUF_SIZE_3K = 0xC00,
SSS_RX_BUF_SIZE_4K = 0x1000,
SSS_RX_BUF_SIZE_8K = 0x2000,
SSS_RX_BUF_SIZE_16K = 0x4000,
};
const int sss_rx_buf_size_level[] = {
SSS_RX_BUF_SIZE_32B,
SSS_RX_BUF_SIZE_64B,
SSS_RX_BUF_SIZE_96B,
SSS_RX_BUF_SIZE_128B,
SSS_RX_BUF_SIZE_192B,
SSS_RX_BUF_SIZE_256B,
SSS_RX_BUF_SIZE_384B,
SSS_RX_BUF_SIZE_512B,
SSS_RX_BUF_SIZE_768B,
SSS_RX_BUF_SIZE_1K,
SSS_RX_BUF_SIZE_1_5K,
SSS_RX_BUF_SIZE_2K,
SSS_RX_BUF_SIZE_3K,
SSS_RX_BUF_SIZE_4K,
SSS_RX_BUF_SIZE_8K,
SSS_RX_BUF_SIZE_16K,
};
static u16 sss_get_rx_buf_size_level(int buf_size)
{
u16 i;
u16 cnt = ARRAY_LEN(sss_rx_buf_size_level);
for (i = 0; i < cnt; i++) {
if (sss_rx_buf_size_level[i] == buf_size)
return i;
}
return SSS_DEFAULT_RX_BUF_SIZE_LEVEL; /* default 2K */
}
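sss_get_rx_buf_size_level() turns a buffer size in bytes into its index in the table above; that index is the "level" programmed into the chip, and unknown sizes fall back to index 0xB (the 2 KB entry). The same lookup in miniature (table values copied from the enum above):

```c
#include <assert.h>
#include <stddef.h>

#define DEMO_DEFAULT_LEVEL 0xBU /* index of the 2 KB entry */

static const int demo_levels[] = {
	0x20, 0x40, 0x60, 0x80, 0xC0, 0x100, 0x180, 0x200,
	0x300, 0x400, 0x600, 0x800, 0xC00, 0x1000, 0x2000, 0x4000,
};

static unsigned int demo_size_to_level(int buf_size)
{
	size_t i;

	for (i = 0; i < sizeof(demo_levels) / sizeof(demo_levels[0]); i++) {
		if (demo_levels[i] == buf_size)
			return (unsigned int)i;
	}
	return DEMO_DEFAULT_LEVEL; /* unknown size: fall back to 2 KB */
}
```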
static int sss_chip_get_interrupt_cfg(void *hwdev,
struct sss_irq_cfg *intr_cfg, u16 channel)
{
int ret;
struct sss_cmd_msix_config cmd_msix = {0};
u16 out_len = sizeof(cmd_msix);
cmd_msix.opcode = SSS_MGMT_MSG_GET_CMD;
cmd_msix.func_id = sss_get_global_func_id(hwdev);
cmd_msix.msix_index = intr_cfg->msix_id;
ret = sss_sync_send_msg_ch(hwdev, SSS_COMM_MGMT_CMD_CFG_MSIX_CTRL_REG,
&cmd_msix, sizeof(cmd_msix), &cmd_msix, &out_len, channel);
if (SSS_ASSERT_SEND_MSG_RETURN(ret, out_len, &cmd_msix)) {
sdk_err(SSS_TO_DEV(hwdev),
"Fail to get intr config, ret: %d, status: 0x%x, out_len: 0x%x, channel: 0x%x\n",
ret, cmd_msix.head.state, out_len, channel);
return -EINVAL;
}
intr_cfg->lli_credit = cmd_msix.lli_credit_cnt;
intr_cfg->lli_timer = cmd_msix.lli_timer_cnt;
intr_cfg->pending = cmd_msix.pending_cnt;
intr_cfg->coalesc_timer = cmd_msix.coalesce_timer_cnt;
intr_cfg->resend_timer = cmd_msix.resend_timer_cnt;
return 0;
}
int sss_chip_set_msix_attr(void *hwdev,
struct sss_irq_cfg intr_cfg, u16 channel)
{
int ret;
struct sss_irq_cfg temp_cfg = {0};
if (!hwdev)
return -EINVAL;
temp_cfg.msix_id = intr_cfg.msix_id;
ret = sss_chip_get_interrupt_cfg(hwdev, &temp_cfg, channel);
if (ret != 0)
return -EINVAL;
if (intr_cfg.lli_set == 0) {
intr_cfg.lli_credit = temp_cfg.lli_credit;
intr_cfg.lli_timer = temp_cfg.lli_timer;
}
if (intr_cfg.coalesc_intr_set == 0) {
intr_cfg.pending = temp_cfg.pending;
intr_cfg.coalesc_timer = temp_cfg.coalesc_timer;
intr_cfg.resend_timer = temp_cfg.resend_timer;
}
return sss_chip_set_eq_msix_attr(hwdev, &intr_cfg, channel);
}
EXPORT_SYMBOL(sss_chip_set_msix_attr);
void sss_chip_clear_msix_resend_bit(void *hwdev, u16 msix_id, bool clear_en)
{
u32 val;
if (!hwdev)
return;
val = SSS_SET_MSI_CLR_INDIR(msix_id, SIMPLE_INDIR_ID) |
SSS_SET_MSI_CLR_INDIR(!!clear_en, RESEND_TIMER_CLR);
sss_chip_write_reg(SSS_TO_HWIF(hwdev), SSS_CSR_FUNC_MSI_CLR_WR_ADDR, val);
}
EXPORT_SYMBOL(sss_chip_clear_msix_resend_bit);
int sss_chip_reset_function(void *hwdev, u16 func_id, u64 flag, u16 channel)
{
int ret = 0;
struct sss_cmd_func_reset cmd_reset = {0};
u16 out_len = sizeof(cmd_reset);
if (!hwdev)
return -EINVAL;
cmd_reset.func_id = func_id;
cmd_reset.reset_flag = flag;
sdk_info(SSS_TO_DEV(hwdev), "Func reset, flag: 0x%llx, channel:0x%x\n", flag, channel);
ret = sss_sync_send_msg_ch(hwdev, SSS_COMM_MGMT_CMD_FUNC_RESET,
&cmd_reset, sizeof(cmd_reset), &cmd_reset, &out_len, channel);
if (SSS_ASSERT_SEND_MSG_RETURN(ret, out_len, &cmd_reset)) {
sdk_err(SSS_TO_DEV(hwdev),
"Fail to reset func, flag 0x%llx, ret: %d, status: 0x%x, out_len: 0x%x\n",
flag, ret, cmd_reset.head.state, out_len);
return -EIO;
}
return 0;
}
EXPORT_SYMBOL(sss_chip_reset_function);
int sss_chip_set_root_ctx(void *hwdev,
u32 rq_depth, u32 sq_depth, int rx_size, u16 channel)
{
int ret;
struct sss_cmd_root_ctxt cmd_root = {0};
u16 out_len = sizeof(cmd_root);
if (!hwdev)
return -EINVAL;
cmd_root.func_id = sss_get_global_func_id(hwdev);
if (rq_depth != 0 || sq_depth != 0 || rx_size != 0) {
cmd_root.rx_buf_sz = sss_get_rx_buf_size_level(rx_size);
cmd_root.rq_depth = (u16)ilog2(rq_depth);
cmd_root.sq_depth = (u16)ilog2(sq_depth);
cmd_root.lro_en = 1;
}
ret = sss_sync_send_msg_ch(hwdev, SSS_COMM_MGMT_CMD_SET_VAT,
&cmd_root, sizeof(cmd_root), &cmd_root, &out_len, channel);
if (SSS_ASSERT_SEND_MSG_RETURN(ret, out_len, &cmd_root)) {
sdk_err(SSS_TO_DEV(hwdev),
"Fail to set root ctx, ret: %d, status: 0x%x, out_len: 0x%x, channel: 0x%x\n",
ret, cmd_root.head.state, out_len, channel);
return -EFAULT;
}
return 0;
}
EXPORT_SYMBOL(sss_chip_set_root_ctx);
int sss_chip_clean_root_ctx(void *hwdev, u16 channel)
{
return sss_chip_set_root_ctx(hwdev, 0, 0, 0, channel);
}
EXPORT_SYMBOL(sss_chip_clean_root_ctx);
static int sss_get_fw_ver(struct sss_hwdev *hwdev,
enum sss_fw_ver_type fw_type, u8 *buf, u8 buf_size, u16 channel)
{
int ret;
struct sss_cmd_get_fw_version cmd_version = {0};
u16 out_len = sizeof(cmd_version);
if (!hwdev || !buf)
return -EINVAL;
cmd_version.fw_type = fw_type;
ret = sss_sync_send_msg_ch(hwdev, SSS_COMM_MGMT_CMD_GET_FW_VERSION,
&cmd_version, sizeof(cmd_version), &cmd_version,
&out_len, channel);
if (SSS_ASSERT_SEND_MSG_RETURN(ret, out_len, &cmd_version)) {
sdk_err(hwdev->dev_hdl,
"Fail to get fw version, ret: %d, status: 0x%x, out_len: 0x%x, channel: 0x%x\n",
ret, cmd_version.head.state, out_len, channel);
return -EIO;
}
ret = snprintf(buf, buf_size, "%s", cmd_version.ver);
if (ret < 0)
return -EINVAL;
return 0;
}
int sss_get_mgmt_version(void *hwdev, u8 *buf, u8 buf_size, u16 channel)
{
return sss_get_fw_ver(hwdev, SSS_FW_VER_TYPE_MPU, buf,
buf_size, channel);
}
EXPORT_SYMBOL(sss_get_mgmt_version);
int sss_chip_set_func_used_state(void *hwdev,
u16 service_type, bool state, u16 channel)
{
int ret;
struct sss_cmd_func_svc_used_state cmd_state = {0};
u16 out_len = sizeof(cmd_state);
if (!hwdev)
return -EINVAL;
cmd_state.func_id = sss_get_global_func_id(hwdev);
cmd_state.svc_type = service_type;
cmd_state.used_state = !!state;
ret = sss_sync_send_msg_ch(hwdev,
SSS_COMM_MGMT_CMD_SET_FUNC_SVC_USED_STATE,
&cmd_state, sizeof(cmd_state), &cmd_state, &out_len, channel);
if (SSS_ASSERT_SEND_MSG_RETURN(ret, out_len, &cmd_state)) {
sdk_err(SSS_TO_DEV(hwdev),
"Fail to set func used state, ret: %d, status: 0x%x, out_len: 0x%x, channel: 0x%x\n",
ret, cmd_state.head.state, out_len, channel);
return -EIO;
}
return 0;
}
EXPORT_SYMBOL(sss_chip_set_func_used_state);
bool sss_get_nic_capability(void *hwdev, struct sss_nic_service_cap *capability)
{
struct sss_hwdev *dev = hwdev;
if (!capability || !hwdev)
return false;
if (SSS_IS_NIC_TYPE(dev)) {
memcpy(capability, SSS_TO_NIC_CAP(hwdev), sizeof(*capability));
return true;
} else {
return false;
}
}
EXPORT_SYMBOL(sss_get_nic_capability);
bool sss_support_nic(void *hwdev)
{
return (hwdev && SSS_IS_NIC_TYPE((struct sss_hwdev *)hwdev));
}
EXPORT_SYMBOL(sss_support_nic);
u16 sss_get_max_sq_num(void *hwdev)
{
if (!hwdev) {
pr_err("Get max sq num: hwdev is NULL\n");
return 0;
}
return SSS_TO_MAX_SQ_NUM(hwdev);
}
EXPORT_SYMBOL(sss_get_max_sq_num);
u8 sss_get_phy_port_id(void *hwdev)
{
if (!hwdev) {
pr_err("Get phy port id: hwdev is NULL\n");
return 0;
}
return SSS_TO_PHY_PORT_ID(hwdev);
}
EXPORT_SYMBOL(sss_get_phy_port_id);
u16 sss_get_max_vf_num(void *hwdev)
{
if (!hwdev) {
pr_err("Get max vf num: hwdev is NULL\n");
return 0;
}
return SSS_TO_MAX_VF_NUM(hwdev);
}
EXPORT_SYMBOL(sss_get_max_vf_num);
int sss_get_cos_valid_bitmap(void *hwdev, u8 *func_cos_bitmap, u8 *port_cos_bitmap)
{
if (!hwdev) {
pr_err("Get cos valid bitmap: hwdev is NULL\n");
return -EINVAL;
}
*func_cos_bitmap = SSS_TO_FUNC_COS_BITMAP(hwdev);
*port_cos_bitmap = SSS_TO_PORT_COS_BITMAP(hwdev);
return 0;
}
EXPORT_SYMBOL(sss_get_cos_valid_bitmap);
u16 sss_alloc_irq(void *hwdev, enum sss_service_type service_type,
struct sss_irq_desc *alloc_array, u16 alloc_num)
{
int i;
int j;
u16 need_num = alloc_num;
u16 act_num = 0;
struct sss_irq_info *irq_info = NULL;
struct sss_irq *irq = NULL;
if (!hwdev || !alloc_array)
return 0;
irq_info = SSS_TO_IRQ_INFO(hwdev);
irq = irq_info->irq;
mutex_lock(&irq_info->irq_mutex);
if (irq_info->free_num == 0) {
sdk_err(SSS_TO_DEV(hwdev), "Fail to alloc irq, free_num is zero\n");
mutex_unlock(&irq_info->irq_mutex);
return 0;
}
if (alloc_num > irq_info->free_num) {
sdk_warn(SSS_TO_DEV(hwdev), "Adjust need_num to %u\n", irq_info->free_num);
need_num = irq_info->free_num;
}
for (i = 0; i < need_num; i++) {
for (j = 0; j < irq_info->total_num; j++) {
if (irq[j].busy != SSS_CFG_FREE)
continue;
if (irq_info->free_num == 0) {
sdk_err(SSS_TO_DEV(hwdev), "Fail to alloc irq, free_num is zero\n");
mutex_unlock(&irq_info->irq_mutex);
memset(alloc_array, 0, sizeof(*alloc_array) * alloc_num);
return 0;
}
irq[j].type = service_type;
irq[j].busy = SSS_CFG_BUSY;
alloc_array[i].irq_id = irq[j].desc.irq_id;
alloc_array[i].msix_id = irq[j].desc.msix_id;
irq_info->free_num--;
act_num++;
break;
}
}
mutex_unlock(&irq_info->irq_mutex);
return act_num;
}
EXPORT_SYMBOL(sss_alloc_irq);
void sss_free_irq(void *hwdev, enum sss_service_type service_type, u32 irq_id)
{
int i;
struct sss_irq_info *irq_info = NULL;
struct sss_irq *irq = NULL;
if (!hwdev)
return;
irq_info = SSS_TO_IRQ_INFO(hwdev);
irq = irq_info->irq;
mutex_lock(&irq_info->irq_mutex);
for (i = 0; i < irq_info->total_num; i++) {
if (irq_id != irq[i].desc.irq_id ||
service_type != irq[i].type)
continue;
if (irq[i].busy == SSS_CFG_FREE)
continue;
irq[i].busy = SSS_CFG_FREE;
irq_info->free_num++;
if (irq_info->free_num > irq_info->total_num) {
sdk_err(SSS_TO_DEV(hwdev), "Free_num out of range :[0, %u]\n",
irq_info->total_num);
mutex_unlock(&irq_info->irq_mutex);
return;
}
break;
}
if (i >= irq_info->total_num)
sdk_warn(SSS_TO_DEV(hwdev), "Irq %u does not need to be freed\n", irq_id);
mutex_unlock(&irq_info->irq_mutex);
}
EXPORT_SYMBOL(sss_free_irq);
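The alloc/free pair above manages a fixed pool of interrupt descriptors with a per-entry busy flag and a free counter, all under one mutex. A minimal userspace sketch of the same bookkeeping (types and names hypothetical, not part of the driver; locking omitted for brevity):

```c
#include <stddef.h>

#define POOL_SIZE 4
enum { CFG_FREE = 0, CFG_BUSY = 1 };

struct irq_slot {
	unsigned int irq_id;
	int busy;
};

struct irq_pool {
	struct irq_slot slot[POOL_SIZE];
	unsigned int free_num;
};

/* Claim up to want slots; returns how many were actually claimed. */
static unsigned int pool_alloc(struct irq_pool *p, unsigned int *out,
			       unsigned int want)
{
	unsigned int got = 0;

	if (want > p->free_num)
		want = p->free_num; /* mirrors the driver's need_num clamp */
	for (unsigned int j = 0; j < POOL_SIZE && got < want; j++) {
		if (p->slot[j].busy != CFG_FREE)
			continue;
		p->slot[j].busy = CFG_BUSY;
		out[got++] = p->slot[j].irq_id;
		p->free_num--;
	}
	return got;
}

/* Release one slot by id; a double free is silently ignored, as in the driver. */
static void pool_free(struct irq_pool *p, unsigned int irq_id)
{
	for (unsigned int i = 0; i < POOL_SIZE; i++) {
		if (p->slot[i].irq_id != irq_id || p->slot[i].busy == CFG_FREE)
			continue;
		p->slot[i].busy = CFG_FREE;
		p->free_num++;
		break;
	}
}
```

The clamp-then-scan structure is why `sss_alloc_irq()` can return fewer descriptors than requested and why callers must check the returned count.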
void sss_register_dev_event(void *hwdev, void *data, sss_event_handler_t callback)
{
struct sss_hwdev *dev = hwdev;
if (!hwdev) {
pr_err("Register event: hwdev is NULL\n");
return;
}
dev->event_handler = callback;
dev->event_handler_data = data;
}
EXPORT_SYMBOL(sss_register_dev_event);
void sss_unregister_dev_event(void *hwdev)
{
struct sss_hwdev *dev = hwdev;
if (!hwdev) {
pr_err("Unregister event: hwdev is NULL\n");
return;
}
dev->event_handler = NULL;
dev->event_handler_data = NULL;
}
EXPORT_SYMBOL(sss_unregister_dev_event);
int sss_get_dev_present_flag(const void *hwdev)
{
return hwdev && !!((struct sss_hwdev *)hwdev)->chip_present_flag;
}
EXPORT_SYMBOL(sss_get_dev_present_flag);
u8 sss_get_max_pf_num(void *hwdev)
{
if (!hwdev)
return 0;
return SSS_MAX_PF_NUM((struct sss_hwdev *)hwdev);
}
EXPORT_SYMBOL(sss_get_max_pf_num);
int sss_get_chip_present_state(void *hwdev, bool *present_state)
{
if (!hwdev || !present_state)
return -EINVAL;
*present_state = sss_chip_get_present_state(hwdev);
return 0;
}
EXPORT_SYMBOL(sss_get_chip_present_state);
void sss_fault_event_report(void *hwdev, u16 src, u16 level)
{
if (!hwdev)
return;
sdk_info(SSS_TO_DEV(hwdev),
"Fault event report, src: %u, level: %u\n", src, level);
}
EXPORT_SYMBOL(sss_fault_event_report);
int sss_register_service_adapter(void *hwdev, enum sss_service_type service_type,
void *service_adapter)
{
struct sss_hwdev *dev = hwdev;
if (!hwdev || !service_adapter || service_type >= SSS_SERVICE_TYPE_MAX)
return -EINVAL;
if (dev->service_adapter[service_type])
return -EINVAL;
dev->service_adapter[service_type] = service_adapter;
return 0;
}
EXPORT_SYMBOL(sss_register_service_adapter);
void sss_unregister_service_adapter(void *hwdev,
enum sss_service_type service_type)
{
struct sss_hwdev *dev = hwdev;
if (!hwdev || service_type >= SSS_SERVICE_TYPE_MAX)
return;
dev->service_adapter[service_type] = NULL;
}
EXPORT_SYMBOL(sss_unregister_service_adapter);
void *sss_get_service_adapter(void *hwdev, enum sss_service_type service_type)
{
struct sss_hwdev *dev = hwdev;
if (!hwdev || service_type >= SSS_SERVICE_TYPE_MAX)
return NULL;
return dev->service_adapter[service_type];
}
EXPORT_SYMBOL(sss_get_service_adapter);
void sss_do_event_callback(void *hwdev, struct sss_event_info *event)
{
struct sss_hwdev *dev = hwdev;
if (!hwdev) {
pr_err("Event callback: hwdev is NULL\n");
return;
}
if (!dev->event_handler) {
sdk_info(dev->dev_hdl, "Event callback: handler is NULL\n");
return;
}
dev->event_handler(dev->event_handler_data, event);
}
EXPORT_SYMBOL(sss_do_event_callback);
void sss_update_link_stats(void *hwdev, bool link_state)
{
struct sss_hwdev *dev = hwdev;
if (!hwdev)
return;
if (link_state)
atomic_inc(&dev->hw_stats.sss_link_event_stats.link_up_stats);
else
atomic_inc(&dev->hw_stats.sss_link_event_stats.link_down_stats);
}
EXPORT_SYMBOL(sss_update_link_stats);
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#define pr_fmt(fmt) KBUILD_MODNAME ": [BASE]" fmt
#include <linux/types.h>
#include <linux/vmalloc.h>
#include <linux/mutex.h>
#include <linux/module.h>
#include <linux/timer.h>
#include <linux/workqueue.h>
#include "sss_kernel.h"
#include "sss_hw.h"
#include "sss_hwdev.h"
#include "sss_adapter.h"
#include "sss_hwdev_api.h"
#include "sss_hwdev_mgmt_info.h"
#include "sss_hwdev_mgmt_channel.h"
#include "sss_hwdev_cap.h"
#include "sss_hwdev_link.h"
#include "sss_hwdev_io_flush.h"
#include "sss_hwif_init.h"
#include "sss_hwif_api.h"
#include "sss_hwif_export.h"
#include "sss_hwif_mgmt_init.h"
enum sss_host_mode {
SSS_HOST_MODE_NORMAL = 0,
SSS_HOST_MODE_VM,
SSS_HOST_MODE_BM,
SSS_HOST_MODE_MAX,
};
#define SSS_HWDEV_WQ_NAME "sssnic_hardware"
#define SSS_WQ_MAX_REQ 10
#define SSS_DETECT_PCIE_LINK_DOWN_RETRY 2
#define SSS_CHN_BUSY_TIMEOUT 25
#define SSS_HEARTBEAT_TIMER_EXPIRES 5000
#define SSS_HEARTBEAT_PERIOD 1000
#define SSS_GET_PCIE_LINK_STATUS(hwdev) \
((hwdev)->heartbeat.pcie_link_down ? \
SSS_EVENT_PCIE_LINK_DOWN : SSS_EVENT_HEART_LOST)
#define SSS_SET_FUNC_HOST_MODE(hwdev, mode) \
do { \
if ((mode) >= SSS_FUNC_MOD_MIN && (mode) <= SSS_FUNC_MOD_MAX) { \
(hwdev)->func_mode = (mode); \
} else \
(hwdev)->func_mode = SSS_FUNC_MOD_NORMAL_HOST; \
} while (0)
#define SSS_SYNFW_TIME_PERIOD (60 * 60 * 1000)
#define SSS_CHANNEL_DETECT_PERIOD (5 * 1000)
#define SSS_COMM_SUPPORT_CHANNEL_DETECT(hwdev) \
((hwdev)->features[0] & SSS_COMM_F_CHANNEL_DETECT)
typedef void (*sss_set_mode_handler_t)(struct sss_hwdev *hwdev);
static struct sss_hwdev *sss_alloc_hwdev(void)
{
struct sss_hwdev *hwdev;
hwdev = kzalloc(sizeof(*hwdev), GFP_KERNEL);
if (!hwdev)
return NULL;
hwdev->chip_fault_stats = vzalloc(SSS_CHIP_FAULT_SIZE);
if (!hwdev->chip_fault_stats) {
kfree(hwdev);
return NULL;
}
return hwdev;
}
static void sss_free_hwdev(struct sss_hwdev *hwdev)
{
vfree(hwdev->chip_fault_stats);
kfree(hwdev);
}
static void sss_init_hwdev_param(struct sss_hwdev *hwdev,
struct sss_pci_adapter *adapter)
{
struct pci_dev *pdev = adapter->pcidev;
hwdev->adapter_hdl = adapter;
hwdev->pcidev_hdl = pdev;
hwdev->dev_hdl = &pdev->dev;
hwdev->chip_node = adapter->chip_node;
spin_lock_init(&hwdev->channel_lock);
}
static void sss_set_chip_present_flag(struct sss_hwdev *hwdev, bool present)
{
hwdev->chip_present_flag = !!present;
}
static bool sss_is_chip_abnormal(struct sss_hwdev *hwdev)
{
u32 pcie_status;
if (!sss_get_dev_present_flag(hwdev))
return false;
pcie_status = sss_chip_get_pcie_link_status(hwdev);
if (pcie_status == SSS_PCIE_LINK_DOWN) {
hwdev->heartbeat.pcie_link_down_cnt++;
sdk_warn(hwdev->dev_hdl, "Pcie link down\n");
if (hwdev->heartbeat.pcie_link_down_cnt >= SSS_DETECT_PCIE_LINK_DOWN_RETRY) {
sss_set_chip_present_flag(hwdev, false);
sss_force_complete_all(hwdev);
hwdev->heartbeat.pcie_link_down = true;
return true;
}
return false;
}
if (pcie_status != SSS_PCIE_LINK_UP) {
hwdev->heartbeat.heartbeat_lost = true;
return true;
}
hwdev->heartbeat.pcie_link_down_cnt = 0;
return false;
}
static void sss_update_aeq_stat(struct sss_hwdev *hwdev)
{
if (hwdev->aeq_stat.last_recv_cnt != hwdev->aeq_stat.cur_recv_cnt) {
hwdev->aeq_stat.last_recv_cnt = hwdev->aeq_stat.cur_recv_cnt;
hwdev->aeq_stat.busy_cnt = 0;
} else {
hwdev->aeq_stat.busy_cnt++;
}
}
static void sss_update_channel_status(struct sss_hwdev *hwdev)
{
struct sss_card_node *node = hwdev->chip_node;
if (!node)
return;
if (sss_get_func_type(hwdev) != SSS_FUNC_TYPE_PPF ||
!SSS_COMM_SUPPORT_CHANNEL_DETECT(hwdev) ||
atomic_read(&node->channel_timeout_cnt))
return;
if (test_bit(SSS_HW_MBX_INIT_OK, &hwdev->func_state)) {
sss_update_aeq_stat(hwdev);
if (hwdev->aeq_stat.busy_cnt > SSS_CHN_BUSY_TIMEOUT) {
sdk_err(hwdev->dev_hdl, "Detect channel busy\n");
atomic_inc(&node->channel_timeout_cnt);
}
}
}
static void sss_heartbeat_timer_handler(struct timer_list *t)
{
struct sss_hwdev *hwdev = from_timer(hwdev, t, heartbeat.heartbeat_timer);
if (sss_is_chip_abnormal(hwdev)) {
queue_work(hwdev->workq, &hwdev->heartbeat.lost_work);
} else {
mod_timer(&hwdev->heartbeat.heartbeat_timer,
jiffies + msecs_to_jiffies(SSS_HEARTBEAT_PERIOD));
}
sss_update_channel_status(hwdev);
}
static void sss_heartbeat_lost_handler(struct work_struct *work)
{
u16 fault_level;
u16 pcie_src;
struct sss_event_info event_info = {0};
struct sss_hwdev *hwdev = container_of(work, struct sss_hwdev,
heartbeat.lost_work);
atomic_inc(&hwdev->hw_stats.heart_lost_stats);
if (hwdev->event_handler) {
event_info.type = SSS_GET_PCIE_LINK_STATUS(hwdev);
event_info.service = SSS_EVENT_SRV_COMM;
hwdev->event_handler(hwdev->event_handler_data, &event_info);
}
if (hwdev->heartbeat.pcie_link_down) {
sdk_err(hwdev->dev_hdl, "Detect pcie is link down\n");
fault_level = SSS_FAULT_LEVEL_HOST;
pcie_src = SSS_FAULT_SRC_PCIE_LINK_DOWN;
} else {
sdk_err(hwdev->dev_hdl, "Heart lost report received, func_id: %d\n",
sss_get_global_func_id(hwdev));
fault_level = SSS_FAULT_LEVEL_FATAL;
pcie_src = SSS_FAULT_SRC_HOST_HEARTBEAT_LOST;
}
sss_fault_event_report(hwdev, pcie_src, fault_level);
sss_dump_chip_err_info(hwdev);
}
static void sss_create_heartbeat_timer(struct sss_hwdev *hwdev)
{
/* Init the lost_work before arming the timer: the handler may queue it at once */
INIT_WORK(&hwdev->heartbeat.lost_work, sss_heartbeat_lost_handler);
timer_setup(&hwdev->heartbeat.heartbeat_timer, sss_heartbeat_timer_handler, 0);
hwdev->heartbeat.heartbeat_timer.expires =
jiffies + msecs_to_jiffies(SSS_HEARTBEAT_TIMER_EXPIRES);
add_timer(&hwdev->heartbeat.heartbeat_timer);
}
static void sss_destroy_heartbeat_timer(struct sss_hwdev *hwdev)
{
destroy_work(&hwdev->heartbeat.lost_work);
del_timer_sync(&hwdev->heartbeat.heartbeat_timer);
}
static void sss_set_bm_host_mode(struct sss_hwdev *hwdev)
{
struct sss_service_cap *svc_cap = &hwdev->mgmt_info->svc_cap;
u8 host_id = SSS_GET_HWIF_PCI_INTF_ID(hwdev->hwif);
if (host_id == svc_cap->master_host_id)
SSS_SET_FUNC_HOST_MODE(hwdev, SSS_FUNC_MOD_MULTI_BM_MASTER);
else
SSS_SET_FUNC_HOST_MODE(hwdev, SSS_FUNC_MOD_MULTI_BM_SLAVE);
}
static void sss_set_vm_host_mode(struct sss_hwdev *hwdev)
{
struct sss_service_cap *svc_cap = &hwdev->mgmt_info->svc_cap;
u8 host_id = SSS_GET_HWIF_PCI_INTF_ID(hwdev->hwif);
if (host_id == svc_cap->master_host_id)
SSS_SET_FUNC_HOST_MODE(hwdev, SSS_FUNC_MOD_MULTI_VM_MASTER);
else
SSS_SET_FUNC_HOST_MODE(hwdev, SSS_FUNC_MOD_MULTI_VM_SLAVE);
}
static void sss_set_normal_host_mode(struct sss_hwdev *hwdev)
{
SSS_SET_FUNC_HOST_MODE(hwdev, SSS_FUNC_MOD_NORMAL_HOST);
}
static int sss_enable_multi_host(struct sss_hwdev *hwdev)
{
if (!SSS_IS_PPF(hwdev) || !SSS_IS_MULTI_HOST(hwdev))
return 0;
if (SSS_IS_SLAVE_HOST(hwdev))
sss_chip_set_slave_host_status(hwdev, sss_get_pcie_itf_id(hwdev), true);
return 0;
}
static int sss_disable_multi_host(struct sss_hwdev *hwdev)
{
if (!SSS_IS_PPF(hwdev) || !SSS_IS_MULTI_HOST(hwdev))
return 0;
if (SSS_IS_SLAVE_HOST(hwdev))
sss_chip_set_slave_host_status(hwdev, sss_get_pcie_itf_id(hwdev), false);
return 0;
}
static int sss_init_host_mode(struct sss_hwdev *hwdev)
{
int ret;
struct sss_service_cap *svc_cap = &hwdev->mgmt_info->svc_cap;
sss_set_mode_handler_t handler[SSS_HOST_MODE_MAX] = {
sss_set_normal_host_mode,
sss_set_vm_host_mode,
sss_set_bm_host_mode
};
if (SSS_GET_FUNC_TYPE(hwdev) == SSS_FUNC_TYPE_VF) {
SSS_SET_FUNC_HOST_MODE(hwdev, SSS_FUNC_MOD_NORMAL_HOST);
return 0;
}
if (svc_cap->srv_multi_host_mode >= SSS_HOST_MODE_MAX) {
SSS_SET_FUNC_HOST_MODE(hwdev, SSS_FUNC_MOD_NORMAL_HOST);
return 0;
}
handler[svc_cap->srv_multi_host_mode](hwdev);
ret = sss_enable_multi_host(hwdev);
if (ret != 0) {
sdk_err(hwdev->dev_hdl, "Fail to init function mode\n");
return ret;
}
return 0;
}
static void sss_deinit_host_mode(struct sss_hwdev *hwdev)
{
sss_disable_multi_host(hwdev);
}
static u64 sss_get_real_time(void)
{
struct timeval val = {0};
do_gettimeofday(&val);
return (u64)val.tv_sec * MSEC_PER_SEC +
(u64)val.tv_usec / USEC_PER_MSEC;
}
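sss_get_real_time() folds a seconds/microseconds pair into a single millisecond count for the firmware time sync. The arithmetic in isolation (plain C outside the kernel, names hypothetical):

```c
#include <stdint.h>

#define MSEC_PER_SEC  1000ULL
#define USEC_PER_MSEC 1000ULL

/* Convert a (seconds, microseconds) timestamp to whole milliseconds. */
static uint64_t real_time_ms(uint64_t tv_sec, uint64_t tv_usec)
{
	return tv_sec * MSEC_PER_SEC + tv_usec / USEC_PER_MSEC;
}
```

Note the integer division: sub-millisecond residue in `tv_usec` is dropped, which is acceptable for a once-per-hour sync.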
static void sss_auto_sync_time_work(struct work_struct *work)
{
struct delayed_work *delay = to_delayed_work(work);
struct sss_hwdev *hwdev = container_of(delay,
struct sss_hwdev, sync_time_task);
int ret;
ret = sss_chip_sync_time(hwdev, sss_get_real_time());
if (ret != 0)
sdk_err(hwdev->dev_hdl,
"Fail to sync UTC time to firmware, errno:%d.\n", ret);
queue_delayed_work(hwdev->workq, &hwdev->sync_time_task,
msecs_to_jiffies(SSS_SYNFW_TIME_PERIOD));
}
static void sss_auto_channel_detect_work(struct work_struct *work)
{
struct delayed_work *delay = to_delayed_work(work);
struct sss_hwdev *hwdev = container_of(delay,
struct sss_hwdev, channel_detect_task);
struct sss_card_node *chip_node = NULL;
sss_chip_comm_channel_detect(hwdev);
chip_node = hwdev->chip_node;
if (!atomic_read(&chip_node->channel_timeout_cnt))
queue_delayed_work(hwdev->workq, &hwdev->channel_detect_task,
msecs_to_jiffies(SSS_CHANNEL_DETECT_PERIOD));
}
static void sss_hwdev_init_work(struct sss_hwdev *hwdev)
{
if (SSS_GET_FUNC_TYPE(hwdev) != SSS_FUNC_TYPE_PPF)
return;
INIT_DELAYED_WORK(&hwdev->sync_time_task, sss_auto_sync_time_work);
queue_delayed_work(hwdev->workq, &hwdev->sync_time_task,
msecs_to_jiffies(SSS_SYNFW_TIME_PERIOD));
if (SSS_COMM_SUPPORT_CHANNEL_DETECT(hwdev)) {
INIT_DELAYED_WORK(&hwdev->channel_detect_task,
sss_auto_channel_detect_work);
queue_delayed_work(hwdev->workq, &hwdev->channel_detect_task,
msecs_to_jiffies(SSS_CHANNEL_DETECT_PERIOD));
}
}
static void sss_hwdev_deinit_work(struct sss_hwdev *hwdev)
{
if (SSS_GET_FUNC_TYPE(hwdev) != SSS_FUNC_TYPE_PPF)
return;
if (SSS_COMM_SUPPORT_CHANNEL_DETECT(hwdev)) {
hwdev->features[0] &= ~(SSS_COMM_F_CHANNEL_DETECT);
cancel_delayed_work_sync(&hwdev->channel_detect_task);
}
cancel_delayed_work_sync(&hwdev->sync_time_task);
}
int sss_init_hwdev(struct sss_pci_adapter *adapter)
{
struct sss_hwdev *hwdev;
int ret;
hwdev = sss_alloc_hwdev();
if (!hwdev)
return -ENOMEM;
sss_init_hwdev_param(hwdev, adapter);
adapter->hwdev = hwdev;
ret = sss_hwif_init(adapter);
if (ret != 0) {
sdk_err(hwdev->dev_hdl, "Fail to init hwif\n");
goto init_hwif_err;
}
sss_set_chip_present_flag(hwdev, true);
hwdev->workq = alloc_workqueue(SSS_HWDEV_WQ_NAME, WQ_MEM_RECLAIM, SSS_WQ_MAX_REQ);
if (!hwdev->workq) {
sdk_err(hwdev->dev_hdl, "Fail to alloc hardware workq\n");
goto alloc_workq_err;
}
sss_create_heartbeat_timer(hwdev);
ret = sss_init_mgmt_info(hwdev);
if (ret != 0) {
sdk_err(hwdev->dev_hdl, "Fail to init mgmt info\n");
goto init_mgmt_info_err;
}
ret = sss_init_mgmt_channel(hwdev);
if (ret != 0) {
sdk_err(hwdev->dev_hdl, "Fail to init mgmt channel\n");
goto init_mgmt_channel_err;
}
#ifdef HAVE_DEVLINK_FLASH_UPDATE_PARAMS
ret = sss_init_devlink(hwdev);
if (ret != 0) {
sdk_err(hwdev->dev_hdl, "Fail to init devlink\n");
goto init_devlink_err;
}
#endif
ret = sss_init_capability(hwdev);
if (ret != 0) {
sdk_err(hwdev->dev_hdl, "Fail to init capability\n");
goto init_cap_err;
}
ret = sss_init_host_mode(hwdev);
if (ret != 0) {
sdk_err(hwdev->dev_hdl, "Fail to init host mode\n");
goto init_multi_host_fail;
}
sss_hwdev_init_work(hwdev);
ret = sss_chip_do_nego_feature(hwdev, SSS_MGMT_MSG_SET_CMD,
hwdev->features, SSS_MAX_FEATURE_QWORD);
if (ret != 0) {
sdk_err(hwdev->dev_hdl, "Fail to set comm features\n");
goto set_feature_err;
}
return 0;
set_feature_err:
sss_hwdev_deinit_work(hwdev);
sss_deinit_host_mode(hwdev);
init_multi_host_fail:
sss_deinit_capability(hwdev);
init_cap_err:
#ifdef HAVE_DEVLINK_FLASH_UPDATE_PARAMS
sss_deinit_devlink(hwdev);
init_devlink_err:
#endif
sss_deinit_mgmt_channel(hwdev);
init_mgmt_channel_err:
sss_deinit_mgmt_info(hwdev);
init_mgmt_info_err:
sss_destroy_heartbeat_timer(hwdev);
destroy_workqueue(hwdev->workq);
alloc_workq_err:
sss_hwif_deinit(hwdev);
init_hwif_err:
sss_free_hwdev(hwdev);
adapter->hwdev = NULL;
return -EFAULT;
}
void sss_deinit_hwdev(void *hwdev)
{
struct sss_hwdev *dev = hwdev;
u64 drv_features[SSS_MAX_FEATURE_QWORD] = {0};
sss_chip_do_nego_feature(hwdev, SSS_MGMT_MSG_SET_CMD,
drv_features, SSS_MAX_FEATURE_QWORD);
sss_hwdev_deinit_work(dev);
if (SSS_IS_MULTI_HOST(dev))
sss_disable_multi_host(dev);
sss_hwdev_flush_io(dev, SSS_CHANNEL_COMM);
sss_deinit_capability(dev);
#ifdef HAVE_DEVLINK_FLASH_UPDATE_PARAMS
sss_deinit_devlink(dev);
#endif
sss_deinit_mgmt_channel(dev);
sss_deinit_mgmt_info(dev);
sss_destroy_heartbeat_timer(hwdev);
destroy_workqueue(dev->workq);
sss_hwif_deinit(dev);
sss_free_hwdev(dev);
}
void sss_hwdev_stop(void *hwdev)
{
struct sss_hwdev *dev = hwdev;
if (!hwdev)
return;
sss_set_chip_present_flag(hwdev, false);
sdk_info(dev->dev_hdl, "Set card absent\n");
sss_force_complete_all(dev);
sdk_info(dev->dev_hdl, "All messages interacting with the chip will stop\n");
}
void sss_hwdev_detach(void *hwdev)
{
if (!hwdev)
return;
if (!sss_chip_get_present_state((struct sss_hwdev *)hwdev)) {
sss_set_chip_present_flag(hwdev, false);
sss_force_complete_all(hwdev);
}
}
void sss_hwdev_shutdown(void *hwdev)
{
struct sss_hwdev *dev = hwdev;
if (!hwdev)
return;
if (SSS_IS_SLAVE_HOST(dev))
sss_chip_set_slave_host_status(hwdev, sss_get_pcie_itf_id(hwdev), false);
}
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_HWDEV_INIT_H
#define SSS_HWDEV_INIT_H
#include "sss_adapter.h"
int sss_init_hwdev(struct sss_pci_adapter *adapter);
void sss_deinit_hwdev(void *hwdev);
void sss_hwdev_detach(void *hwdev);
void sss_hwdev_stop(void *hwdev);
void sss_hwdev_shutdown(void *hwdev);
#endif
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#define pr_fmt(fmt) KBUILD_MODNAME ": [BASE]" fmt
#include "sss_kernel.h"
#include "sss_hw.h"
#include "sss_hwdev.h"
#include "sss_hwif_ctrlq_init.h"
#include "sss_hwif_api.h"
#include "sss_hwif_mbx.h"
#include "sss_common.h"
#define SSS_FLR_TIMEOUT 1000
#define SSS_FLR_TIMEOUT_ONCE 10000
static enum sss_process_ret sss_check_flr_finish_handler(void *priv_data)
{
struct sss_hwif *hwif = priv_data;
enum sss_pf_status status;
status = sss_chip_get_pf_status(hwif);
if (status == SSS_PF_STATUS_FLR_FINISH_FLAG) {
sss_chip_set_pf_status(hwif, SSS_PF_STATUS_ACTIVE_FLAG);
return SSS_PROCESS_OK;
}
return SSS_PROCESS_DOING;
}
static int sss_wait_for_flr_finish(struct sss_hwif *hwif)
{
return sss_check_handler_timeout(hwif, sss_check_flr_finish_handler,
SSS_FLR_TIMEOUT, SSS_FLR_TIMEOUT_ONCE);
}
static int sss_msg_to_mgmt_no_ack(void *hwdev, u8 mod, u16 cmd,
void *buf_in, u16 in_size, u16 channel)
{
if (!hwdev)
return -EINVAL;
if (sss_get_dev_present_flag(hwdev) == 0)
return -EPERM;
return sss_send_mbx_to_mgmt_no_ack(hwdev, mod, cmd, buf_in,
in_size, channel);
}
static int sss_chip_flush_doorbell(struct sss_hwdev *hwdev, u16 channel)
{
struct sss_hwif *hwif = hwdev->hwif;
struct sss_cmd_clear_doorbell clear_db = {0};
u16 out_len = sizeof(clear_db);
int ret;
clear_db.func_id = SSS_GET_HWIF_GLOBAL_ID(hwif);
ret = sss_sync_send_msg_ch(hwdev, SSS_COMM_MGMT_CMD_FLUSH_DOORBELL,
&clear_db, sizeof(clear_db),
&clear_db, &out_len, channel);
if (SSS_ASSERT_SEND_MSG_RETURN(ret, out_len, &clear_db)) {
sdk_warn(hwdev->dev_hdl,
"Fail to flush doorbell, ret: %d, status: 0x%x, out_size: 0x%x, channel: 0x%x\n",
ret, clear_db.head.state, out_len, channel);
if (ret == 0)
return -EFAULT;
}
return ret;
}
static int sss_chip_flush_resource(struct sss_hwdev *hwdev, u16 channel)
{
struct sss_hwif *hwif = hwdev->hwif;
struct sss_cmd_clear_resource clr_res = {0};
int ret;
clr_res.func_id = SSS_GET_HWIF_GLOBAL_ID(hwif);
ret = sss_msg_to_mgmt_no_ack(hwdev, SSS_MOD_TYPE_COMM,
SSS_COMM_MGMT_CMD_START_FLUSH, &clr_res,
sizeof(clr_res), channel);
if (ret != 0) {
sdk_warn(hwdev->dev_hdl, "Fail to notice flush message, ret: %d, channel: 0x%x\n",
ret, channel);
}
return ret;
}
int sss_hwdev_flush_io(struct sss_hwdev *hwdev, u16 channel)
{
struct sss_hwif *hwif = hwdev->hwif;
int err;
int ret = 0;
if (hwdev->chip_present_flag == 0)
return 0;
if (SSS_GET_FUNC_TYPE(hwdev) != SSS_FUNC_TYPE_VF)
msleep(100);
err = sss_wait_ctrlq_stop(hwdev);
if (err != 0) {
sdk_warn(hwdev->dev_hdl, "Fail to wait ctrlq stop\n");
ret = err;
}
sss_chip_disable_doorbell(hwif);
err = sss_chip_flush_doorbell(hwdev, channel);
if (err != 0)
ret = err;
if (SSS_GET_FUNC_TYPE(hwdev) != SSS_FUNC_TYPE_VF)
sss_chip_set_pf_status(hwif, SSS_PF_STATUS_FLR_START_FLAG);
else
msleep(100);
err = sss_chip_flush_resource(hwdev, channel);
if (err != 0)
ret = err;
if (SSS_GET_FUNC_TYPE(hwdev) != SSS_FUNC_TYPE_VF) {
err = sss_wait_for_flr_finish(hwif);
if (err != 0) {
sdk_warn(hwdev->dev_hdl, "Wait firmware FLR timeout\n");
ret = err;
}
}
sss_chip_enable_doorbell(hwif);
err = sss_reinit_ctrlq_ctx(hwdev);
if (err != 0) {
sdk_warn(hwdev->dev_hdl, "Fail to reinit ctrlq ctx\n");
ret = err;
}
return ret;
}
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_HWDEV_IO_FLUSH_H
#define SSS_HWDEV_IO_FLUSH_H
#include "sss_hwdev.h"
int sss_hwdev_flush_io(struct sss_hwdev *hwdev, u16 channel);
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_HWDEV_LINK_H
#define SSS_HWDEV_LINK_H
#include "sss_kernel.h"
#include "sss_hwdev.h"
#include "sss_hw_mbx_msg.h"
int sss_init_devlink(struct sss_hwdev *hwdev);
void sss_deinit_devlink(struct sss_hwdev *hwdev);
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_HWDEV_MGMT_CHANNEL_H
#define SSS_HWDEV_MGMT_CHANNEL_H
#include "sss_hwdev.h"
#define SSS_STACK_DATA_LEN 1024
#define SSS_XREGS_NUM 31
#define SSS_MPU_LASTWORD_SIZE 1024
struct sss_watchdog_info {
struct sss_mgmt_msg_head head;
u32 cur_time_h;
u32 cur_time_l;
u32 task_id;
u32 rsvd;
u64 pc;
u64 elr;
u64 spsr;
u64 far;
u64 esr;
u64 xzr;
u64 x30;
u64 x29;
u64 x28;
u64 x27;
u64 x26;
u64 x25;
u64 x24;
u64 x23;
u64 x22;
u64 x21;
u64 x20;
u64 x19;
u64 x18;
u64 x17;
u64 x16;
u64 x15;
u64 x14;
u64 x13;
u64 x12;
u64 x11;
u64 x10;
u64 x09;
u64 x08;
u64 x07;
u64 x06;
u64 x05;
u64 x04;
u64 x03;
u64 x02;
u64 x01;
u64 x00;
u64 stack_top;
u64 stack_bottom;
u64 sp;
u32 cur_used;
u32 peak_used;
u32 is_overflow;
u32 stack_actlen;
u8 stack_data[SSS_STACK_DATA_LEN];
};
struct sss_cpu_tick {
u32 tick_cnt_h; /* The cycle count higher 32 bits */
u32 tick_cnt_l; /* The cycle count lower 32 bits */
};
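The cycle counter arrives split across two 32-bit fields; consumers recombine it with a shift-and-or. A sketch of that recombination (not driver code):

```c
#include <stdint.h>

/* Recombine the split 64-bit cycle counter from its high/low halves. */
static uint64_t cpu_tick_to_u64(uint32_t tick_cnt_h, uint32_t tick_cnt_l)
{
	return ((uint64_t)tick_cnt_h << 32) | tick_cnt_l;
}
```

The cast on `tick_cnt_h` matters: shifting a 32-bit value by 32 without widening is undefined behavior in C.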
struct sss_ax_exc_reg_info {
u64 ttbr0;
u64 ttbr1;
u64 tcr;
u64 mair;
u64 sctlr;
u64 vbar;
u64 current_el;
u64 sp;
u64 elr;
u64 spsr;
u64 far_r;
u64 esr;
u64 xzr;
u64 xregs[SSS_XREGS_NUM]; /* 0~30: x30~x0 */
};
struct sss_exc_info {
char os_ver[48]; /* OS version */
char app_ver[64]; /* Product version */
u32 exc_cause; /* Cause of exception */
u32 thread_type; /* The thread type before the exception */
u32 thread_id; /* Thread PID before exception */
u16 byte_order; /* Byte order */
u16 cpu_type; /* CPU type */
u32 cpu_id; /* CPU ID */
struct sss_cpu_tick cpu_tick; /* CPU Tick */
u32 nest_cnt; /* The exception nested count */
u32 fatal_errno; /* Fatal error code */
u64 uw_sp; /* The stack pointer before the exception */
u64 stack_bottom; /* Bottom of the stack before the exception */
/* The in-core register context information; 82\57 must be at 152 bytes.
 * If it has changed, the OS_EXC_REGINFO_OFFSET macro in sre_platform.eh
 * must be updated.
 */
struct sss_ax_exc_reg_info reg_info;
};
struct sss_lastword_info {
struct sss_mgmt_msg_head head;
struct sss_exc_info stack_info;
/* Stack details, Actual stack size(<=1024) */
u32 stack_actlen;
/* More than 1024, it will be truncated */
u8 stack_data[SSS_MPU_LASTWORD_SIZE];
};
int sss_init_mgmt_channel(struct sss_hwdev *hwdev);
void sss_deinit_mgmt_channel(struct sss_hwdev *hwdev);
#endif
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#define pr_fmt(fmt) KBUILD_MODNAME ": [BASE]" fmt
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/mutex.h>
#include "sss_kernel.h"
#include "sss_hwdev.h"
#include "sss_hw_svc_cap.h"
#include "sss_hwif_irq.h"
static int sss_init_ceq_info(struct sss_hwdev *hwdev)
{
u8 i;
struct sss_eq_info *ceq_info = &hwdev->mgmt_info->eq_info;
struct sss_eq_cfg *ceq = NULL;
ceq_info->ceq_num = SSS_GET_HWIF_CEQ_NUM(hwdev->hwif);
ceq_info->remain_ceq_num = ceq_info->ceq_num;
mutex_init(&ceq_info->eq_mutex);
sdk_info(hwdev->dev_hdl, "Mgmt ceq info: ceq_num = 0x%x, remain_ceq_num = 0x%x\n",
ceq_info->ceq_num, ceq_info->remain_ceq_num);
if (ceq_info->ceq_num == 0) {
sdk_err(hwdev->dev_hdl, "Mgmt ceq info: ceq_num = 0\n");
return -EFAULT;
}
ceq = kcalloc(ceq_info->ceq_num, sizeof(*ceq), GFP_KERNEL);
if (!ceq)
return -ENOMEM;
for (i = 0; i < ceq_info->ceq_num; i++) {
ceq[i].id = i + 1;
ceq[i].free = SSS_CFG_FREE;
ceq[i].type = SSS_SERVICE_TYPE_MAX;
}
ceq_info->eq = ceq;
return 0;
}
static void sss_deinit_ceq_info(struct sss_hwdev *hwdev)
{
struct sss_eq_info *ceq_info = &hwdev->mgmt_info->eq_info;
kfree(ceq_info->eq);
}
int sss_init_mgmt_info(struct sss_hwdev *hwdev)
{
int ret;
struct sss_mgmt_info *mgmt_info;
mgmt_info = kzalloc(sizeof(*mgmt_info), GFP_KERNEL);
if (!mgmt_info)
return -ENOMEM;
mgmt_info->hwdev = hwdev;
hwdev->mgmt_info = mgmt_info;
ret = sss_init_ceq_info(hwdev);
if (ret != 0) {
sdk_err(hwdev->dev_hdl, "Fail to init ceq info, ret: %d\n", ret);
goto init_ceq_info_err;
}
ret = sss_init_irq_info(hwdev);
if (ret != 0) {
sdk_err(hwdev->dev_hdl, "Fail to init irq info, ret: %d\n", ret);
goto init_irq_info_err;
}
return 0;
init_irq_info_err:
sss_deinit_ceq_info(hwdev);
init_ceq_info_err:
kfree(mgmt_info);
hwdev->mgmt_info = NULL;
return ret;
}
void sss_deinit_mgmt_info(struct sss_hwdev *hwdev)
{
sss_deinit_irq_info(hwdev);
sss_deinit_ceq_info(hwdev);
kfree(hwdev->mgmt_info);
hwdev->mgmt_info = NULL;
}
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_HWDEV_MGMT_INFO_H
#define SSS_HWDEV_MGMT_INFO_H
#include "sss_hwdev.h"
int sss_init_mgmt_info(struct sss_hwdev *dev);
void sss_deinit_mgmt_info(struct sss_hwdev *dev);
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_HWIF_ADM_H
#define SSS_HWIF_ADM_H
#include <linux/types.h>
int sss_sync_send_adm_msg(void *hwdev, u8 mod, u16 cmd, void *buf_in,
u16 in_size, void *buf_out, u16 *out_size, u32 timeout);
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_HWIF_ADM_COMMON_H
#define SSS_HWIF_ADM_COMMON_H
/* ADM_STATUS_0 CSR: 0x0030+adm msg id*0x080 */
#define SSS_ADM_MSG_STATE_CI_MASK 0xFFFFFFU
#define SSS_ADM_MSG_STATE_CI_SHIFT 0
#define SSS_ADM_MSG_STATE_FSM_MASK 0xFU
#define SSS_ADM_MSG_STATE_FSM_SHIFT 24
#define SSS_ADM_MSG_STATE_CHKSUM_ERR_MASK 0x3U
#define SSS_ADM_MSG_STATE_CHKSUM_ERR_SHIFT 28
#define SSS_ADM_MSG_STATE_CPLD_ERR_MASK 0x1U
#define SSS_ADM_MSG_STATE_CPLD_ERR_SHIFT 30
#define SSS_GET_ADM_MSG_STATE(val, member) \
(((val) >> SSS_ADM_MSG_STATE_##member##_SHIFT) & \
SSS_ADM_MSG_STATE_##member##_MASK)
#endif
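SSS_GET_ADM_MSG_STATE() is a standard shift-and-mask field extractor over the ADM status CSR: CI occupies bits 0-23, the FSM state bits 24-27, and the checksum error bits 28-29. A self-contained check of that layout (userspace C; the sample value is hypothetical):

```c
#include <stdint.h>

/* Field layout copied from the driver's ADM_STATUS_0 definitions. */
#define SSS_ADM_MSG_STATE_CI_MASK          0xFFFFFFU
#define SSS_ADM_MSG_STATE_CI_SHIFT         0
#define SSS_ADM_MSG_STATE_FSM_MASK         0xFU
#define SSS_ADM_MSG_STATE_FSM_SHIFT        24
#define SSS_ADM_MSG_STATE_CHKSUM_ERR_MASK  0x3U
#define SSS_ADM_MSG_STATE_CHKSUM_ERR_SHIFT 28

#define SSS_GET_ADM_MSG_STATE(val, member) \
	(((val) >> SSS_ADM_MSG_STATE_##member##_SHIFT) & \
	SSS_ADM_MSG_STATE_##member##_MASK)

/* Build a register value with CI = 0x123456, FSM = 0xA, CHKSUM_ERR = 0x1. */
static uint32_t sample_state(void)
{
	return 0x123456U | (0xAU << 24) | (0x1U << 28);
}
```

Token pasting (`##member##`) lets one macro serve every field, so adding a field only requires a new MASK/SHIFT pair.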
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_HWIF_ADM_INIT_H
#define SSS_HWIF_ADM_INIT_H
#include "sss_hwdev.h"
int sss_hwif_init_adm(struct sss_hwdev *hwdev);
void sss_hwif_deinit_adm(struct sss_hwdev *hwdev);
void sss_complete_adm_event(struct sss_hwdev *hwdev);
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_HWIF_AEQ_H
#define SSS_HWIF_AEQ_H
#include "sss_hw_irq.h"
#include "sss_hw_aeq.h"
#include "sss_hwdev.h"
#include "sss_aeq_info.h"
void sss_deinit_aeq(struct sss_hwdev *hwdev);
void sss_get_aeq_irq(struct sss_hwdev *hwdev,
struct sss_irq_desc *irq_array, u16 *irq_num);
void sss_dump_aeq_info(struct sss_hwdev *hwdev);
int sss_aeq_register_hw_cb(void *hwdev, void *pri_handle,
enum sss_aeq_hw_event event, sss_aeq_hw_event_handler_t event_handler);
void sss_aeq_unregister_hw_cb(void *hwdev, enum sss_aeq_hw_event event);
int sss_aeq_register_swe_cb(void *hwdev, void *pri_handle,
enum sss_aeq_sw_event event,
sss_aeq_sw_event_handler_t sw_event_handler);
void sss_aeq_unregister_swe_cb(void *hwdev, enum sss_aeq_sw_event event);
int sss_hwif_init_aeq(struct sss_hwdev *hwdev);
void sss_hwif_deinit_aeq(struct sss_hwdev *hwdev);
int sss_init_aeq_msix_attr(struct sss_hwdev *hwdev);
u8 sss_sw_aeqe_handler(void *dev, u8 aeq_event, u8 *data);
#endif
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#define pr_fmt(fmt) KBUILD_MODNAME ": [BASE]" fmt
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/delay.h>
#include <linux/module.h>
#include "sss_kernel.h"
#include "sss_hw.h"
#include "sss_csr.h"
#include "sss_hwdev.h"
#include "sss_hwif_api.h"
#include "sss_hwif_export.h"
#define SSS_GET_REG_FLAG(reg) ((reg) & (~(SSS_CSR_FLAG_MASK)))
#define SSS_GET_REG_ADDR(reg) ((reg) & (SSS_CSR_FLAG_MASK))
#define SSS_PAGE_SIZE_HW(pg_size) ((u8)ilog2((u32)((pg_size) >> 12)))
#define SSS_CLEAR_SLAVE_HOST_STATUS(host_id, val) ((val) & (~(1U << (host_id))))
#define SSS_SET_SLAVE_HOST_STATUS(host_id, enable) (((u8)(enable) & 1U) << (host_id))
#define SSS_MULT_HOST_SLAVE_STATUS_ADDR (SSS_MGMT_FLAG + 0xDF30)
u32 sss_chip_read_reg(struct sss_hwif *hwif, u32 reg)
{
if (SSS_GET_REG_FLAG(reg) == SSS_MGMT_FLAG)
return be32_to_cpu(readl(hwif->mgmt_reg_base +
SSS_GET_REG_ADDR(reg)));
else
return be32_to_cpu(readl(hwif->cfg_reg_base +
SSS_GET_REG_ADDR(reg)));
}
void sss_chip_write_reg(struct sss_hwif *hwif, u32 reg, u32 val)
{
if (SSS_GET_REG_FLAG(reg) == SSS_MGMT_FLAG)
writel(cpu_to_be32(val),
hwif->mgmt_reg_base + SSS_GET_REG_ADDR(reg));
else
writel(cpu_to_be32(val),
hwif->cfg_reg_base + SSS_GET_REG_ADDR(reg));
}
bool sss_chip_get_present_state(void *hwdev)
{
u32 val;
val = sss_chip_read_reg(SSS_TO_HWIF(hwdev), SSS_CSR_HW_ATTR1_ADDR);
if (val == SSS_PCIE_LINK_DOWN) {
sdk_warn(SSS_TO_DEV(hwdev), "Card is not present\n");
return false;
}
return true;
}
u32 sss_chip_get_pcie_link_status(void *hwdev)
{
u32 val;
if (!hwdev)
return SSS_PCIE_LINK_DOWN;
val = sss_chip_read_reg(SSS_TO_HWIF(hwdev), SSS_CSR_HW_ATTR1_ADDR);
if (val == SSS_PCIE_LINK_DOWN)
return val;
return !SSS_GET_AF1(val, MGMT_INIT_STATUS);
}
void sss_chip_set_pf_status(struct sss_hwif *hwif,
enum sss_pf_status status)
{
u32 val;
if (SSS_GET_HWIF_FUNC_TYPE(hwif) == SSS_FUNC_TYPE_VF)
return;
val = sss_chip_read_reg(hwif, SSS_CSR_HW_ATTR6_ADDR);
val = SSS_CLEAR_AF6(val, PF_STATUS);
val |= SSS_SET_AF6(status, PF_STATUS);
sss_chip_write_reg(hwif, SSS_CSR_HW_ATTR6_ADDR, val);
}
enum sss_pf_status sss_chip_get_pf_status(struct sss_hwif *hwif)
{
u32 val = sss_chip_read_reg(hwif, SSS_CSR_HW_ATTR6_ADDR);
return SSS_GET_AF6(val, PF_STATUS);
}
void sss_chip_enable_doorbell(struct sss_hwif *hwif)
{
u32 addr;
u32 val;
addr = SSS_CSR_HW_ATTR4_ADDR;
val = sss_chip_read_reg(hwif, addr);
val = SSS_CLEAR_AF4(val, DOORBELL_CTRL);
val |= SSS_SET_AF4(DB_ENABLE, DOORBELL_CTRL);
sss_chip_write_reg(hwif, addr, val);
}
void sss_chip_disable_doorbell(struct sss_hwif *hwif)
{
u32 addr;
u32 val;
addr = SSS_CSR_HW_ATTR4_ADDR;
val = sss_chip_read_reg(hwif, addr);
val = SSS_CLEAR_AF4(val, DOORBELL_CTRL);
val |= SSS_SET_AF4(DB_DISABLE, DOORBELL_CTRL);
sss_chip_write_reg(hwif, addr, val);
}
void sss_free_db_id(struct sss_hwif *hwif, u32 id)
{
struct sss_db_pool *pool = &hwif->db_pool;
if (id >= pool->bit_size)
return;
spin_lock(&pool->id_lock);
clear_bit((int)id, pool->bitmap);
spin_unlock(&pool->id_lock);
}
int sss_alloc_db_id(struct sss_hwif *hwif, u32 *id)
{
struct sss_db_pool *pool = &hwif->db_pool;
u32 pg_id;
spin_lock(&pool->id_lock);
pg_id = (u32)find_first_zero_bit(pool->bitmap, pool->bit_size);
if (pg_id == pool->bit_size) {
spin_unlock(&pool->id_lock);
return -ENOMEM;
}
set_bit(pg_id, pool->bitmap);
spin_unlock(&pool->id_lock);
*id = pg_id;
return 0;
}
void sss_dump_chip_err_info(struct sss_hwdev *hwdev)
{
u32 value;
if (sss_get_func_type(hwdev) == SSS_FUNC_TYPE_VF)
return;
value = sss_chip_read_reg(hwdev->hwif, SSS_CHIP_BASE_INFO_ADDR);
sdk_warn(hwdev->dev_hdl, "Chip base info: 0x%08x\n", value);
value = sss_chip_read_reg(hwdev->hwif, SSS_MGMT_HEALTH_STATUS_ADDR);
sdk_warn(hwdev->dev_hdl, "Mgmt CPU health status: 0x%08x\n", value);
value = sss_chip_read_reg(hwdev->hwif, SSS_CHIP_ERR_STATUS0_ADDR);
sdk_warn(hwdev->dev_hdl, "Chip fatal error status0: 0x%08x\n", value);
value = sss_chip_read_reg(hwdev->hwif, SSS_CHIP_ERR_STATUS1_ADDR);
sdk_warn(hwdev->dev_hdl, "Chip fatal error status1: 0x%08x\n", value);
value = sss_chip_read_reg(hwdev->hwif, SSS_ERR_INFO0_ADDR);
sdk_warn(hwdev->dev_hdl, "Chip exception info0: 0x%08x\n", value);
value = sss_chip_read_reg(hwdev->hwif, SSS_ERR_INFO1_ADDR);
sdk_warn(hwdev->dev_hdl, "Chip exception info1: 0x%08x\n", value);
value = sss_chip_read_reg(hwdev->hwif, SSS_ERR_INFO2_ADDR);
sdk_warn(hwdev->dev_hdl, "Chip exception info2: 0x%08x\n", value);
}
u8 sss_chip_get_host_ppf_id(struct sss_hwdev *hwdev, u8 host_id)
{
u32 addr;
u32 val;
if (!hwdev)
return 0;
addr = SSS_CSR_FUNC_PPF_ELECT(host_id);
val = sss_chip_read_reg(hwdev->hwif, addr);
return SSS_GET_PPF_ELECT_PORT(val, ID);
}
static void sss_init_eq_msix_cfg(void *hwdev,
struct sss_cmd_msix_config *cmd_msix,
struct sss_irq_cfg *info)
{
cmd_msix->opcode = SSS_MGMT_MSG_SET_CMD;
cmd_msix->func_id = sss_get_global_func_id(hwdev);
cmd_msix->msix_index = (u16)info->msix_id;
cmd_msix->lli_credit_cnt = info->lli_credit;
cmd_msix->lli_timer_cnt = info->lli_timer;
cmd_msix->pending_cnt = info->pending;
cmd_msix->coalesce_timer_cnt = info->coalesc_timer;
cmd_msix->resend_timer_cnt = info->resend_timer;
}
int sss_chip_set_eq_msix_attr(void *hwdev,
struct sss_irq_cfg *intr_info, u16 ch)
{
int ret;
struct sss_cmd_msix_config cmd_msix = {0};
u16 out_len = sizeof(cmd_msix);
sss_init_eq_msix_cfg(hwdev, &cmd_msix, intr_info);
ret = sss_sync_send_msg_ch(hwdev, SSS_COMM_MGMT_CMD_CFG_MSIX_CTRL_REG,
&cmd_msix, sizeof(cmd_msix), &cmd_msix, &out_len, ch);
if (SSS_ASSERT_SEND_MSG_RETURN(ret, out_len, &cmd_msix)) {
sdk_err(SSS_TO_DEV(hwdev),
"Fail to set eq msix cfg, ret: %d, status: 0x%x, out_len: 0x%x, ch: 0x%x\n",
ret, cmd_msix.head.state, out_len, ch);
return -EINVAL;
}
return 0;
}
int sss_chip_set_wq_page_size(void *hwdev, u16 func_id, u32 page_size)
{
int ret;
struct sss_cmd_wq_page_size cmd_page = {0};
u16 out_len = sizeof(cmd_page);
cmd_page.opcode = SSS_MGMT_MSG_SET_CMD;
cmd_page.func_id = func_id;
cmd_page.page_size = SSS_PAGE_SIZE_HW(page_size);
ret = sss_sync_send_msg(hwdev, SSS_COMM_MGMT_CMD_CFG_PAGESIZE,
&cmd_page, sizeof(cmd_page), &cmd_page, &out_len);
if (SSS_ASSERT_SEND_MSG_RETURN(ret, out_len, &cmd_page)) {
sdk_err(SSS_TO_DEV(hwdev),
"Fail to set wq page size, ret: %d, status: 0x%x, out_len: 0x%0x\n",
ret, cmd_page.head.state, out_len);
return -EFAULT;
}
return 0;
}
int sss_chip_set_ceq_attr(struct sss_hwdev *hwdev, u16 qid,
u32 attr0, u32 attr1)
{
int ret;
struct sss_cmd_ceq_ctrl_reg cmd_ceq = {0};
u16 out_len = sizeof(cmd_ceq);
cmd_ceq.func_id = sss_get_global_func_id(hwdev);
cmd_ceq.qid = qid;
cmd_ceq.ctrl0 = attr0;
cmd_ceq.ctrl1 = attr1;
ret = sss_sync_send_msg(hwdev, SSS_COMM_MGMT_CMD_SET_CEQ_CTRL_REG,
&cmd_ceq, sizeof(cmd_ceq), &cmd_ceq, &out_len);
if (SSS_ASSERT_SEND_MSG_RETURN(ret, out_len, &cmd_ceq)) {
sdk_err(hwdev->dev_hdl,
"Fail to set ceq %u ctrl, ret: %d status: 0x%x, out_len: 0x%x\n",
qid, ret, cmd_ceq.head.state, out_len);
return -EFAULT;
}
return 0;
}
void sss_chip_set_slave_host_status(void *dev, u8 host_id, bool enable)
{
u32 val;
struct sss_hwdev *hwdev = dev;
if (SSS_GET_FUNC_TYPE(hwdev) != SSS_FUNC_TYPE_PPF)
return;
val = sss_chip_read_reg(hwdev->hwif, SSS_MULT_HOST_SLAVE_STATUS_ADDR);
val = SSS_CLEAR_SLAVE_HOST_STATUS(host_id, val);
val |= SSS_SET_SLAVE_HOST_STATUS(host_id, !!enable);
sss_chip_write_reg(hwdev->hwif, SSS_MULT_HOST_SLAVE_STATUS_ADDR, val);
sdk_info(hwdev->dev_hdl, "Set slave host %d status %d, reg value: 0x%x\n",
host_id, enable, val);
}
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_HWIF_CEQ_H
#define SSS_HWIF_CEQ_H
#include "sss_hw_ceq.h"
#include "sss_ceq_info.h"
#include "sss_hwdev.h"
int sss_ceq_register_cb(void *hwdev, void *data,
enum sss_ceq_event ceq_event, sss_ceq_event_handler_t event_handler);
void sss_ceq_unregister_cb(void *hwdev, enum sss_ceq_event ceq_event);
int sss_hwif_init_ceq(struct sss_hwdev *hwdev);
void sss_hwif_deinit_ceq(struct sss_hwdev *hwdev);
void sss_dump_ceq_info(struct sss_hwdev *hwdev);
int sss_init_ceq_msix_attr(struct sss_hwdev *hwdev);
#endif
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright(c) 2021 3snic Technologies Co., Ltd */
#ifndef SSS_PCI_SHUTDOWN_H
#define SSS_PCI_SHUTDOWN_H
#include <linux/pci.h>
void sss_pci_shutdown(struct pci_dev *pdev);
#endif