Commit 5185ad61 authored by David S. Miller

Merge tag 'mlx5-updates-2017-06-27' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5-updates-2017-06-27 (Innova IPsec offload support)

This patchset adds support for Innova IPSec network interface card.

About Innova device:
--------------------
Innova is a network card with a ConnectX chip and an FPGA chip as a
bump-on-the-wire.

               Internal
+----------+   Link       +-----------------+
|          +--------------+      FPGA       |  +------+
| ConnectX |              |  Shell          +--+ QSFP |
|          +--------------+    +-------+    |  | Port |
+----------+      I2C     |    |  SBU  |    |  +------+
                          |    +-------+    |
                          +--+----------+---+
                             |          |
                          +--+--+   +---+---+
                          | DDR |   | Flash |
                          +-----+   +-------+

The FPGA synthesized logic is loaded from dedicated flash storage and has
access to its own dedicated DDR RAM.
The ConnectX chip firmware programs the FPGA by accessing its configuration
space over either the slow internal I2C link or the high-speed internal link.

The FPGA logic is divided into a "Shell" and a "Sandbox Unit" (SBU).
mlx5_core driver (with CONFIG_MLX5_FPGA) handles all shell functionality,
while other components may handle the various SBU functionalities.

The driver opens high-speed reliable communication channels with the shell and
the SBU over the internal link.
These channels may be used for high-bandwidth configuration or for SBU-specific
out-of-band data paths.

About Innova IPSec device:
--------------------------
Innova IPSec is a network card that allows offloading IPSec cryptography operations
from the host CPU to the NIC. It is an Innova card with an IPSec SBU.
The hardware keeps the database of IPSec Security Associations (SADB) in the FPGA's
DDR memory.

               Internal
+----------+   Link       +-----------------+
|          +--------------+      FPGA       |  +------+
| ConnectX |              |  Shell          +--+ QSFP |
|          +--------------+    +-------+    |  | Port |
+----------+ Internal I2C |    | IPSec |    |  +------+
                          |    |  SBU  |    |
                          |    +-------+    |
                          +--+----------+---+
                             |          |
                          +--+--+   +---+---+
                          | DDR |   |       |
                          |     |   | Flash |
                          |SADB |   |       |
                          +-----+   +-------+

Modes and ciphers:
Currently the following modes and ciphers are supported:
IPv4 and IPv6
ESP tunnel and transport modes
AES 128 and 256 bit encryption, with GCM authentication (RFC4106)

IV is generated using seqiv, in sync with Linux's geniv.

More modes and ciphers may be added later.

Notes:
In the future similar functionality will be included in a single-chip NIC.

About the driver:
-----------------
Patches 1-4 prepare some existing driver code for the new feature:
  * Add support for reserved GIDs in the hardware GID table
  * Allow multiple modules to enable hardware RoCE support independently
Patches 5-6 define structs and helper functions for QP work-queues.
Patches 7-11 add various FPGA-related features required for Innova IPSec.
Patch 12 adds an abstraction layer for Mellanox IPSec-offload capable devices.
Patches 13-16 add IPSec offload support to the mlx5 netdevice.

This driver services the new IPSec offload API introduced in commit
d77e38e6 ("xfrm: Add an IPsec hardware offloading API")

Configuration Path:
If an Innova IPSec device is detected, the mlx5e netdevice gets the new
NETIF_F_HW_ESP feature and the xdo callbacks, indicating ESP offload
capabilities, along with the matching TX checksum and GSO features.

The driver configures offloaded Security Associations (SAs) by sending
an ADD_SA or DEL_SA message to the IPSec SBU, which updates the SADB in DDR.
These messages and their responses are sent over a high-speed channel.
Counters for ethtool are retrieved by the driver from the SBU.
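Condensed, this is a two-phase exec/wait exchange. The sketch below uses the
accel helpers that patch 12 introduces (shown in full further down); the
function name is hypothetical, and the real driver code also does SADB_RX
bookkeeping and error unwinding:

/* Sketch only: mirrors the flow of mlx5e_xfrm_add_state() below.
 * The exec step may run in atomic context; the wait step must not.
 */
static int sa_offload_sketch(struct mlx5_core_dev *mdev,
			     struct mlx5_accel_ipsec_sa *hw_sa)
{
	void *ctx;

	hw_sa->cmd = htonl(MLX5_IPSEC_CMD_ADD_SA);
	ctx = mlx5_accel_ipsec_sa_cmd_exec(mdev, hw_sa); /* message to SBU */
	if (IS_ERR(ctx))
		return PTR_ERR(ctx);
	return mlx5_accel_ipsec_sa_cmd_wait(ctx); /* SADB in DDR updated */
}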

Data path:
On the receive path, the SBU decrypts ESP packets which match the offloaded
SADB, but keeps them encapsulated.
The SBU injects metadata (with a Mellanox-owned ethertype) indicating that
crypto offload has taken place, the SA with which it was done, and the
authentication result.

The ConnectX chip performs RX checksum offload on the packet, and RSS using the
ESP SPI value.  The driver detects the special ethertype, and attaches a struct
secpath to the RX SKB, including flags to indicate that crypto offload took place,
the authentication result, and which xfrm_state was used for decryption, in the
olen and ovec members. The RX SKB may carry a useful CHECKSUM_COMPLETE value;
a separate patchset will add support for that in the xfrm stack.
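For orientation, the head of a frame carrying this metadata looks as follows
(sizes in bytes; layout inferred from struct mlx5e_ipsec_metadata in the
patches below):

/*
 * [dst MAC 6][src MAC 6][0x8CE4 2][syndrome 1][content 5][orig ethertype 2][IP ...]
 *                       |<--- MLX5E_METADATA_ETHER_LEN == 8 bytes --->|
 *
 * On RX, content carries the SA handle; the driver strips these 8 bytes and
 * restores a regular Ethernet header before the stack sees the SKB.
 */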

On the transmit path, the stack encapsulates the packet but does not encrypt
it, and indicates in the SKB's secpath that crypto offload is to be performed
and the SA to use.
The driver avoids performing crypto-offload for ESP fragments, and packets with
IP options, as the SBU cannot currently do that.  For eligible packets, the driver
prepends a special ethertype with metadata instructing the hardware to perform crypto offload.
The stack builds regular (non-GSO) SKBs so that they contain a placeholder for the ESP trailer.
The driver trims it off, because the SBU automatically appends the trailer for offloaded packets.
The ConnectX chip performs TX checksum offload on inner UDP or TCP packets,
and GSO for TCP packets (duplicating the prepended metadata).
The segmented packets then undergo encryption in the SBU before going on the wire.
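The 1/MSS value carried in the metadata is a Q0.16 fixed-point number. A
hypothetical user-space check of the arithmetic used by
mlx5e_ipsec_build_inverse_table() in the patches below:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t mss = 1400;				/* example MSS */
	uint16_t mss_inv = ((1ULL << 32) / mss) >> 16;	/* Q0.16 of 1/mss */

	/* Prints mss_inv=0x002e: 46/65536 ~= 1/1424, a close enough
	 * approximation for the hardware to derive each segment's index
	 * within the GSO super-packet.
	 */
	printf("mss=%u mss_inv=0x%04x\n", mss, mss_inv);
	return 0;
}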

Performance:
We measured a single TCP stream on an Intel(R) Xeon(R) CPU E5-2643 v2 @ 3.50GHz.
Using software crypto (AES-NI) with ESP GSO, we get a constant 4.1 Gbps.
Using crypto offload, we get a constant 18 Gbps.

Note that these numbers require CHECKSUM_COMPLETE support in XFRM, which we submit separately.

-  Ilan Tayari
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
@@ -8327,6 +8327,16 @@ Q: http://patchwork.ozlabs.org/project/netdev/list/
F: drivers/net/ethernet/mellanox/mlx5/core/fpga/*
F: include/linux/mlx5/mlx5_ifc_fpga.h

MELLANOX ETHERNET INNOVA IPSEC DRIVER
M: Ilan Tayari <ilant@mellanox.com>
R: Boris Pismenny <borisp@mellanox.com>
L: netdev@vger.kernel.org
S: Supported
W: http://www.mellanox.com
Q: http://patchwork.ozlabs.org/project/netdev/list/
F: drivers/net/ethernet/mellanox/mlx5/core/en_ipsec/*
F: drivers/net/ethernet/mellanox/mlx5/core/ipsec*

MELLANOX ETHERNET SWITCH DRIVERS
M: Jiri Pirko <jiri@mellanox.com>
M: Ido Schimmel <idosch@mellanox.com>
@@ -223,8 +223,8 @@ static int translate_eth_proto_oper(u32 eth_proto_oper, u8 *active_speed,
return 0;
}
-static void mlx5_query_port_roce(struct ib_device *device, u8 port_num,
-				 struct ib_port_attr *props)
+static int mlx5_query_port_roce(struct ib_device *device, u8 port_num,
+				struct ib_port_attr *props)
{
struct mlx5_ib_dev *dev = to_mdev(device);
struct mlx5_core_dev *mdev = dev->mdev;
@@ -232,12 +232,14 @@ static void mlx5_query_port_roce(struct ib_device *device, u8 port_num,
enum ib_mtu ndev_ib_mtu;
u16 qkey_viol_cntr;
u32 eth_prot_oper;
+	int err;

	/* Possible bad flows are checked before filling out props so in case
	 * of an error it will still be zeroed out.
	 */
-	if (mlx5_query_port_eth_proto_oper(mdev, &eth_prot_oper, port_num))
-		return;
+	err = mlx5_query_port_eth_proto_oper(mdev, &eth_prot_oper, port_num);
+	if (err)
+		return err;
translate_eth_proto_oper(eth_prot_oper, &props->active_speed,
&props->active_width);
@@ -258,7 +260,7 @@ static void mlx5_query_port_roce(struct ib_device *device, u8 port_num,
ndev = mlx5_ib_get_netdev(device, port_num);
if (!ndev)
-		return;
+		return 0;
if (mlx5_lag_is_active(dev->mdev)) {
rcu_read_lock();
@@ -281,75 +283,49 @@ static void mlx5_query_port_roce(struct ib_device *device, u8 port_num,
dev_put(ndev);
props->active_mtu = min(props->max_mtu, ndev_ib_mtu);
return 0;
}
-static void ib_gid_to_mlx5_roce_addr(const union ib_gid *gid,
-				     const struct ib_gid_attr *attr,
-				     void *mlx5_addr)
+static int set_roce_addr(struct mlx5_ib_dev *dev, u8 port_num,
+			 unsigned int index, const union ib_gid *gid,
+			 const struct ib_gid_attr *attr)
 {
-#define MLX5_SET_RA(p, f, v) MLX5_SET(roce_addr_layout, p, f, v)
-	char *mlx5_addr_l3_addr = MLX5_ADDR_OF(roce_addr_layout, mlx5_addr,
-					       source_l3_address);
-	void *mlx5_addr_mac = MLX5_ADDR_OF(roce_addr_layout, mlx5_addr,
-					   source_mac_47_32);
-
-	if (!gid)
-		return;
+	enum ib_gid_type gid_type = IB_GID_TYPE_IB;
+	u8 roce_version = 0;
+	u8 roce_l3_type = 0;
+	bool vlan = false;
+	u8 mac[ETH_ALEN];
+	u16 vlan_id = 0;

-	ether_addr_copy(mlx5_addr_mac, attr->ndev->dev_addr);
+	if (gid) {
+		gid_type = attr->gid_type;
+		ether_addr_copy(mac, attr->ndev->dev_addr);

-	if (is_vlan_dev(attr->ndev)) {
-		MLX5_SET_RA(mlx5_addr, vlan_valid, 1);
-		MLX5_SET_RA(mlx5_addr, vlan_id, vlan_dev_vlan_id(attr->ndev));
+		if (is_vlan_dev(attr->ndev)) {
+			vlan = true;
+			vlan_id = vlan_dev_vlan_id(attr->ndev);
+		}
 	}

-	switch (attr->gid_type) {
+	switch (gid_type) {
 	case IB_GID_TYPE_IB:
-		MLX5_SET_RA(mlx5_addr, roce_version, MLX5_ROCE_VERSION_1);
+		roce_version = MLX5_ROCE_VERSION_1;
 		break;
 	case IB_GID_TYPE_ROCE_UDP_ENCAP:
-		MLX5_SET_RA(mlx5_addr, roce_version, MLX5_ROCE_VERSION_2);
+		roce_version = MLX5_ROCE_VERSION_2;
+		if (ipv6_addr_v4mapped((void *)gid))
+			roce_l3_type = MLX5_ROCE_L3_TYPE_IPV4;
+		else
+			roce_l3_type = MLX5_ROCE_L3_TYPE_IPV6;
 		break;
 	default:
-		WARN_ON(true);
+		mlx5_ib_warn(dev, "Unexpected GID type %u\n", gid_type);
 	}

-	if (attr->gid_type != IB_GID_TYPE_IB) {
-		if (ipv6_addr_v4mapped((void *)gid))
-			MLX5_SET_RA(mlx5_addr, roce_l3_type,
-				    MLX5_ROCE_L3_TYPE_IPV4);
-		else
-			MLX5_SET_RA(mlx5_addr, roce_l3_type,
-				    MLX5_ROCE_L3_TYPE_IPV6);
-	}
-
-	if ((attr->gid_type == IB_GID_TYPE_IB) ||
-	    !ipv6_addr_v4mapped((void *)gid))
-		memcpy(mlx5_addr_l3_addr, gid, sizeof(*gid));
-	else
-		memcpy(&mlx5_addr_l3_addr[12], &gid->raw[12], 4);
-}
-
-static int set_roce_addr(struct ib_device *device, u8 port_num,
-			 unsigned int index,
-			 const union ib_gid *gid,
-			 const struct ib_gid_attr *attr)
-{
-	struct mlx5_ib_dev *dev = to_mdev(device);
-	u32 in[MLX5_ST_SZ_DW(set_roce_address_in)] = {0};
-	u32 out[MLX5_ST_SZ_DW(set_roce_address_out)] = {0};
-	void *in_addr = MLX5_ADDR_OF(set_roce_address_in, in, roce_address);
-	enum rdma_link_layer ll = mlx5_ib_port_link_layer(device, port_num);
-
-	if (ll != IB_LINK_LAYER_ETHERNET)
-		return -EINVAL;
-
-	ib_gid_to_mlx5_roce_addr(gid, attr, in_addr);
-
-	MLX5_SET(set_roce_address_in, in, roce_address_index, index);
-	MLX5_SET(set_roce_address_in, in, opcode, MLX5_CMD_OP_SET_ROCE_ADDRESS);
-	return mlx5_cmd_exec(dev->mdev, in, sizeof(in), out, sizeof(out));
+	return mlx5_core_roce_gid_set(dev->mdev, index, roce_version,
+				      roce_l3_type, gid->raw, mac, vlan,
+				      vlan_id);
 }
static int mlx5_ib_add_gid(struct ib_device *device, u8 port_num,
@@ -357,13 +333,13 @@ static int mlx5_ib_add_gid(struct ib_device *device, u8 port_num,
const struct ib_gid_attr *attr,
__always_unused void **context)
{
-	return set_roce_addr(device, port_num, index, gid, attr);
+	return set_roce_addr(to_mdev(device), port_num, index, gid, attr);
}
static int mlx5_ib_del_gid(struct ib_device *device, u8 port_num,
unsigned int index, __always_unused void **context)
{
-	return set_roce_addr(device, port_num, index, NULL, NULL);
+	return set_roce_addr(to_mdev(device), port_num, index, NULL, NULL);
}
__be16 mlx5_get_roce_udp_sport(struct mlx5_ib_dev *dev, u8 port_num,
@@ -978,20 +954,31 @@ static int mlx5_query_hca_port(struct ib_device *ibdev, u8 port,
int mlx5_ib_query_port(struct ib_device *ibdev, u8 port,
struct ib_port_attr *props)
{
+	unsigned int count;
+	int ret;
+
 	switch (mlx5_get_vport_access_method(ibdev)) {
 	case MLX5_VPORT_ACCESS_METHOD_MAD:
-		return mlx5_query_mad_ifc_port(ibdev, port, props);
+		ret = mlx5_query_mad_ifc_port(ibdev, port, props);
+		break;

 	case MLX5_VPORT_ACCESS_METHOD_HCA:
-		return mlx5_query_hca_port(ibdev, port, props);
+		ret = mlx5_query_hca_port(ibdev, port, props);
+		break;

 	case MLX5_VPORT_ACCESS_METHOD_NIC:
-		mlx5_query_port_roce(ibdev, port, props);
-		return 0;
+		ret = mlx5_query_port_roce(ibdev, port, props);
+		break;

 	default:
-		return -EINVAL;
+		ret = -EINVAL;
 	}

+	if (!ret && props) {
+		count = mlx5_core_reserved_gids_count(to_mdev(ibdev)->mdev);
+		props->gid_tbl_len -= count;
+	}
+	return ret;
}
static int mlx5_ib_query_gid(struct ib_device *ibdev, u8 port, int index,
@@ -11,9 +11,13 @@ config MLX5_CORE
Core driver for low level functionality of the ConnectX-4 and
Connect-IB cards by Mellanox Technologies.
config MLX5_ACCEL
bool
config MLX5_FPGA
bool "Mellanox Technologies Innova support"
depends on MLX5_CORE
select MLX5_ACCEL
---help---
Build support for the Innova family of network cards by Mellanox
Technologies. Innova network cards are comprised of a ConnectX chip
@@ -48,3 +52,15 @@ config MLX5_CORE_IPOIB
default n
---help---
MLX5 IPoIB offloads & acceleration support.
config MLX5_EN_IPSEC
bool "IPSec XFRM cryptography-offload accelaration"
depends on MLX5_ACCEL
depends on MLX5_CORE_EN
depends on XFRM_OFFLOAD
depends on INET_ESP_OFFLOAD || INET6_ESP_OFFLOAD
default n
---help---
Build support for IPsec cryptography-offload acceleration in the NIC.
Note: Support for hardware with this capability needs to be selected
for this option to become available.
@@ -4,9 +4,12 @@ subdir-ccflags-y += -I$(src)
mlx5_core-y := main.o cmd.o debugfs.o fw.o eq.o uar.o pagealloc.o \
health.o mcg.o cq.o srq.o alloc.o qp.o port.o mr.o pd.o \
mad.o transobj.o vport.o sriov.o fs_cmd.o fs_core.o \
-		fs_counters.o rl.o lag.o dev.o
-mlx5_core-$(CONFIG_MLX5_FPGA) += fpga/cmd.o fpga/core.o
+		fs_counters.o rl.o lag.o dev.o lib/gid.o
+
+mlx5_core-$(CONFIG_MLX5_ACCEL) += accel/ipsec.o
+
+mlx5_core-$(CONFIG_MLX5_FPGA) += fpga/cmd.o fpga/core.o fpga/conn.o fpga/sdk.o \
+		fpga/ipsec.o
mlx5_core-$(CONFIG_MLX5_CORE_EN) += wq.o eswitch.o eswitch_offloads.o \
en_main.o en_common.o en_fs.o en_ethtool.o en_tx.o \
@@ -16,3 +19,6 @@ mlx5_core-$(CONFIG_MLX5_CORE_EN) += wq.o eswitch.o eswitch_offloads.o \
mlx5_core-$(CONFIG_MLX5_CORE_EN_DCB) += en_dcbnl.o
mlx5_core-$(CONFIG_MLX5_CORE_IPOIB) += ipoib/ipoib.o ipoib/ethtool.o
mlx5_core-$(CONFIG_MLX5_EN_IPSEC) += en_accel/ipsec.o en_accel/ipsec_rxtx.o \
en_accel/ipsec_stats.o
/*
* Copyright (c) 2017 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*/
#include <linux/mlx5/device.h>
#include "accel/ipsec.h"
#include "mlx5_core.h"
#include "fpga/ipsec.h"
void *mlx5_accel_ipsec_sa_cmd_exec(struct mlx5_core_dev *mdev,
struct mlx5_accel_ipsec_sa *cmd)
{
if (!MLX5_IPSEC_DEV(mdev))
return ERR_PTR(-EOPNOTSUPP);
return mlx5_fpga_ipsec_sa_cmd_exec(mdev, cmd);
}
int mlx5_accel_ipsec_sa_cmd_wait(void *ctx)
{
return mlx5_fpga_ipsec_sa_cmd_wait(ctx);
}
u32 mlx5_accel_ipsec_device_caps(struct mlx5_core_dev *mdev)
{
return mlx5_fpga_ipsec_device_caps(mdev);
}
unsigned int mlx5_accel_ipsec_counters_count(struct mlx5_core_dev *mdev)
{
return mlx5_fpga_ipsec_counters_count(mdev);
}
int mlx5_accel_ipsec_counters_read(struct mlx5_core_dev *mdev, u64 *counters,
unsigned int count)
{
return mlx5_fpga_ipsec_counters_read(mdev, counters, count);
}
int mlx5_accel_ipsec_init(struct mlx5_core_dev *mdev)
{
return mlx5_fpga_ipsec_init(mdev);
}
void mlx5_accel_ipsec_cleanup(struct mlx5_core_dev *mdev)
{
mlx5_fpga_ipsec_cleanup(mdev);
}
/*
* Copyright (c) 2017 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*/
#ifndef __MLX5_ACCEL_IPSEC_H__
#define __MLX5_ACCEL_IPSEC_H__
#ifdef CONFIG_MLX5_ACCEL
#include <linux/mlx5/driver.h>
enum {
MLX5_ACCEL_IPSEC_DEVICE = BIT(1),
MLX5_ACCEL_IPSEC_IPV6 = BIT(2),
MLX5_ACCEL_IPSEC_ESP = BIT(3),
MLX5_ACCEL_IPSEC_LSO = BIT(4),
};
#define MLX5_IPSEC_SADB_IP_AH BIT(7)
#define MLX5_IPSEC_SADB_IP_ESP BIT(6)
#define MLX5_IPSEC_SADB_SA_VALID BIT(5)
#define MLX5_IPSEC_SADB_SPI_EN BIT(4)
#define MLX5_IPSEC_SADB_DIR_SX BIT(3)
#define MLX5_IPSEC_SADB_IPV6 BIT(2)
enum {
MLX5_IPSEC_CMD_ADD_SA = 0,
MLX5_IPSEC_CMD_DEL_SA = 1,
};
enum mlx5_accel_ipsec_enc_mode {
MLX5_IPSEC_SADB_MODE_NONE = 0,
MLX5_IPSEC_SADB_MODE_AES_GCM_128_AUTH_128 = 1,
MLX5_IPSEC_SADB_MODE_AES_GCM_256_AUTH_128 = 3,
};
#define MLX5_IPSEC_DEV(mdev) (mlx5_accel_ipsec_device_caps(mdev) & \
MLX5_ACCEL_IPSEC_DEVICE)
struct mlx5_accel_ipsec_sa {
__be32 cmd;
u8 key_enc[32];
u8 key_auth[32];
__be32 sip[4];
__be32 dip[4];
union {
struct {
__be32 reserved;
u8 salt_iv[8];
__be32 salt;
} __packed gcm;
struct {
u8 salt[16];
} __packed cbc;
};
__be32 spi;
__be32 sw_sa_handle;
__be16 tfclen;
u8 enc_mode;
u8 sip_masklen;
u8 dip_masklen;
u8 flags;
u8 reserved[2];
} __packed;
/**
* mlx5_accel_ipsec_sa_cmd_exec - Execute an IPSec SADB command
* @mdev: mlx5 device
* @cmd: command to execute
 * May be called from atomic context. Returns a context pointer, or an error.
 * Caller must eventually call mlx5_accel_ipsec_sa_cmd_wait from non-atomic
 * context, to clean up the context pointer.
*/
void *mlx5_accel_ipsec_sa_cmd_exec(struct mlx5_core_dev *mdev,
struct mlx5_accel_ipsec_sa *cmd);
/**
* mlx5_accel_ipsec_sa_cmd_wait - Wait for command execution completion
* @context: Context pointer returned from call to mlx5_accel_ipsec_sa_cmd_exec
 * Sleeps (killable) until command execution is complete.
 * Returns the command result, or -EINTR if killed.
*/
int mlx5_accel_ipsec_sa_cmd_wait(void *context);
u32 mlx5_accel_ipsec_device_caps(struct mlx5_core_dev *mdev);
unsigned int mlx5_accel_ipsec_counters_count(struct mlx5_core_dev *mdev);
int mlx5_accel_ipsec_counters_read(struct mlx5_core_dev *mdev, u64 *counters,
unsigned int count);
int mlx5_accel_ipsec_init(struct mlx5_core_dev *mdev);
void mlx5_accel_ipsec_cleanup(struct mlx5_core_dev *mdev);
#else
#define MLX5_IPSEC_DEV(mdev) false
static inline int mlx5_accel_ipsec_init(struct mlx5_core_dev *mdev)
{
return 0;
}
static inline void mlx5_accel_ipsec_cleanup(struct mlx5_core_dev *mdev)
{
}
#endif
#endif /* __MLX5_ACCEL_IPSEC_H__ */
@@ -307,6 +307,7 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op,
case MLX5_CMD_OP_SET_FLOW_TABLE_ROOT:
case MLX5_CMD_OP_DEALLOC_ENCAP_HEADER:
case MLX5_CMD_OP_DEALLOC_MODIFY_HEADER_CONTEXT:
case MLX5_CMD_OP_FPGA_DESTROY_QP:
return MLX5_CMD_STAT_OK;
case MLX5_CMD_OP_QUERY_HCA_CAP:
@@ -419,6 +420,10 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op,
case MLX5_CMD_OP_QUERY_FLOW_COUNTER:
case MLX5_CMD_OP_ALLOC_ENCAP_HEADER:
case MLX5_CMD_OP_ALLOC_MODIFY_HEADER_CONTEXT:
case MLX5_CMD_OP_FPGA_CREATE_QP:
case MLX5_CMD_OP_FPGA_MODIFY_QP:
case MLX5_CMD_OP_FPGA_QUERY_QP:
case MLX5_CMD_OP_FPGA_QUERY_QP_COUNTERS:
*status = MLX5_DRIVER_STATUS_ABORTED;
*synd = MLX5_DRIVER_SYND;
return -EIO;
@@ -585,6 +590,11 @@ const char *mlx5_command_str(int command)
MLX5_COMMAND_STR_CASE(DEALLOC_ENCAP_HEADER);
MLX5_COMMAND_STR_CASE(ALLOC_MODIFY_HEADER_CONTEXT);
MLX5_COMMAND_STR_CASE(DEALLOC_MODIFY_HEADER_CONTEXT);
MLX5_COMMAND_STR_CASE(FPGA_CREATE_QP);
MLX5_COMMAND_STR_CASE(FPGA_MODIFY_QP);
MLX5_COMMAND_STR_CASE(FPGA_QUERY_QP);
MLX5_COMMAND_STR_CASE(FPGA_QUERY_QP_COUNTERS);
MLX5_COMMAND_STR_CASE(FPGA_DESTROY_QP);
default: return "unknown command opcode";
}
}
@@ -328,6 +328,7 @@ struct mlx5e_sq_dma {
enum {
MLX5E_SQ_STATE_ENABLED,
MLX5E_SQ_STATE_IPSEC,
};
struct mlx5e_sq_wqe_info {
@@ -784,6 +785,9 @@ struct mlx5e_priv {
const struct mlx5e_profile *profile;
void *ppriv;
#ifdef CONFIG_MLX5_EN_IPSEC
struct mlx5e_ipsec *ipsec;
#endif
};
struct mlx5e_profile {
@@ -833,7 +837,6 @@ void mlx5e_dealloc_rx_wqe(struct mlx5e_rq *rq, u16 ix);
void mlx5e_dealloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix);
void mlx5e_post_rx_mpwqe(struct mlx5e_rq *rq);
void mlx5e_free_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi);
struct mlx5_cqe64 *mlx5e_get_cqe(struct mlx5e_cq *cq);
void mlx5e_rx_am(struct mlx5e_rq *rq);
void mlx5e_rx_am_work(struct work_struct *work);
/*
* Copyright (c) 2017 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*/
#include <crypto/internal/geniv.h>
#include <crypto/aead.h>
#include <linux/inetdevice.h>
#include <linux/netdevice.h>
#include <linux/module.h>
#include "en.h"
#include "accel/ipsec.h"
#include "en_accel/ipsec.h"
#include "en_accel/ipsec_rxtx.h"
struct mlx5e_ipsec_sa_entry {
struct hlist_node hlist; /* Item in SADB_RX hashtable */
unsigned int handle; /* Handle in SADB_RX */
struct xfrm_state *x;
struct mlx5e_ipsec *ipsec;
void *context;
};
struct xfrm_state *mlx5e_ipsec_sadb_rx_lookup(struct mlx5e_ipsec *ipsec,
unsigned int handle)
{
struct mlx5e_ipsec_sa_entry *sa_entry;
struct xfrm_state *ret = NULL;
rcu_read_lock();
hash_for_each_possible_rcu(ipsec->sadb_rx, sa_entry, hlist, handle)
if (sa_entry->handle == handle) {
ret = sa_entry->x;
xfrm_state_hold(ret);
break;
}
rcu_read_unlock();
return ret;
}
static int mlx5e_ipsec_sadb_rx_add(struct mlx5e_ipsec_sa_entry *sa_entry)
{
struct mlx5e_ipsec *ipsec = sa_entry->ipsec;
unsigned long flags;
int ret;
spin_lock_irqsave(&ipsec->sadb_rx_lock, flags);
ret = ida_simple_get(&ipsec->halloc, 1, 0, GFP_KERNEL);
if (ret < 0)
goto out;
sa_entry->handle = ret;
hash_add_rcu(ipsec->sadb_rx, &sa_entry->hlist, sa_entry->handle);
ret = 0;
out:
spin_unlock_irqrestore(&ipsec->sadb_rx_lock, flags);
return ret;
}
static void mlx5e_ipsec_sadb_rx_del(struct mlx5e_ipsec_sa_entry *sa_entry)
{
struct mlx5e_ipsec *ipsec = sa_entry->ipsec;
unsigned long flags;
spin_lock_irqsave(&ipsec->sadb_rx_lock, flags);
hash_del_rcu(&sa_entry->hlist);
spin_unlock_irqrestore(&ipsec->sadb_rx_lock, flags);
}
static void mlx5e_ipsec_sadb_rx_free(struct mlx5e_ipsec_sa_entry *sa_entry)
{
struct mlx5e_ipsec *ipsec = sa_entry->ipsec;
unsigned long flags;
/* Wait for the hash_del_rcu call in sadb_rx_del to affect data path */
synchronize_rcu();
spin_lock_irqsave(&ipsec->sadb_rx_lock, flags);
ida_simple_remove(&ipsec->halloc, sa_entry->handle);
spin_unlock_irqrestore(&ipsec->sadb_rx_lock, flags);
}
static enum mlx5_accel_ipsec_enc_mode mlx5e_ipsec_enc_mode(struct xfrm_state *x)
{
unsigned int key_len = (x->aead->alg_key_len + 7) / 8 - 4;
switch (key_len) {
case 16:
return MLX5_IPSEC_SADB_MODE_AES_GCM_128_AUTH_128;
case 32:
return MLX5_IPSEC_SADB_MODE_AES_GCM_256_AUTH_128;
default:
netdev_warn(x->xso.dev, "Bad key len: %d for alg %s\n",
key_len, x->aead->alg_name);
return -1;
}
}
static void mlx5e_ipsec_build_hw_sa(u32 op, struct mlx5e_ipsec_sa_entry *sa_entry,
struct mlx5_accel_ipsec_sa *hw_sa)
{
struct xfrm_state *x = sa_entry->x;
struct aead_geniv_ctx *geniv_ctx;
unsigned int crypto_data_len;
struct crypto_aead *aead;
unsigned int key_len;
int ivsize;
memset(hw_sa, 0, sizeof(*hw_sa));
if (op == MLX5_IPSEC_CMD_ADD_SA) {
crypto_data_len = (x->aead->alg_key_len + 7) / 8;
key_len = crypto_data_len - 4; /* 4 bytes salt at end */
aead = x->data;
geniv_ctx = crypto_aead_ctx(aead);
ivsize = crypto_aead_ivsize(aead);
memcpy(&hw_sa->key_enc, x->aead->alg_key, key_len);
/* Duplicate 128 bit key twice according to HW layout */
if (key_len == 16)
memcpy(&hw_sa->key_enc[16], x->aead->alg_key, key_len);
memcpy(&hw_sa->gcm.salt_iv, geniv_ctx->salt, ivsize);
hw_sa->gcm.salt = *((__be32 *)(x->aead->alg_key + key_len));
}
hw_sa->cmd = htonl(op);
hw_sa->flags |= MLX5_IPSEC_SADB_SA_VALID | MLX5_IPSEC_SADB_SPI_EN;
if (x->props.family == AF_INET) {
hw_sa->sip[3] = x->props.saddr.a4;
hw_sa->dip[3] = x->id.daddr.a4;
hw_sa->sip_masklen = 32;
hw_sa->dip_masklen = 32;
} else {
memcpy(hw_sa->sip, x->props.saddr.a6, sizeof(hw_sa->sip));
memcpy(hw_sa->dip, x->id.daddr.a6, sizeof(hw_sa->dip));
hw_sa->sip_masklen = 128;
hw_sa->dip_masklen = 128;
hw_sa->flags |= MLX5_IPSEC_SADB_IPV6;
}
hw_sa->spi = x->id.spi;
hw_sa->sw_sa_handle = htonl(sa_entry->handle);
switch (x->id.proto) {
case IPPROTO_ESP:
hw_sa->flags |= MLX5_IPSEC_SADB_IP_ESP;
break;
case IPPROTO_AH:
hw_sa->flags |= MLX5_IPSEC_SADB_IP_AH;
break;
default:
break;
}
hw_sa->enc_mode = mlx5e_ipsec_enc_mode(x);
if (!(x->xso.flags & XFRM_OFFLOAD_INBOUND))
hw_sa->flags |= MLX5_IPSEC_SADB_DIR_SX;
}
static inline int mlx5e_xfrm_validate_state(struct xfrm_state *x)
{
struct net_device *netdev = x->xso.dev;
struct mlx5e_priv *priv;
priv = netdev_priv(netdev);
if (x->props.aalgo != SADB_AALG_NONE) {
netdev_info(netdev, "Cannot offload authenticated xfrm states\n");
return -EINVAL;
}
if (x->props.ealgo != SADB_X_EALG_AES_GCM_ICV16) {
netdev_info(netdev, "Only AES-GCM-ICV16 xfrm state may be offloaded\n");
return -EINVAL;
}
if (x->props.calgo != SADB_X_CALG_NONE) {
netdev_info(netdev, "Cannot offload compressed xfrm states\n");
return -EINVAL;
}
if (x->props.flags & XFRM_STATE_ESN) {
netdev_info(netdev, "Cannot offload ESN xfrm states\n");
return -EINVAL;
}
if (x->props.family != AF_INET &&
x->props.family != AF_INET6) {
netdev_info(netdev, "Only IPv4/6 xfrm states may be offloaded\n");
return -EINVAL;
}
if (x->props.mode != XFRM_MODE_TRANSPORT &&
x->props.mode != XFRM_MODE_TUNNEL) {
dev_info(&netdev->dev, "Only transport and tunnel xfrm states may be offloaded\n");
return -EINVAL;
}
if (x->id.proto != IPPROTO_ESP) {
netdev_info(netdev, "Only ESP xfrm state may be offloaded\n");
return -EINVAL;
}
if (x->encap) {
netdev_info(netdev, "Encapsulated xfrm state may not be offloaded\n");
return -EINVAL;
}
if (!x->aead) {
netdev_info(netdev, "Cannot offload xfrm states without aead\n");
return -EINVAL;
}
if (x->aead->alg_icv_len != 128) {
netdev_info(netdev, "Cannot offload xfrm states with AEAD ICV length other than 128bit\n");
return -EINVAL;
}
if ((x->aead->alg_key_len != 128 + 32) &&
(x->aead->alg_key_len != 256 + 32)) {
netdev_info(netdev, "Cannot offload xfrm states with AEAD key length other than 128/256 bit\n");
return -EINVAL;
}
if (x->tfcpad) {
netdev_info(netdev, "Cannot offload xfrm states with tfc padding\n");
return -EINVAL;
}
if (!x->geniv) {
netdev_info(netdev, "Cannot offload xfrm states without geniv\n");
return -EINVAL;
}
if (strcmp(x->geniv, "seqiv")) {
netdev_info(netdev, "Cannot offload xfrm states with geniv other than seqiv\n");
return -EINVAL;
}
if (x->props.family == AF_INET6 &&
!(mlx5_accel_ipsec_device_caps(priv->mdev) & MLX5_ACCEL_IPSEC_IPV6)) {
netdev_info(netdev, "IPv6 xfrm state offload is not supported by this device\n");
return -EINVAL;
}
return 0;
}
static int mlx5e_xfrm_add_state(struct xfrm_state *x)
{
struct mlx5e_ipsec_sa_entry *sa_entry = NULL;
struct net_device *netdev = x->xso.dev;
struct mlx5_accel_ipsec_sa hw_sa;
struct mlx5e_priv *priv;
void *context;
int err;
priv = netdev_priv(netdev);
err = mlx5e_xfrm_validate_state(x);
if (err)
return err;
sa_entry = kzalloc(sizeof(*sa_entry), GFP_KERNEL);
if (!sa_entry) {
err = -ENOMEM;
goto out;
}
sa_entry->x = x;
sa_entry->ipsec = priv->ipsec;
/* Add the SA to handle processed incoming packets before the add SA
* completion was received
*/
if (x->xso.flags & XFRM_OFFLOAD_INBOUND) {
err = mlx5e_ipsec_sadb_rx_add(sa_entry);
if (err) {
netdev_info(netdev, "Failed adding to SADB_RX: %d\n", err);
goto err_entry;
}
}
mlx5e_ipsec_build_hw_sa(MLX5_IPSEC_CMD_ADD_SA, sa_entry, &hw_sa);
context = mlx5_accel_ipsec_sa_cmd_exec(sa_entry->ipsec->en_priv->mdev, &hw_sa);
if (IS_ERR(context)) {
err = PTR_ERR(context);
goto err_sadb_rx;
}
err = mlx5_accel_ipsec_sa_cmd_wait(context);
if (err)
goto err_sadb_rx;
x->xso.offload_handle = (unsigned long)sa_entry;
goto out;
err_sadb_rx:
if (x->xso.flags & XFRM_OFFLOAD_INBOUND) {
mlx5e_ipsec_sadb_rx_del(sa_entry);
mlx5e_ipsec_sadb_rx_free(sa_entry);
}
err_entry:
kfree(sa_entry);
out:
return err;
}
static void mlx5e_xfrm_del_state(struct xfrm_state *x)
{
struct mlx5e_ipsec_sa_entry *sa_entry;
struct mlx5_accel_ipsec_sa hw_sa;
void *context;
if (!x->xso.offload_handle)
return;
sa_entry = (struct mlx5e_ipsec_sa_entry *)x->xso.offload_handle;
WARN_ON(sa_entry->x != x);
if (x->xso.flags & XFRM_OFFLOAD_INBOUND)
mlx5e_ipsec_sadb_rx_del(sa_entry);
mlx5e_ipsec_build_hw_sa(MLX5_IPSEC_CMD_DEL_SA, sa_entry, &hw_sa);
context = mlx5_accel_ipsec_sa_cmd_exec(sa_entry->ipsec->en_priv->mdev, &hw_sa);
if (IS_ERR(context))
return;
sa_entry->context = context;
}
static void mlx5e_xfrm_free_state(struct xfrm_state *x)
{
struct mlx5e_ipsec_sa_entry *sa_entry;
int res;
if (!x->xso.offload_handle)
return;
sa_entry = (struct mlx5e_ipsec_sa_entry *)x->xso.offload_handle;
WARN_ON(sa_entry->x != x);
res = mlx5_accel_ipsec_sa_cmd_wait(sa_entry->context);
sa_entry->context = NULL;
if (res) {
/* Leftover object will leak */
return;
}
if (x->xso.flags & XFRM_OFFLOAD_INBOUND)
mlx5e_ipsec_sadb_rx_free(sa_entry);
kfree(sa_entry);
}
int mlx5e_ipsec_init(struct mlx5e_priv *priv)
{
struct mlx5e_ipsec *ipsec = NULL;
if (!MLX5_IPSEC_DEV(priv->mdev)) {
netdev_dbg(priv->netdev, "Not an IPSec offload device\n");
return 0;
}
ipsec = kzalloc(sizeof(*ipsec), GFP_KERNEL);
if (!ipsec)
return -ENOMEM;
hash_init(ipsec->sadb_rx);
spin_lock_init(&ipsec->sadb_rx_lock);
ida_init(&ipsec->halloc);
ipsec->en_priv = priv;
ipsec->en_priv->ipsec = ipsec;
netdev_dbg(priv->netdev, "IPSec attached to netdevice\n");
return 0;
}
void mlx5e_ipsec_cleanup(struct mlx5e_priv *priv)
{
struct mlx5e_ipsec *ipsec = priv->ipsec;
if (!ipsec)
return;
ida_destroy(&ipsec->halloc);
kfree(ipsec);
priv->ipsec = NULL;
}
static bool mlx5e_ipsec_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
{
if (x->props.family == AF_INET) {
/* Offload with IPv4 options is not supported yet */
if (ip_hdr(skb)->ihl > 5)
return false;
} else {
/* Offload with IPv6 extension headers is not supported yet */
if (ipv6_ext_hdr(ipv6_hdr(skb)->nexthdr))
return false;
}
return true;
}
static const struct xfrmdev_ops mlx5e_ipsec_xfrmdev_ops = {
.xdo_dev_state_add = mlx5e_xfrm_add_state,
.xdo_dev_state_delete = mlx5e_xfrm_del_state,
.xdo_dev_state_free = mlx5e_xfrm_free_state,
.xdo_dev_offload_ok = mlx5e_ipsec_offload_ok,
};
void mlx5e_ipsec_build_netdev(struct mlx5e_priv *priv)
{
struct mlx5_core_dev *mdev = priv->mdev;
struct net_device *netdev = priv->netdev;
if (!priv->ipsec)
return;
if (!(mlx5_accel_ipsec_device_caps(mdev) & MLX5_ACCEL_IPSEC_ESP) ||
!MLX5_CAP_ETH(mdev, swp)) {
mlx5_core_dbg(mdev, "mlx5e: ESP and SWP offload not supported\n");
return;
}
mlx5_core_info(mdev, "mlx5e: IPSec ESP acceleration enabled\n");
netdev->xfrmdev_ops = &mlx5e_ipsec_xfrmdev_ops;
netdev->features |= NETIF_F_HW_ESP;
netdev->hw_enc_features |= NETIF_F_HW_ESP;
if (!MLX5_CAP_ETH(mdev, swp_csum)) {
mlx5_core_dbg(mdev, "mlx5e: SWP checksum not supported\n");
return;
}
netdev->features |= NETIF_F_HW_ESP_TX_CSUM;
netdev->hw_enc_features |= NETIF_F_HW_ESP_TX_CSUM;
if (!(mlx5_accel_ipsec_device_caps(mdev) & MLX5_ACCEL_IPSEC_LSO) ||
!MLX5_CAP_ETH(mdev, swp_lso)) {
mlx5_core_dbg(mdev, "mlx5e: ESP LSO not supported\n");
return;
}
mlx5_core_dbg(mdev, "mlx5e: ESP GSO capability turned on\n");
netdev->features |= NETIF_F_GSO_ESP;
netdev->hw_features |= NETIF_F_GSO_ESP;
netdev->hw_enc_features |= NETIF_F_GSO_ESP;
}
/*
* Copyright (c) 2017 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*/
#ifndef __MLX5E_IPSEC_H__
#define __MLX5E_IPSEC_H__
#ifdef CONFIG_MLX5_EN_IPSEC
#include <linux/mlx5/device.h>
#include <net/xfrm.h>
#include <linux/idr.h>
#define MLX5E_IPSEC_SADB_RX_BITS 10
#define MLX5E_METADATA_ETHER_TYPE (0x8CE4)
#define MLX5E_METADATA_ETHER_LEN 8
struct mlx5e_priv;
struct mlx5e_ipsec_sw_stats {
atomic64_t ipsec_rx_drop_sp_alloc;
atomic64_t ipsec_rx_drop_sadb_miss;
atomic64_t ipsec_rx_drop_syndrome;
atomic64_t ipsec_tx_drop_bundle;
atomic64_t ipsec_tx_drop_no_state;
atomic64_t ipsec_tx_drop_not_ip;
atomic64_t ipsec_tx_drop_trailer;
atomic64_t ipsec_tx_drop_metadata;
};
struct mlx5e_ipsec_stats {
u64 ipsec_dec_in_packets;
u64 ipsec_dec_out_packets;
u64 ipsec_dec_bypass_packets;
u64 ipsec_enc_in_packets;
u64 ipsec_enc_out_packets;
u64 ipsec_enc_bypass_packets;
u64 ipsec_dec_drop_packets;
u64 ipsec_dec_auth_fail_packets;
u64 ipsec_enc_drop_packets;
u64 ipsec_add_sa_success;
u64 ipsec_add_sa_fail;
u64 ipsec_del_sa_success;
u64 ipsec_del_sa_fail;
u64 ipsec_cmd_drop;
};
struct mlx5e_ipsec {
struct mlx5e_priv *en_priv;
DECLARE_HASHTABLE(sadb_rx, MLX5E_IPSEC_SADB_RX_BITS);
spinlock_t sadb_rx_lock; /* Protects sadb_rx and halloc */
struct ida halloc;
struct mlx5e_ipsec_sw_stats sw_stats;
struct mlx5e_ipsec_stats stats;
};
void mlx5e_ipsec_build_inverse_table(void);
int mlx5e_ipsec_init(struct mlx5e_priv *priv);
void mlx5e_ipsec_cleanup(struct mlx5e_priv *priv);
void mlx5e_ipsec_build_netdev(struct mlx5e_priv *priv);
int mlx5e_ipsec_get_count(struct mlx5e_priv *priv);
int mlx5e_ipsec_get_strings(struct mlx5e_priv *priv, uint8_t *data);
void mlx5e_ipsec_update_stats(struct mlx5e_priv *priv);
int mlx5e_ipsec_get_stats(struct mlx5e_priv *priv, u64 *data);
struct xfrm_state *mlx5e_ipsec_sadb_rx_lookup(struct mlx5e_ipsec *dev,
unsigned int handle);
#else
static inline void mlx5e_ipsec_build_inverse_table(void)
{
}
static inline int mlx5e_ipsec_init(struct mlx5e_priv *priv)
{
return 0;
}
static inline void mlx5e_ipsec_cleanup(struct mlx5e_priv *priv)
{
}
static inline void mlx5e_ipsec_build_netdev(struct mlx5e_priv *priv)
{
}
static inline int mlx5e_ipsec_get_count(struct mlx5e_priv *priv)
{
return 0;
}
static inline int mlx5e_ipsec_get_strings(struct mlx5e_priv *priv,
uint8_t *data)
{
return 0;
}
static inline void mlx5e_ipsec_update_stats(struct mlx5e_priv *priv)
{
}
static inline int mlx5e_ipsec_get_stats(struct mlx5e_priv *priv, u64 *data)
{
return 0;
}
#endif
#endif /* __MLX5E_IPSEC_H__ */
/*
* Copyright (c) 2017 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*/
#include <crypto/aead.h>
#include <net/xfrm.h>
#include <net/esp.h>
#include "en_accel/ipsec_rxtx.h"
#include "en_accel/ipsec.h"
#include "en.h"
enum {
MLX5E_IPSEC_RX_SYNDROME_DECRYPTED = 0x11,
MLX5E_IPSEC_RX_SYNDROME_AUTH_FAILED = 0x12,
};
struct mlx5e_ipsec_rx_metadata {
unsigned char reserved;
__be32 sa_handle;
} __packed;
enum {
MLX5E_IPSEC_TX_SYNDROME_OFFLOAD = 0x8,
MLX5E_IPSEC_TX_SYNDROME_OFFLOAD_WITH_LSO_TCP = 0x9,
};
struct mlx5e_ipsec_tx_metadata {
__be16 mss_inv; /* 1/MSS in 16bit fixed point, only for LSO */
__be16 seq; /* LSBs of the first TCP seq, only for LSO */
u8 esp_next_proto; /* Next protocol of ESP */
} __packed;
struct mlx5e_ipsec_metadata {
unsigned char syndrome;
union {
unsigned char raw[5];
/* from FPGA to host, on successful decrypt */
struct mlx5e_ipsec_rx_metadata rx;
/* from host to FPGA */
struct mlx5e_ipsec_tx_metadata tx;
} __packed content;
/* packet type ID field */
__be16 ethertype;
} __packed;
#define MAX_LSO_MSS 2048
/* Pre-calculated (Q0.16) fixed-point inverse 1/x function */
static __be16 mlx5e_ipsec_inverse_table[MAX_LSO_MSS];
static inline __be16 mlx5e_ipsec_mss_inv(struct sk_buff *skb)
{
return mlx5e_ipsec_inverse_table[skb_shinfo(skb)->gso_size];
}
static struct mlx5e_ipsec_metadata *mlx5e_ipsec_add_metadata(struct sk_buff *skb)
{
struct mlx5e_ipsec_metadata *mdata;
struct ethhdr *eth;
if (unlikely(skb_cow_head(skb, sizeof(*mdata))))
return ERR_PTR(-ENOMEM);
eth = (struct ethhdr *)skb_push(skb, sizeof(*mdata));
skb->mac_header -= sizeof(*mdata);
mdata = (struct mlx5e_ipsec_metadata *)(eth + 1);
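	/* Copy the original dst/src MACs up to the new frame start; the 8
	 * metadata bytes then sit between them and the original ethertype.
	 */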
memmove(skb->data, skb->data + sizeof(*mdata),
2 * ETH_ALEN);
eth->h_proto = cpu_to_be16(MLX5E_METADATA_ETHER_TYPE);
memset(mdata->content.raw, 0, sizeof(mdata->content.raw));
return mdata;
}
static int mlx5e_ipsec_remove_trailer(struct sk_buff *skb, struct xfrm_state *x)
{
unsigned int alen = crypto_aead_authsize(x->data);
struct ipv6hdr *ipv6hdr = ipv6_hdr(skb);
struct iphdr *ipv4hdr = ip_hdr(skb);
unsigned int trailer_len;
u8 plen;
int ret;
ret = skb_copy_bits(skb, skb->len - alen - 2, &plen, 1);
if (unlikely(ret))
return ret;
trailer_len = alen + plen + 2;
pskb_trim(skb, skb->len - trailer_len);
if (skb->protocol == htons(ETH_P_IP)) {
ipv4hdr->tot_len = htons(ntohs(ipv4hdr->tot_len) - trailer_len);
ip_send_check(ipv4hdr);
} else {
ipv6hdr->payload_len = htons(ntohs(ipv6hdr->payload_len) -
trailer_len);
}
return 0;
}
static void mlx5e_ipsec_set_swp(struct sk_buff *skb,
struct mlx5_wqe_eth_seg *eseg, u8 mode,
struct xfrm_offload *xo)
{
u8 proto;
/* Tunnel Mode:
* SWP: OutL3 InL3 InL4
* Pkt: MAC IP ESP IP L4
*
* Transport Mode:
* SWP: OutL3 InL4
* InL3
* Pkt: MAC IP ESP L4
*
* Offsets are in 2-byte words, counting from start of frame
*/
eseg->swp_outer_l3_offset = skb_network_offset(skb) / 2;
if (skb->protocol == htons(ETH_P_IPV6))
eseg->swp_flags |= MLX5_ETH_WQE_SWP_OUTER_L3_IPV6;
if (mode == XFRM_MODE_TUNNEL) {
eseg->swp_inner_l3_offset = skb_inner_network_offset(skb) / 2;
if (xo->proto == IPPROTO_IPV6) {
eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L3_IPV6;
proto = inner_ipv6_hdr(skb)->nexthdr;
} else {
proto = inner_ip_hdr(skb)->protocol;
}
} else {
eseg->swp_inner_l3_offset = skb_network_offset(skb) / 2;
if (skb->protocol == htons(ETH_P_IPV6))
eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L3_IPV6;
proto = xo->proto;
}
switch (proto) {
case IPPROTO_UDP:
eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L4_UDP;
/* Fall through */
case IPPROTO_TCP:
eseg->swp_inner_l4_offset = skb_inner_transport_offset(skb) / 2;
break;
}
}
static void mlx5e_ipsec_set_iv(struct sk_buff *skb, struct xfrm_offload *xo)
{
int iv_offset;
__be64 seqno;
/* Place the SN in the IV field */
seqno = cpu_to_be64(xo->seq.low + ((u64)xo->seq.hi << 32));
iv_offset = skb_transport_offset(skb) + sizeof(struct ip_esp_hdr);
skb_store_bits(skb, iv_offset, &seqno, 8);
}
static void mlx5e_ipsec_set_metadata(struct sk_buff *skb,
struct mlx5e_ipsec_metadata *mdata,
struct xfrm_offload *xo)
{
struct ip_esp_hdr *esph;
struct tcphdr *tcph;
if (skb_is_gso(skb)) {
/* Add LSO metadata indication */
esph = ip_esp_hdr(skb);
tcph = inner_tcp_hdr(skb);
netdev_dbg(skb->dev, " Offloading GSO packet outer L3 %u; L4 %u; Inner L3 %u; L4 %u\n",
skb->network_header,
skb->transport_header,
skb->inner_network_header,
skb->inner_transport_header);
netdev_dbg(skb->dev, " Offloading GSO packet of len %u; mss %u; TCP sp %u dp %u seq 0x%x ESP seq 0x%x\n",
skb->len, skb_shinfo(skb)->gso_size,
ntohs(tcph->source), ntohs(tcph->dest),
ntohl(tcph->seq), ntohl(esph->seq_no));
mdata->syndrome = MLX5E_IPSEC_TX_SYNDROME_OFFLOAD_WITH_LSO_TCP;
mdata->content.tx.mss_inv = mlx5e_ipsec_mss_inv(skb);
mdata->content.tx.seq = htons(ntohl(tcph->seq) & 0xFFFF);
} else {
mdata->syndrome = MLX5E_IPSEC_TX_SYNDROME_OFFLOAD;
}
mdata->content.tx.esp_next_proto = xo->proto;
netdev_dbg(skb->dev, " TX metadata syndrome %u proto %u mss_inv %04x seq %04x\n",
mdata->syndrome, mdata->content.tx.esp_next_proto,
ntohs(mdata->content.tx.mss_inv),
ntohs(mdata->content.tx.seq));
}
struct sk_buff *mlx5e_ipsec_handle_tx_skb(struct net_device *netdev,
struct mlx5e_tx_wqe *wqe,
struct sk_buff *skb)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
struct xfrm_offload *xo = xfrm_offload(skb);
struct mlx5e_ipsec_metadata *mdata;
struct xfrm_state *x;
if (!xo)
return skb;
if (unlikely(skb->sp->len != 1)) {
atomic64_inc(&priv->ipsec->sw_stats.ipsec_tx_drop_bundle);
goto drop;
}
x = xfrm_input_state(skb);
if (unlikely(!x)) {
atomic64_inc(&priv->ipsec->sw_stats.ipsec_tx_drop_no_state);
goto drop;
}
if (unlikely(!x->xso.offload_handle ||
(skb->protocol != htons(ETH_P_IP) &&
skb->protocol != htons(ETH_P_IPV6)))) {
atomic64_inc(&priv->ipsec->sw_stats.ipsec_tx_drop_not_ip);
goto drop;
}
if (!skb_is_gso(skb))
if (unlikely(mlx5e_ipsec_remove_trailer(skb, x))) {
atomic64_inc(&priv->ipsec->sw_stats.ipsec_tx_drop_trailer);
goto drop;
}
mdata = mlx5e_ipsec_add_metadata(skb);
if (unlikely(IS_ERR(mdata))) {
atomic64_inc(&priv->ipsec->sw_stats.ipsec_tx_drop_metadata);
goto drop;
}
mlx5e_ipsec_set_swp(skb, &wqe->eth, x->props.mode, xo);
mlx5e_ipsec_set_iv(skb, xo);
mlx5e_ipsec_set_metadata(skb, mdata, xo);
return skb;
drop:
kfree_skb(skb);
return NULL;
}
static inline struct xfrm_state *
mlx5e_ipsec_build_sp(struct net_device *netdev, struct sk_buff *skb,
struct mlx5e_ipsec_metadata *mdata)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
struct xfrm_offload *xo;
struct xfrm_state *xs;
u32 sa_handle;
skb->sp = secpath_dup(skb->sp);
if (unlikely(!skb->sp)) {
atomic64_inc(&priv->ipsec->sw_stats.ipsec_rx_drop_sp_alloc);
return NULL;
}
sa_handle = be32_to_cpu(mdata->content.rx.sa_handle);
xs = mlx5e_ipsec_sadb_rx_lookup(priv->ipsec, sa_handle);
if (unlikely(!xs)) {
atomic64_inc(&priv->ipsec->sw_stats.ipsec_rx_drop_sadb_miss);
return NULL;
}
skb->sp->xvec[skb->sp->len++] = xs;
skb->sp->olen++;
xo = xfrm_offload(skb);
xo->flags = CRYPTO_DONE;
switch (mdata->syndrome) {
case MLX5E_IPSEC_RX_SYNDROME_DECRYPTED:
xo->status = CRYPTO_SUCCESS;
break;
case MLX5E_IPSEC_RX_SYNDROME_AUTH_FAILED:
xo->status = CRYPTO_TUNNEL_ESP_AUTH_FAILED;
break;
default:
atomic64_inc(&priv->ipsec->sw_stats.ipsec_rx_drop_syndrome);
return NULL;
}
return xs;
}
struct sk_buff *mlx5e_ipsec_handle_rx_skb(struct net_device *netdev,
struct sk_buff *skb)
{
struct mlx5e_ipsec_metadata *mdata;
struct ethhdr *old_eth;
struct ethhdr *new_eth;
struct xfrm_state *xs;
__be16 *ethtype;
/* Detect inline metadata */
if (skb->len < ETH_HLEN + MLX5E_METADATA_ETHER_LEN)
return skb;
ethtype = (__be16 *)(skb->data + ETH_ALEN * 2);
if (*ethtype != cpu_to_be16(MLX5E_METADATA_ETHER_TYPE))
return skb;
/* Use the metadata */
mdata = (struct mlx5e_ipsec_metadata *)(skb->data + ETH_HLEN);
xs = mlx5e_ipsec_build_sp(netdev, skb, mdata);
if (unlikely(!xs)) {
kfree_skb(skb);
return NULL;
}
/* Remove the metadata from the buffer */
old_eth = (struct ethhdr *)skb->data;
new_eth = (struct ethhdr *)(skb->data + MLX5E_METADATA_ETHER_LEN);
memmove(new_eth, old_eth, 2 * ETH_ALEN);
/* Ethertype is already in its new place */
skb_pull_inline(skb, MLX5E_METADATA_ETHER_LEN);
return skb;
}
bool mlx5e_ipsec_feature_check(struct sk_buff *skb, struct net_device *netdev,
netdev_features_t features)
{
struct xfrm_state *x;
if (skb->sp && skb->sp->len) {
x = skb->sp->xvec[0];
if (x && x->xso.offload_handle)
return true;
}
return false;
}
void mlx5e_ipsec_build_inverse_table(void)
{
u16 mss_inv;
u32 mss;
/* Calculate 1/x inverse table for use in GSO data path.
* Using this table, we provide the IPSec accelerator with the value of
* 1/gso_size so that it can infer the position of each segment inside
* the GSO, and increment the ESP sequence number, and generate the IV.
* The HW needs this value in Q0.16 fixed-point number format
*/
mlx5e_ipsec_inverse_table[1] = htons(0xFFFF);
for (mss = 2; mss < MAX_LSO_MSS; mss++) {
mss_inv = ((1ULL << 32) / mss) >> 16;
mlx5e_ipsec_inverse_table[mss] = htons(mss_inv);
}
}
/*
* Copyright (c) 2017 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*/
#ifndef __MLX5E_IPSEC_RXTX_H__
#define __MLX5E_IPSEC_RXTX_H__
#ifdef CONFIG_MLX5_EN_IPSEC
#include <linux/skbuff.h>
#include "en.h"
struct sk_buff *mlx5e_ipsec_handle_rx_skb(struct net_device *netdev,
struct sk_buff *skb);
void mlx5e_ipsec_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
void mlx5e_ipsec_inverse_table_init(void);
bool mlx5e_ipsec_feature_check(struct sk_buff *skb, struct net_device *netdev,
netdev_features_t features);
struct sk_buff *mlx5e_ipsec_handle_tx_skb(struct net_device *netdev,
struct mlx5e_tx_wqe *wqe,
struct sk_buff *skb);
#endif /* CONFIG_MLX5_EN_IPSEC */
#endif /* __MLX5E_IPSEC_RXTX_H__ */
/*
* Copyright (c) 2017 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*/
#include <linux/ethtool.h>
#include <net/sock.h>
#include "en.h"
#include "accel/ipsec.h"
#include "fpga/sdk.h"
#include "en_accel/ipsec.h"
static const struct counter_desc mlx5e_ipsec_hw_stats_desc[] = {
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_stats, ipsec_dec_in_packets) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_stats, ipsec_dec_out_packets) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_stats, ipsec_dec_bypass_packets) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_stats, ipsec_enc_in_packets) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_stats, ipsec_enc_out_packets) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_stats, ipsec_enc_bypass_packets) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_stats, ipsec_dec_drop_packets) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_stats, ipsec_dec_auth_fail_packets) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_stats, ipsec_enc_drop_packets) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_stats, ipsec_add_sa_success) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_stats, ipsec_add_sa_fail) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_stats, ipsec_del_sa_success) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_stats, ipsec_del_sa_fail) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_stats, ipsec_cmd_drop) },
};
static const struct counter_desc mlx5e_ipsec_sw_stats_desc[] = {
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_sw_stats, ipsec_rx_drop_sp_alloc) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_sw_stats, ipsec_rx_drop_sadb_miss) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_sw_stats, ipsec_rx_drop_syndrome) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_sw_stats, ipsec_tx_drop_bundle) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_sw_stats, ipsec_tx_drop_no_state) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_sw_stats, ipsec_tx_drop_not_ip) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_sw_stats, ipsec_tx_drop_trailer) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_sw_stats, ipsec_tx_drop_metadata) },
};
#define MLX5E_READ_CTR_ATOMIC64(ptr, dsc, i) \
atomic64_read((atomic64_t *)((char *)(ptr) + (dsc)[i].offset))
#define NUM_IPSEC_HW_COUNTERS ARRAY_SIZE(mlx5e_ipsec_hw_stats_desc)
#define NUM_IPSEC_SW_COUNTERS ARRAY_SIZE(mlx5e_ipsec_sw_stats_desc)
#define NUM_IPSEC_COUNTERS (NUM_IPSEC_HW_COUNTERS + NUM_IPSEC_SW_COUNTERS)
int mlx5e_ipsec_get_count(struct mlx5e_priv *priv)
{
if (!priv->ipsec)
return 0;
return NUM_IPSEC_COUNTERS;
}
int mlx5e_ipsec_get_strings(struct mlx5e_priv *priv, uint8_t *data)
{
unsigned int i, idx = 0;
if (!priv->ipsec)
return 0;
for (i = 0; i < NUM_IPSEC_HW_COUNTERS; i++)
strcpy(data + (idx++) * ETH_GSTRING_LEN,
mlx5e_ipsec_hw_stats_desc[i].format);
for (i = 0; i < NUM_IPSEC_SW_COUNTERS; i++)
strcpy(data + (idx++) * ETH_GSTRING_LEN,
mlx5e_ipsec_sw_stats_desc[i].format);
return NUM_IPSEC_COUNTERS;
}
void mlx5e_ipsec_update_stats(struct mlx5e_priv *priv)
{
int ret;
if (!priv->ipsec)
return;
ret = mlx5_accel_ipsec_counters_read(priv->mdev, (u64 *)&priv->ipsec->stats,
NUM_IPSEC_HW_COUNTERS);
if (ret)
memset(&priv->ipsec->stats, 0, sizeof(priv->ipsec->stats));
}
int mlx5e_ipsec_get_stats(struct mlx5e_priv *priv, u64 *data)
{
int i, idx = 0;
if (!priv->ipsec)
return 0;
for (i = 0; i < NUM_IPSEC_HW_COUNTERS; i++)
data[idx++] = MLX5E_READ_CTR64_CPU(&priv->ipsec->stats,
mlx5e_ipsec_hw_stats_desc, i);
for (i = 0; i < NUM_IPSEC_SW_COUNTERS; i++)
data[idx++] = MLX5E_READ_CTR_ATOMIC64(&priv->ipsec->sw_stats,
mlx5e_ipsec_sw_stats_desc, i);
return NUM_IPSEC_COUNTERS;
}
@@ -31,6 +31,7 @@
*/
#include "en.h"
#include "en_accel/ipsec.h"
void mlx5e_ethtool_get_drvinfo(struct mlx5e_priv *priv,
struct ethtool_drvinfo *drvinfo)
@@ -186,7 +187,8 @@ int mlx5e_ethtool_get_sset_count(struct mlx5e_priv *priv, int sset)
MLX5E_NUM_SQ_STATS(priv) +
MLX5E_NUM_PFC_COUNTERS(priv) +
ARRAY_SIZE(mlx5e_pme_status_desc) +
-		       ARRAY_SIZE(mlx5e_pme_error_desc);
+		       ARRAY_SIZE(mlx5e_pme_error_desc) +
+		       mlx5e_ipsec_get_count(priv);
case ETH_SS_PRIV_FLAGS:
return ARRAY_SIZE(mlx5e_priv_flags);
@@ -275,6 +277,9 @@ static void mlx5e_fill_stats_strings(struct mlx5e_priv *priv, uint8_t *data)
for (i = 0; i < ARRAY_SIZE(mlx5e_pme_error_desc); i++)
strcpy(data + (idx++) * ETH_GSTRING_LEN, mlx5e_pme_error_desc[i].format);
/* IPSec counters */
idx += mlx5e_ipsec_get_strings(priv, data + idx * ETH_GSTRING_LEN);
if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
return;
@@ -403,6 +408,9 @@ void mlx5e_ethtool_get_ethtool_stats(struct mlx5e_priv *priv,
data[idx++] = MLX5E_READ_CTR64_CPU(mlx5_priv->pme_stats.error_counters,
mlx5e_pme_error_desc, i);
/* IPSec counters */
idx += mlx5e_ipsec_get_stats(priv, data + idx);
if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
return;
@@ -39,6 +39,9 @@
#include "en.h"
#include "en_tc.h"
#include "en_rep.h"
#include "en_accel/ipsec.h"
#include "en_accel/ipsec_rxtx.h"
#include "accel/ipsec.h"
#include "vxlan.h"
struct mlx5e_rq_param {
@@ -115,7 +118,7 @@ void mlx5e_set_rq_type_params(struct mlx5_core_dev *mdev,
static void mlx5e_set_rq_params(struct mlx5_core_dev *mdev, struct mlx5e_params *params)
{
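	/* IPSec-capable devices fall back to the linked-list RQ even when
	 * striding is supported; the MPWQE RX path has no handler for the
	 * IPSec metadata (see the check in mlx5e_alloc_rq).
	 */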
u8 rq_type = mlx5e_check_fragmented_striding_rq_cap(mdev) &&
-			!params->xdp_prog ?
+			!params->xdp_prog && !MLX5_IPSEC_DEV(mdev) ?
MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ :
MLX5_WQ_TYPE_LINKED_LIST;
mlx5e_set_rq_type_params(mdev, params, rq_type);
@@ -328,8 +331,10 @@ static void mlx5e_update_pcie_counters(struct mlx5e_priv *priv)
void mlx5e_update_stats(struct mlx5e_priv *priv, bool full)
{
-	if (full)
+	if (full) {
mlx5e_update_pcie_counters(priv);
mlx5e_ipsec_update_stats(priv);
}
mlx5e_update_pport_counters(priv, full);
mlx5e_update_vport_counters(priv);
mlx5e_update_q_counter(priv);
@@ -592,6 +597,13 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
rq->dealloc_wqe = mlx5e_dealloc_rx_mpwqe;
rq->handle_rx_cqe = c->priv->profile->rx_handlers.handle_rx_cqe_mpwqe;
#ifdef CONFIG_MLX5_EN_IPSEC
if (MLX5_IPSEC_DEV(mdev)) {
err = -EINVAL;
netdev_err(c->netdev, "MPWQE RQ with IPSec offload not supported\n");
goto err_rq_wq_destroy;
}
#endif
if (!rq->handle_rx_cqe) {
err = -EINVAL;
netdev_err(c->netdev, "RX handler of MPWQE RQ is not set, err %d\n", err);
@@ -624,7 +636,12 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
rq->alloc_wqe = mlx5e_alloc_rx_wqe;
rq->dealloc_wqe = mlx5e_dealloc_rx_wqe;
rq->handle_rx_cqe = c->priv->profile->rx_handlers.handle_rx_cqe;
#ifdef CONFIG_MLX5_EN_IPSEC
if (c->priv->ipsec)
rq->handle_rx_cqe = mlx5e_ipsec_handle_rx_cqe;
else
#endif
rq->handle_rx_cqe = c->priv->profile->rx_handlers.handle_rx_cqe;
if (!rq->handle_rx_cqe) {
kfree(rq->wqe.frag_info);
err = -EINVAL;
@@ -635,6 +652,10 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c,
rq->buff.wqe_sz = params->lro_en ?
params->lro_wqe_sz :
MLX5E_SW2HW_MTU(c->priv, c->netdev->mtu);
#ifdef CONFIG_MLX5_EN_IPSEC
if (MLX5_IPSEC_DEV(mdev))
rq->buff.wqe_sz += MLX5E_METADATA_ETHER_LEN;
#endif
rq->wqe.page_reuse = !params->xdp_prog && !params->lro_en;
byte_count = rq->buff.wqe_sz;
@@ -1095,6 +1116,8 @@ static int mlx5e_alloc_txqsq(struct mlx5e_channel *c,
sq->uar_map = mdev->mlx5e_res.bfreg.map;
sq->max_inline = params->tx_max_inline;
sq->min_inline_mode = params->tx_min_inline_mode;
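	/* Tag SQs of IPSec-capable devices so that mlx5e_xmit() runs their
	 * skbs through mlx5e_ipsec_handle_tx_skb() before posting the WQE.
	 */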
if (MLX5_IPSEC_DEV(c->priv->mdev))
set_bit(MLX5E_SQ_STATE_IPSEC, &sq->state);
param->wq.db_numa_node = cpu_to_node(c->cpu);
err = mlx5_wq_cyc_create(mdev, &param->wq, sqc_wq, &sq->wq, &sq->wq_ctrl);
@@ -1914,6 +1937,7 @@ static void mlx5e_build_sq_param(struct mlx5e_priv *priv,
mlx5e_build_sq_param_common(priv, param);
MLX5_SET(wq, wq, log_wq_sz, params->log_sq_size);
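	/* allow_swp lets SQ WQEs carry software-parser offsets, which the
	 * IPSec TX path uses to point the HW at the headers of ESP packets.
	 */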
MLX5_SET(sqc, sqc, allow_swp, !!MLX5_IPSEC_DEV(priv->mdev));
}
static void mlx5e_build_common_cq_param(struct mlx5e_priv *priv,
@@ -3508,6 +3532,11 @@ static netdev_features_t mlx5e_features_check(struct sk_buff *skb,
features = vlan_features_check(skb, features);
features = vxlan_features_check(skb, features);
#ifdef CONFIG_MLX5_EN_IPSEC
if (mlx5e_ipsec_feature_check(skb, netdev, features))
return features;
#endif
/* Validate if the tunneled packet is being offloaded by HW */
if (skb->encapsulation &&
(features & NETIF_F_CSUM_MASK || features & NETIF_F_GSO_MASK))
@@ -3555,6 +3584,12 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
goto unlock;
}
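	/* HW IPSec offload and XDP are mutually exclusive on the data path,
	 * so refuse to attach a program while ESP offload is advertised.
	 */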
if ((netdev->features & NETIF_F_HW_ESP) && prog) {
netdev_warn(netdev, "can't set XDP with IPSec offload\n");
err = -EINVAL;
goto unlock;
}
was_opened = test_bit(MLX5E_STATE_OPENED, &priv->state);
/* no need for full reset when exchanging programs */
reset = (!priv->channels.params.xdp_prog || !prog);
@@ -4046,6 +4081,8 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
if (MLX5_CAP_GEN(mdev, vport_group_manager))
netdev->switchdev_ops = &mlx5e_switchdev_ops;
#endif
mlx5e_ipsec_build_netdev(priv);
}
static void mlx5e_create_q_counter(struct mlx5e_priv *priv)
@@ -4074,14 +4111,19 @@ static void mlx5e_nic_init(struct mlx5_core_dev *mdev,
void *ppriv)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
int err;
mlx5e_build_nic_netdev_priv(mdev, netdev, profile, ppriv);
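	/* IPSec init failure is deliberately non-fatal: the error is logged
	 * and the netdev comes up without IPSec offload.
	 */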
err = mlx5e_ipsec_init(priv);
if (err)
mlx5_core_err(mdev, "IPSec initialization failed, %d\n", err);
mlx5e_build_nic_netdev(netdev);
mlx5e_vxlan_init(priv);
}
static void mlx5e_nic_cleanup(struct mlx5e_priv *priv)
{
mlx5e_ipsec_cleanup(priv);
mlx5e_vxlan_cleanup(priv);
if (priv->channels.params.xdp_prog)
@@ -4473,6 +4515,7 @@ static struct mlx5_interface mlx5e_interface = {
void mlx5e_init(void)
{
mlx5e_ipsec_build_inverse_table();
mlx5e_build_ptys2ethtool_map();
mlx5_register_interface(&mlx5e_interface);
}
@@ -41,6 +41,7 @@
#include "eswitch.h"
#include "en_rep.h"
#include "ipoib/ipoib.h"
#include "en_accel/ipsec_rxtx.h"
static inline bool mlx5e_rx_hw_stamp(struct mlx5e_tstamp *tstamp)
{
@@ -996,7 +997,7 @@ int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget)
work_done += mlx5e_decompress_cqes_cont(rq, cq, 0, budget);
for (; work_done < budget; work_done++) {
-		struct mlx5_cqe64 *cqe = mlx5e_get_cqe(cq);
+		struct mlx5_cqe64 *cqe = mlx5_cqwq_get_cqe(&cq->wq);
if (!cqe)
break;
@@ -1050,7 +1051,7 @@ bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq)
u16 wqe_counter;
bool last_wqe;
-		cqe = mlx5e_get_cqe(cq);
+		cqe = mlx5_cqwq_get_cqe(&cq->wq);
if (!cqe)
break;
@@ -1183,3 +1184,43 @@ void mlx5i_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
}
#endif /* CONFIG_MLX5_CORE_IPOIB */
#ifdef CONFIG_MLX5_EN_IPSEC
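/* RX completion handler used on IPSec-capable devices: same as the
 * regular handler except the skb is first passed through
 * mlx5e_ipsec_handle_rx_skb(), which consumes the per-packet IPSec
 * metadata added by the HW and may drop the packet (returning NULL).
 */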
void mlx5e_ipsec_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
{
struct mlx5e_wqe_frag_info *wi;
struct mlx5e_rx_wqe *wqe;
__be16 wqe_counter_be;
struct sk_buff *skb;
u16 wqe_counter;
u32 cqe_bcnt;
wqe_counter_be = cqe->wqe_counter;
wqe_counter = be16_to_cpu(wqe_counter_be);
wqe = mlx5_wq_ll_get_wqe(&rq->wq, wqe_counter);
wi = &rq->wqe.frag_info[wqe_counter];
cqe_bcnt = be32_to_cpu(cqe->byte_cnt);
skb = skb_from_cqe(rq, cqe, wi, cqe_bcnt);
if (unlikely(!skb)) {
/* a DROP, save the page-reuse checks */
mlx5e_free_rx_wqe(rq, wi);
goto wq_ll_pop;
}
skb = mlx5e_ipsec_handle_rx_skb(rq->netdev, skb);
if (unlikely(!skb)) {
mlx5e_free_rx_wqe(rq, wi);
goto wq_ll_pop;
}
mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
napi_gro_receive(rq->cq.napi, skb);
mlx5e_free_rx_wqe_reuse(rq, wi);
wq_ll_pop:
mlx5_wq_ll_pop(&rq->wq, wqe_counter_be,
&wqe->next.next_wqe_index);
}
#endif /* CONFIG_MLX5_EN_IPSEC */
@@ -34,6 +34,7 @@
#include <linux/if_vlan.h>
#include "en.h"
#include "ipoib/ipoib.h"
#include "en_accel/ipsec_rxtx.h"
#define MLX5E_SQ_NOPS_ROOM MLX5_SEND_WQE_MAX_WQEBBS
#define MLX5E_SQ_STOP_ROOM (MLX5_SEND_WQE_MAX_WQEBBS +\
@@ -299,12 +300,9 @@ mlx5e_txwqe_complete(struct mlx5e_txqsq *sq, struct sk_buff *skb,
}
}
-static netdev_tx_t mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb)
+static netdev_tx_t mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb,
+				 struct mlx5e_tx_wqe *wqe, u16 pi)
{
-	struct mlx5_wq_cyc *wq = &sq->wq;
-	u16 pi = sq->pc & wq->sz_m1;
-	struct mlx5e_tx_wqe *wqe = mlx5_wq_cyc_get_wqe(wq, pi);
struct mlx5e_tx_wqe_info *wi = &sq->db.wqe_info[pi];
struct mlx5_wqe_ctrl_seg *cseg = &wqe->ctrl;
@@ -319,8 +317,6 @@ static netdev_tx_t mlx5e_sq_xmit(struct mlx5e_txqsq *sq, struct sk_buff *skb)
u16 ds_cnt;
u16 ihs;
-	memset(wqe, 0, sizeof(*wqe));
mlx5e_txwqe_build_eseg_csum(sq, skb, eseg);
if (skb_is_gso(skb)) {
@@ -375,8 +371,21 @@ netdev_tx_t mlx5e_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct mlx5e_priv *priv = netdev_priv(dev);
struct mlx5e_txqsq *sq = priv->txq2sq[skb_get_queue_mapping(skb)];
struct mlx5_wq_cyc *wq = &sq->wq;
u16 pi = sq->pc & wq->sz_m1;
struct mlx5e_tx_wqe *wqe = mlx5_wq_cyc_get_wqe(wq, pi);
memset(wqe, 0, sizeof(*wqe));
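	/* On IPSec SQs, give the IPSec code first look at the skb; it may
	 * edit the WQE (e.g. add metadata) or consume the skb and return
	 * NULL, in which case we return NETDEV_TX_OK.
	 */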
#ifdef CONFIG_MLX5_EN_IPSEC
if (sq->state & BIT(MLX5E_SQ_STATE_IPSEC)) {
skb = mlx5e_ipsec_handle_tx_skb(dev, wqe, skb);
if (unlikely(!skb))
return NETDEV_TX_OK;
}
#endif
-	return mlx5e_sq_xmit(sq, skb);
+	return mlx5e_sq_xmit(sq, skb, wqe, pi);
}
bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget)
@@ -409,7 +418,7 @@ bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget)
u16 wqe_counter;
bool last_wqe;
-	cqe = mlx5e_get_cqe(cq);
+	cqe = mlx5_cqwq_get_cqe(&cq->wq);
if (!cqe)
break;
@@ -32,23 +32,6 @@
#include "en.h"
-struct mlx5_cqe64 *mlx5e_get_cqe(struct mlx5e_cq *cq)
-{
-	struct mlx5_cqwq *wq = &cq->wq;
-	u32 ci = mlx5_cqwq_get_ci(wq);
-	struct mlx5_cqe64 *cqe = mlx5_cqwq_get_wqe(wq, ci);
-	u8 cqe_ownership_bit = cqe->op_own & MLX5_CQE_OWNER_MASK;
-	u8 sw_ownership_val = mlx5_cqwq_get_wrap_cnt(wq) & 1;
-	if (cqe_ownership_bit != sw_ownership_val)
-		return NULL;
-	/* ensure cqe content is read after cqe ownership bit */
-	dma_rmb();
-	return cqe;
-}
static inline void mlx5e_poll_ico_single_cqe(struct mlx5e_cq *cq,
struct mlx5e_icosq *sq,
struct mlx5_cqe64 *cqe,
@@ -89,7 +72,7 @@ static void mlx5e_poll_ico_cq(struct mlx5e_cq *cq)
if (unlikely(!test_bit(MLX5E_SQ_STATE_ENABLED, &sq->state)))
return;
-	cqe = mlx5e_get_cqe(cq);
+	cqe = mlx5_cqwq_get_cqe(&cq->wq);
if (likely(!cqe))
return;
@@ -33,10 +33,44 @@
#include <linux/etherdevice.h>
#include <linux/mlx5/cmd.h>
#include <linux/mlx5/driver.h>
#include <linux/mlx5/device.h>
#include "mlx5_core.h"
#include "fpga/cmd.h"
#define MLX5_FPGA_ACCESS_REG_SZ (MLX5_ST_SZ_DW(fpga_access_reg) + \
MLX5_FPGA_ACCESS_REG_SIZE_MAX)
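/* Read or write one chunk of the FPGA address space through the
 * FPGA_ACCESS_REG access register. Address and size must be dword-aligned
 * and size is capped at MLX5_FPGA_ACCESS_REG_SIZE_MAX; larger transfers
 * are split into chunks by the callers.
 */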
int mlx5_fpga_access_reg(struct mlx5_core_dev *dev, u8 size, u64 addr,
void *buf, bool write)
{
u32 in[MLX5_FPGA_ACCESS_REG_SZ] = {0};
u32 out[MLX5_FPGA_ACCESS_REG_SZ];
int err;
if (size & 3)
return -EINVAL;
if (addr & 3)
return -EINVAL;
if (size > MLX5_FPGA_ACCESS_REG_SIZE_MAX)
return -EINVAL;
MLX5_SET(fpga_access_reg, in, size, size);
MLX5_SET64(fpga_access_reg, in, address, addr);
if (write)
memcpy(MLX5_ADDR_OF(fpga_access_reg, in, data), buf, size);
err = mlx5_core_access_reg(dev, in, sizeof(in), out, sizeof(out),
MLX5_REG_FPGA_ACCESS_REG, 0, write);
if (err)
return err;
if (!write)
memcpy(buf, MLX5_ADDR_OF(fpga_access_reg, out, data), size);
return 0;
}
int mlx5_fpga_caps(struct mlx5_core_dev *dev, u32 *caps)
{
u32 in[MLX5_ST_SZ_DW(fpga_cap)] = {0};
@@ -46,6 +80,49 @@ int mlx5_fpga_caps(struct mlx5_core_dev *dev, u32 *caps)
MLX5_REG_FPGA_CAP, 0, 0);
}
int mlx5_fpga_ctrl_op(struct mlx5_core_dev *dev, u8 op)
{
u32 in[MLX5_ST_SZ_DW(fpga_ctrl)] = {0};
u32 out[MLX5_ST_SZ_DW(fpga_ctrl)];
MLX5_SET(fpga_ctrl, in, operation, op);
return mlx5_core_access_reg(dev, in, sizeof(in), out, sizeof(out),
MLX5_REG_FPGA_CTRL, 0, true);
}
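/* SBU capabilities live in FPGA address space rather than in a register:
 * read sandbox_extended_caps_len bytes from sandbox_extended_caps_addr in
 * access-register-sized chunks into the caller's buffer.
 */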
int mlx5_fpga_sbu_caps(struct mlx5_core_dev *dev, void *caps, int size)
{
unsigned int cap_size = MLX5_CAP_FPGA(dev, sandbox_extended_caps_len);
u64 addr = MLX5_CAP64_FPGA(dev, sandbox_extended_caps_addr);
unsigned int read;
int ret = 0;
if (cap_size > size) {
mlx5_core_warn(dev, "Not enough buffer %u for FPGA SBU caps %u",
size, cap_size);
return -EINVAL;
}
while (cap_size > 0) {
read = min_t(unsigned int, cap_size,
MLX5_FPGA_ACCESS_REG_SIZE_MAX);
ret = mlx5_fpga_access_reg(dev, read, addr, caps, false);
if (ret) {
mlx5_core_warn(dev, "Error reading FPGA SBU caps %u bytes at address 0x%llx: %d",
read, addr, ret);
return ret;
}
cap_size -= read;
addr += read;
caps += read;
}
return ret;
}
int mlx5_fpga_query(struct mlx5_core_dev *dev, struct mlx5_fpga_query *query)
{
u32 in[MLX5_ST_SZ_DW(fpga_ctrl)] = {0};
@@ -62,3 +139,100 @@ int mlx5_fpga_query(struct mlx5_core_dev *dev, struct mlx5_fpga_query *query)
query->oper_image = MLX5_GET(fpga_ctrl, out, flash_select_oper);
return 0;
}
int mlx5_fpga_create_qp(struct mlx5_core_dev *dev, void *fpga_qpc,
u32 *fpga_qpn)
{
u32 in[MLX5_ST_SZ_DW(fpga_create_qp_in)] = {0};
u32 out[MLX5_ST_SZ_DW(fpga_create_qp_out)];
int ret;
MLX5_SET(fpga_create_qp_in, in, opcode, MLX5_CMD_OP_FPGA_CREATE_QP);
memcpy(MLX5_ADDR_OF(fpga_create_qp_in, in, fpga_qpc), fpga_qpc,
MLX5_FLD_SZ_BYTES(fpga_create_qp_in, fpga_qpc));
ret = mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
if (ret)
return ret;
memcpy(fpga_qpc, MLX5_ADDR_OF(fpga_create_qp_out, out, fpga_qpc),
MLX5_FLD_SZ_BYTES(fpga_create_qp_out, fpga_qpc));
*fpga_qpn = MLX5_GET(fpga_create_qp_out, out, fpga_qpn);
return ret;
}
int mlx5_fpga_modify_qp(struct mlx5_core_dev *dev, u32 fpga_qpn,
enum mlx5_fpga_qpc_field_select fields,
void *fpga_qpc)
{
u32 in[MLX5_ST_SZ_DW(fpga_modify_qp_in)] = {0};
u32 out[MLX5_ST_SZ_DW(fpga_modify_qp_out)];
MLX5_SET(fpga_modify_qp_in, in, opcode, MLX5_CMD_OP_FPGA_MODIFY_QP);
MLX5_SET(fpga_modify_qp_in, in, field_select, fields);
MLX5_SET(fpga_modify_qp_in, in, fpga_qpn, fpga_qpn);
memcpy(MLX5_ADDR_OF(fpga_modify_qp_in, in, fpga_qpc), fpga_qpc,
MLX5_FLD_SZ_BYTES(fpga_modify_qp_in, fpga_qpc));
return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
}
int mlx5_fpga_query_qp(struct mlx5_core_dev *dev,
u32 fpga_qpn, void *fpga_qpc)
{
u32 in[MLX5_ST_SZ_DW(fpga_query_qp_in)] = {0};
u32 out[MLX5_ST_SZ_DW(fpga_query_qp_out)];
int ret;
MLX5_SET(fpga_query_qp_in, in, opcode, MLX5_CMD_OP_FPGA_QUERY_QP);
MLX5_SET(fpga_query_qp_in, in, fpga_qpn, fpga_qpn);
ret = mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
if (ret)
return ret;
	memcpy(fpga_qpc, MLX5_ADDR_OF(fpga_query_qp_out, out, fpga_qpc),
MLX5_FLD_SZ_BYTES(fpga_query_qp_out, fpga_qpc));
return ret;
}
int mlx5_fpga_destroy_qp(struct mlx5_core_dev *dev, u32 fpga_qpn)
{
u32 in[MLX5_ST_SZ_DW(fpga_destroy_qp_in)] = {0};
u32 out[MLX5_ST_SZ_DW(fpga_destroy_qp_out)];
MLX5_SET(fpga_destroy_qp_in, in, opcode, MLX5_CMD_OP_FPGA_DESTROY_QP);
MLX5_SET(fpga_destroy_qp_in, in, fpga_qpn, fpga_qpn);
return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
}
int mlx5_fpga_query_qp_counters(struct mlx5_core_dev *dev, u32 fpga_qpn,
bool clear, struct mlx5_fpga_qp_counters *data)
{
u32 in[MLX5_ST_SZ_DW(fpga_query_qp_counters_in)] = {0};
u32 out[MLX5_ST_SZ_DW(fpga_query_qp_counters_out)];
int ret;
MLX5_SET(fpga_query_qp_counters_in, in, opcode,
MLX5_CMD_OP_FPGA_QUERY_QP_COUNTERS);
MLX5_SET(fpga_query_qp_counters_in, in, clear, clear);
MLX5_SET(fpga_query_qp_counters_in, in, fpga_qpn, fpga_qpn);
ret = mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
if (ret)
return ret;
data->rx_ack_packets = MLX5_GET64(fpga_query_qp_counters_out, out,
rx_ack_packets);
data->rx_send_packets = MLX5_GET64(fpga_query_qp_counters_out, out,
rx_send_packets);
data->tx_ack_packets = MLX5_GET64(fpga_query_qp_counters_out, out,
tx_ack_packets);
data->tx_send_packets = MLX5_GET64(fpga_query_qp_counters_out, out,
tx_send_packets);
data->rx_total_drop = MLX5_GET64(fpga_query_qp_counters_out, out,
rx_total_drop);
return ret;
}
@@ -53,7 +53,32 @@ struct mlx5_fpga_query {
enum mlx5_fpga_status status;
};
enum mlx5_fpga_qpc_field_select {
MLX5_FPGA_QPC_STATE = BIT(0),
};
struct mlx5_fpga_qp_counters {
u64 rx_ack_packets;
u64 rx_send_packets;
u64 tx_ack_packets;
u64 tx_send_packets;
u64 rx_total_drop;
};
int mlx5_fpga_caps(struct mlx5_core_dev *dev, u32 *caps);
int mlx5_fpga_query(struct mlx5_core_dev *dev, struct mlx5_fpga_query *query);
int mlx5_fpga_ctrl_op(struct mlx5_core_dev *dev, u8 op);
int mlx5_fpga_access_reg(struct mlx5_core_dev *dev, u8 size, u64 addr,
void *buf, bool write);
int mlx5_fpga_sbu_caps(struct mlx5_core_dev *dev, void *caps, int size);
int mlx5_fpga_create_qp(struct mlx5_core_dev *dev, void *fpga_qpc,
u32 *fpga_qpn);
int mlx5_fpga_modify_qp(struct mlx5_core_dev *dev, u32 fpga_qpn,
enum mlx5_fpga_qpc_field_select fields, void *fpga_qpc);
int mlx5_fpga_query_qp(struct mlx5_core_dev *dev, u32 fpga_qpn, void *fpga_qpc);
int mlx5_fpga_query_qp_counters(struct mlx5_core_dev *dev, u32 fpga_qpn,
bool clear, struct mlx5_fpga_qp_counters *data);
int mlx5_fpga_destroy_qp(struct mlx5_core_dev *dev, u32 fpga_qpn);
#endif /* __MLX5_FPGA_H__ */
[This file's diff is collapsed and not shown.]
/*
* Copyright (c) 2017 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*/
#ifndef __MLX5_FPGA_CONN_H__
#define __MLX5_FPGA_CONN_H__
#include <linux/mlx5/cq.h>
#include <linux/mlx5/qp.h>
#include "fpga/core.h"
#include "fpga/sdk.h"
#include "wq.h"
struct mlx5_fpga_conn {
struct mlx5_fpga_device *fdev;
void (*recv_cb)(void *cb_arg, struct mlx5_fpga_dma_buf *buf);
void *cb_arg;
/* FPGA QP */
u32 fpga_qpc[MLX5_ST_SZ_DW(fpga_qpc)];
u32 fpga_qpn;
/* CQ */
struct {
struct mlx5_cqwq wq;
struct mlx5_frag_wq_ctrl wq_ctrl;
struct mlx5_core_cq mcq;
struct tasklet_struct tasklet;
} cq;
/* QP */
struct {
bool active;
int sgid_index;
struct mlx5_wq_qp wq;
struct mlx5_wq_ctrl wq_ctrl;
struct mlx5_core_qp mqp;
struct {
spinlock_t lock; /* Protects all SQ state */
unsigned int pc;
unsigned int cc;
unsigned int size;
struct mlx5_fpga_dma_buf **bufs;
struct list_head backlog;
} sq;
struct {
unsigned int pc;
unsigned int cc;
unsigned int size;
struct mlx5_fpga_dma_buf **bufs;
} rq;
} qp;
};
int mlx5_fpga_conn_device_init(struct mlx5_fpga_device *fdev);
void mlx5_fpga_conn_device_cleanup(struct mlx5_fpga_device *fdev);
struct mlx5_fpga_conn *
mlx5_fpga_conn_create(struct mlx5_fpga_device *fdev,
struct mlx5_fpga_conn_attr *attr,
enum mlx5_ifc_fpga_qp_type qp_type);
void mlx5_fpga_conn_destroy(struct mlx5_fpga_conn *conn);
int mlx5_fpga_conn_send(struct mlx5_fpga_conn *conn,
struct mlx5_fpga_dma_buf *buf);
#endif /* __MLX5_FPGA_CONN_H__ */
@@ -35,7 +35,9 @@
#include <linux/mlx5/driver.h>
#include "mlx5_core.h"
#include "lib/mlx5.h"
#include "fpga/core.h"
#include "fpga/conn.h"
static const char *const mlx5_fpga_error_strings[] = {
"Null Syndrome",
@@ -100,10 +102,34 @@ static int mlx5_fpga_device_load_check(struct mlx5_fpga_device *fdev)
return 0;
}
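/* "brb" = bypass/reset/bypass-off: turn the shell's SBU bypass on, reset
 * the sandbox logic, then turn the bypass back off, so a newly loaded
 * user image starts from a clean state.
 */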
int mlx5_fpga_device_brb(struct mlx5_fpga_device *fdev)
{
int err;
struct mlx5_core_dev *mdev = fdev->mdev;
err = mlx5_fpga_ctrl_op(mdev, MLX5_FPGA_CTRL_OPERATION_SANDBOX_BYPASS_ON);
if (err) {
mlx5_fpga_err(fdev, "Failed to set bypass on: %d\n", err);
return err;
}
err = mlx5_fpga_ctrl_op(mdev, MLX5_FPGA_CTRL_OPERATION_RESET_SANDBOX);
if (err) {
mlx5_fpga_err(fdev, "Failed to reset SBU: %d\n", err);
return err;
}
err = mlx5_fpga_ctrl_op(mdev, MLX5_FPGA_CTRL_OPERATION_SANDBOX_BYPASS_OFF);
if (err) {
mlx5_fpga_err(fdev, "Failed to set bypass off: %d\n", err);
return err;
}
return 0;
}
int mlx5_fpga_device_start(struct mlx5_core_dev *mdev)
{
struct mlx5_fpga_device *fdev = mdev->fpga;
unsigned long flags;
unsigned int max_num_qps;
int err;
if (!fdev)
@@ -123,6 +149,28 @@ int mlx5_fpga_device_start(struct mlx5_core_dev *mdev)
mlx5_fpga_image_name(fdev->last_oper_image),
MLX5_CAP_FPGA(fdev->mdev, image_version));
max_num_qps = MLX5_CAP_FPGA(mdev, shell_caps.max_num_qps);
err = mlx5_core_reserve_gids(mdev, max_num_qps);
if (err)
goto out;
err = mlx5_fpga_conn_device_init(fdev);
if (err)
goto err_rsvd_gid;
if (fdev->last_oper_image == MLX5_FPGA_IMAGE_USER) {
err = mlx5_fpga_device_brb(fdev);
if (err)
goto err_conn_init;
}
goto out;
err_conn_init:
mlx5_fpga_conn_device_cleanup(fdev);
err_rsvd_gid:
mlx5_core_unreserve_gids(mdev, max_num_qps);
out:
spin_lock_irqsave(&fdev->state_lock, flags);
fdev->state = err ? MLX5_FPGA_STATUS_FAILURE : MLX5_FPGA_STATUS_SUCCESS;
@@ -130,7 +178,7 @@ int mlx5_fpga_device_start(struct mlx5_core_dev *mdev)
return err;
}
-int mlx5_fpga_device_init(struct mlx5_core_dev *mdev)
+int mlx5_fpga_init(struct mlx5_core_dev *mdev)
{
struct mlx5_fpga_device *fdev = NULL;
@@ -151,9 +199,42 @@ int mlx5_fpga_device_init(struct mlx5_core_dev *mdev)
return 0;
}
-void mlx5_fpga_device_cleanup(struct mlx5_core_dev *mdev)
+void mlx5_fpga_device_stop(struct mlx5_core_dev *mdev)
{
struct mlx5_fpga_device *fdev = mdev->fpga;
unsigned int max_num_qps;
unsigned long flags;
int err;
if (!fdev)
return;
spin_lock_irqsave(&fdev->state_lock, flags);
if (fdev->state != MLX5_FPGA_STATUS_SUCCESS) {
spin_unlock_irqrestore(&fdev->state_lock, flags);
return;
}
fdev->state = MLX5_FPGA_STATUS_NONE;
spin_unlock_irqrestore(&fdev->state_lock, flags);
if (fdev->last_oper_image == MLX5_FPGA_IMAGE_USER) {
err = mlx5_fpga_ctrl_op(mdev, MLX5_FPGA_CTRL_OPERATION_SANDBOX_BYPASS_ON);
if (err)
mlx5_fpga_err(fdev, "Failed to re-set SBU bypass on: %d\n",
err);
}
mlx5_fpga_conn_device_cleanup(fdev);
max_num_qps = MLX5_CAP_FPGA(mdev, shell_caps.max_num_qps);
mlx5_core_unreserve_gids(mdev, max_num_qps);
}
void mlx5_fpga_cleanup(struct mlx5_core_dev *mdev)
{
-	kfree(mdev->fpga);
struct mlx5_fpga_device *fdev = mdev->fpga;
mlx5_fpga_device_stop(mdev);
kfree(fdev);
mdev->fpga = NULL;
}
@@ -44,6 +44,15 @@ struct mlx5_fpga_device {
enum mlx5_fpga_status state;
enum mlx5_fpga_image last_admin_image;
enum mlx5_fpga_image last_oper_image;
/* QP Connection resources */
struct {
u32 pdn;
struct mlx5_core_mkey mkey;
struct mlx5_uars_page *uar;
} conn_res;
struct mlx5_fpga_ipsec *ipsec;
};
#define mlx5_fpga_dbg(__adev, format, ...) \
@@ -68,19 +77,20 @@ struct mlx5_fpga_device {
#define mlx5_fpga_info(__adev, format, ...) \
dev_info(&(__adev)->mdev->pdev->dev, "FPGA: " format, ##__VA_ARGS__)
-int mlx5_fpga_device_init(struct mlx5_core_dev *mdev);
-void mlx5_fpga_device_cleanup(struct mlx5_core_dev *mdev);
+int mlx5_fpga_init(struct mlx5_core_dev *mdev);
+void mlx5_fpga_cleanup(struct mlx5_core_dev *mdev);
int mlx5_fpga_device_start(struct mlx5_core_dev *mdev);
void mlx5_fpga_device_stop(struct mlx5_core_dev *mdev);
void mlx5_fpga_event(struct mlx5_core_dev *mdev, u8 event, void *data);
#else
-static inline int mlx5_fpga_device_init(struct mlx5_core_dev *mdev)
+static inline int mlx5_fpga_init(struct mlx5_core_dev *mdev)
{
return 0;
}
-static inline void mlx5_fpga_device_cleanup(struct mlx5_core_dev *mdev)
+static inline void mlx5_fpga_cleanup(struct mlx5_core_dev *mdev)
{
}
@@ -89,6 +99,10 @@ static inline int mlx5_fpga_device_start(struct mlx5_core_dev *mdev)
return 0;
}
static inline void mlx5_fpga_device_stop(struct mlx5_core_dev *mdev)
{
}
static inline void mlx5_fpga_event(struct mlx5_core_dev *mdev, u8 event,
void *data)
{
/*
* Copyright (c) 2017 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*/
#include <linux/mlx5/driver.h>
#include "mlx5_core.h"
#include "fpga/ipsec.h"
#include "fpga/sdk.h"
#include "fpga/core.h"
#define SBU_QP_QUEUE_SIZE 8
enum mlx5_ipsec_response_syndrome {
MLX5_IPSEC_RESPONSE_SUCCESS = 0,
MLX5_IPSEC_RESPONSE_ILLEGAL_REQUEST = 1,
MLX5_IPSEC_RESPONSE_SADB_ISSUE = 2,
MLX5_IPSEC_RESPONSE_WRITE_RESPONSE_ISSUE = 3,
};
enum mlx5_fpga_ipsec_sacmd_status {
MLX5_FPGA_IPSEC_SACMD_PENDING,
MLX5_FPGA_IPSEC_SACMD_SEND_FAIL,
MLX5_FPGA_IPSEC_SACMD_COMPLETE,
};
struct mlx5_ipsec_command_context {
struct mlx5_fpga_dma_buf buf;
struct mlx5_accel_ipsec_sa sa;
enum mlx5_fpga_ipsec_sacmd_status status;
int status_code;
struct completion complete;
struct mlx5_fpga_device *dev;
struct list_head list; /* Item in pending_cmds */
};
struct mlx5_ipsec_sadb_resp {
__be32 syndrome;
__be32 sw_sa_handle;
u8 reserved[24];
} __packed;
struct mlx5_fpga_ipsec {
struct list_head pending_cmds;
spinlock_t pending_cmds_lock; /* Protects pending_cmds */
u32 caps[MLX5_ST_SZ_DW(ipsec_extended_cap)];
struct mlx5_fpga_conn *conn;
};
static bool mlx5_fpga_is_ipsec_device(struct mlx5_core_dev *mdev)
{
if (!mdev->fpga || !MLX5_CAP_GEN(mdev, fpga))
return false;
if (MLX5_CAP_FPGA(mdev, ieee_vendor_id) !=
MLX5_FPGA_CAP_SANDBOX_VENDOR_ID_MLNX)
return false;
if (MLX5_CAP_FPGA(mdev, sandbox_product_id) !=
MLX5_FPGA_CAP_SANDBOX_PRODUCT_ID_IPSEC)
return false;
return true;
}
static void mlx5_fpga_ipsec_send_complete(struct mlx5_fpga_conn *conn,
struct mlx5_fpga_device *fdev,
struct mlx5_fpga_dma_buf *buf,
u8 status)
{
struct mlx5_ipsec_command_context *context;
if (status) {
context = container_of(buf, struct mlx5_ipsec_command_context,
buf);
mlx5_fpga_warn(fdev, "IPSec command send failed with status %u\n",
status);
context->status = MLX5_FPGA_IPSEC_SACMD_SEND_FAIL;
complete(&context->complete);
}
}
static inline int syndrome_to_errno(enum mlx5_ipsec_response_syndrome syndrome)
{
switch (syndrome) {
case MLX5_IPSEC_RESPONSE_SUCCESS:
return 0;
case MLX5_IPSEC_RESPONSE_SADB_ISSUE:
return -EEXIST;
case MLX5_IPSEC_RESPONSE_ILLEGAL_REQUEST:
return -EINVAL;
case MLX5_IPSEC_RESPONSE_WRITE_RESPONSE_ISSUE:
return -EIO;
}
return -EIO;
}
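/* Responses are matched FIFO against pending_cmds (the device answers
 * SADB commands in submission order) and then cross-checked against the
 * SA handle echoed back in the response.
 */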
static void mlx5_fpga_ipsec_recv(void *cb_arg, struct mlx5_fpga_dma_buf *buf)
{
struct mlx5_ipsec_sadb_resp *resp = buf->sg[0].data;
struct mlx5_ipsec_command_context *context;
enum mlx5_ipsec_response_syndrome syndrome;
struct mlx5_fpga_device *fdev = cb_arg;
unsigned long flags;
if (buf->sg[0].size < sizeof(*resp)) {
mlx5_fpga_warn(fdev, "Short receive from FPGA IPSec: %u < %zu bytes\n",
buf->sg[0].size, sizeof(*resp));
return;
}
mlx5_fpga_dbg(fdev, "mlx5_ipsec recv_cb syndrome %08x sa_id %x\n",
ntohl(resp->syndrome), ntohl(resp->sw_sa_handle));
spin_lock_irqsave(&fdev->ipsec->pending_cmds_lock, flags);
context = list_first_entry_or_null(&fdev->ipsec->pending_cmds,
struct mlx5_ipsec_command_context,
list);
if (context)
list_del(&context->list);
spin_unlock_irqrestore(&fdev->ipsec->pending_cmds_lock, flags);
if (!context) {
mlx5_fpga_warn(fdev, "Received IPSec offload response without pending command request\n");
return;
}
mlx5_fpga_dbg(fdev, "Handling response for %p\n", context);
if (context->sa.sw_sa_handle != resp->sw_sa_handle) {
mlx5_fpga_err(fdev, "mismatch SA handle. cmd 0x%08x vs resp 0x%08x\n",
ntohl(context->sa.sw_sa_handle),
ntohl(resp->sw_sa_handle));
return;
}
syndrome = ntohl(resp->syndrome);
context->status_code = syndrome_to_errno(syndrome);
context->status = MLX5_FPGA_IPSEC_SACMD_COMPLETE;
if (context->status_code)
mlx5_fpga_warn(fdev, "IPSec SADB command failed with syndrome %08x\n",
syndrome);
complete(&context->complete);
}
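/* Queue a SADB command towards the FPGA; may be called from atomic
 * context (note GFP_ATOMIC). The returned handle must be passed to
 * mlx5_fpga_ipsec_sa_cmd_wait(), which sleeps for the response and frees
 * the context.
 */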
void *mlx5_fpga_ipsec_sa_cmd_exec(struct mlx5_core_dev *mdev,
struct mlx5_accel_ipsec_sa *cmd)
{
struct mlx5_ipsec_command_context *context;
struct mlx5_fpga_device *fdev = mdev->fpga;
unsigned long flags;
int res = 0;
BUILD_BUG_ON((sizeof(struct mlx5_accel_ipsec_sa) & 3) != 0);
if (!fdev || !fdev->ipsec)
return ERR_PTR(-EOPNOTSUPP);
context = kzalloc(sizeof(*context), GFP_ATOMIC);
if (!context)
return ERR_PTR(-ENOMEM);
memcpy(&context->sa, cmd, sizeof(*cmd));
context->buf.complete = mlx5_fpga_ipsec_send_complete;
context->buf.sg[0].size = sizeof(context->sa);
context->buf.sg[0].data = &context->sa;
init_completion(&context->complete);
context->dev = fdev;
spin_lock_irqsave(&fdev->ipsec->pending_cmds_lock, flags);
list_add_tail(&context->list, &fdev->ipsec->pending_cmds);
spin_unlock_irqrestore(&fdev->ipsec->pending_cmds_lock, flags);
context->status = MLX5_FPGA_IPSEC_SACMD_PENDING;
res = mlx5_fpga_sbu_conn_sendmsg(fdev->ipsec->conn, &context->buf);
if (res) {
mlx5_fpga_warn(fdev, "Failure sending IPSec command: %d\n",
res);
spin_lock_irqsave(&fdev->ipsec->pending_cmds_lock, flags);
list_del(&context->list);
spin_unlock_irqrestore(&fdev->ipsec->pending_cmds_lock, flags);
kfree(context);
return ERR_PTR(res);
}
/* Context will be freed by wait func after completion */
return context;
}
int mlx5_fpga_ipsec_sa_cmd_wait(void *ctx)
{
struct mlx5_ipsec_command_context *context = ctx;
int res;
res = wait_for_completion_killable(&context->complete);
if (res) {
mlx5_fpga_warn(context->dev, "Failure waiting for IPSec command response\n");
return -EINTR;
}
if (context->status == MLX5_FPGA_IPSEC_SACMD_COMPLETE)
res = context->status_code;
else
res = -EIO;
kfree(context);
return res;
}
u32 mlx5_fpga_ipsec_device_caps(struct mlx5_core_dev *mdev)
{
struct mlx5_fpga_device *fdev = mdev->fpga;
u32 ret = 0;
if (mlx5_fpga_is_ipsec_device(mdev))
ret |= MLX5_ACCEL_IPSEC_DEVICE;
else
return ret;
if (!fdev->ipsec)
return ret;
if (MLX5_GET(ipsec_extended_cap, fdev->ipsec->caps, esp))
ret |= MLX5_ACCEL_IPSEC_ESP;
if (MLX5_GET(ipsec_extended_cap, fdev->ipsec->caps, ipv6))
ret |= MLX5_ACCEL_IPSEC_IPV6;
if (MLX5_GET(ipsec_extended_cap, fdev->ipsec->caps, lso))
ret |= MLX5_ACCEL_IPSEC_LSO;
return ret;
}
unsigned int mlx5_fpga_ipsec_counters_count(struct mlx5_core_dev *mdev)
{
struct mlx5_fpga_device *fdev = mdev->fpga;
if (!fdev || !fdev->ipsec)
return 0;
return MLX5_GET(ipsec_extended_cap, fdev->ipsec->caps,
number_of_ipsec_counters);
}
int mlx5_fpga_ipsec_counters_read(struct mlx5_core_dev *mdev, u64 *counters,
unsigned int counters_count)
{
struct mlx5_fpga_device *fdev = mdev->fpga;
unsigned int i;
u32 *data;
u32 count;
u64 addr;
int ret;
if (!fdev || !fdev->ipsec)
return 0;
addr = (u64)MLX5_GET(ipsec_extended_cap, fdev->ipsec->caps,
ipsec_counters_addr_low) +
((u64)MLX5_GET(ipsec_extended_cap, fdev->ipsec->caps,
ipsec_counters_addr_high) << 32);
count = mlx5_fpga_ipsec_counters_count(mdev);
data = kzalloc(sizeof(u32) * count * 2, GFP_KERNEL);
if (!data) {
ret = -ENOMEM;
goto out;
}
ret = mlx5_fpga_mem_read(fdev, count * sizeof(u64), addr, data,
MLX5_FPGA_ACCESS_TYPE_DONTCARE);
if (ret < 0) {
mlx5_fpga_err(fdev, "Failed to read IPSec counters from HW: %d\n",
ret);
goto out;
}
ret = 0;
if (count > counters_count)
count = counters_count;
/* Each counter is low word, then high. But each word is big-endian */
for (i = 0; i < count; i++)
counters[i] = (u64)ntohl(data[i * 2]) |
((u64)ntohl(data[i * 2 + 1]) << 32);
out:
kfree(data);
return ret;
}
int mlx5_fpga_ipsec_init(struct mlx5_core_dev *mdev)
{
struct mlx5_fpga_conn_attr init_attr = {0};
struct mlx5_fpga_device *fdev = mdev->fpga;
struct mlx5_fpga_conn *conn;
int err;
if (!mlx5_fpga_is_ipsec_device(mdev))
return 0;
fdev->ipsec = kzalloc(sizeof(*fdev->ipsec), GFP_KERNEL);
if (!fdev->ipsec)
return -ENOMEM;
err = mlx5_fpga_get_sbu_caps(fdev, sizeof(fdev->ipsec->caps),
fdev->ipsec->caps);
if (err) {
mlx5_fpga_err(fdev, "Failed to retrieve IPSec extended capabilities: %d\n",
err);
goto error;
}
INIT_LIST_HEAD(&fdev->ipsec->pending_cmds);
spin_lock_init(&fdev->ipsec->pending_cmds_lock);
init_attr.rx_size = SBU_QP_QUEUE_SIZE;
init_attr.tx_size = SBU_QP_QUEUE_SIZE;
init_attr.recv_cb = mlx5_fpga_ipsec_recv;
init_attr.cb_arg = fdev;
conn = mlx5_fpga_sbu_conn_create(fdev, &init_attr);
if (IS_ERR(conn)) {
err = PTR_ERR(conn);
mlx5_fpga_err(fdev, "Error creating IPSec command connection %d\n",
err);
goto error;
}
fdev->ipsec->conn = conn;
return 0;
error:
kfree(fdev->ipsec);
fdev->ipsec = NULL;
return err;
}
void mlx5_fpga_ipsec_cleanup(struct mlx5_core_dev *mdev)
{
struct mlx5_fpga_device *fdev = mdev->fpga;
if (!mlx5_fpga_is_ipsec_device(mdev))
return;
mlx5_fpga_sbu_conn_destroy(fdev->ipsec->conn);
kfree(fdev->ipsec);
fdev->ipsec = NULL;
}
/*
* Copyright (c) 2017 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*/
#ifndef __MLX5_FPGA_IPSEC_H__
#define __MLX5_FPGA_IPSEC_H__
#include "accel/ipsec.h"
#ifdef CONFIG_MLX5_FPGA
void *mlx5_fpga_ipsec_sa_cmd_exec(struct mlx5_core_dev *mdev,
struct mlx5_accel_ipsec_sa *cmd);
int mlx5_fpga_ipsec_sa_cmd_wait(void *context);
u32 mlx5_fpga_ipsec_device_caps(struct mlx5_core_dev *mdev);
unsigned int mlx5_fpga_ipsec_counters_count(struct mlx5_core_dev *mdev);
int mlx5_fpga_ipsec_counters_read(struct mlx5_core_dev *mdev, u64 *counters,
unsigned int counters_count);
int mlx5_fpga_ipsec_init(struct mlx5_core_dev *mdev);
void mlx5_fpga_ipsec_cleanup(struct mlx5_core_dev *mdev);
#else
static inline void *mlx5_fpga_ipsec_sa_cmd_exec(struct mlx5_core_dev *mdev,
struct mlx5_accel_ipsec_sa *cmd)
{
return ERR_PTR(-EOPNOTSUPP);
}
static inline int mlx5_fpga_ipsec_sa_cmd_wait(void *context)
{
return -EOPNOTSUPP;
}
static inline u32 mlx5_fpga_ipsec_device_caps(struct mlx5_core_dev *mdev)
{
return 0;
}
static inline unsigned int
mlx5_fpga_ipsec_counters_count(struct mlx5_core_dev *mdev)
{
return 0;
}
static inline int mlx5_fpga_ipsec_counters_read(struct mlx5_core_dev *mdev,
						u64 *counters,
						unsigned int counters_count)
{
return 0;
}
static inline int mlx5_fpga_ipsec_init(struct mlx5_core_dev *mdev)
{
return 0;
}
static inline void mlx5_fpga_ipsec_cleanup(struct mlx5_core_dev *mdev)
{
}
#endif /* CONFIG_MLX5_FPGA */
#endif /* __MLX5_FPGA_IPSEC_H__ */
/*
* Copyright (c) 2017 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*/
#include <linux/mlx5/device.h>
#include "fpga/core.h"
#include "fpga/conn.h"
#include "fpga/sdk.h"
struct mlx5_fpga_conn *
mlx5_fpga_sbu_conn_create(struct mlx5_fpga_device *fdev,
struct mlx5_fpga_conn_attr *attr)
{
return mlx5_fpga_conn_create(fdev, attr, MLX5_FPGA_QPC_QP_TYPE_SANDBOX_QP);
}
EXPORT_SYMBOL(mlx5_fpga_sbu_conn_create);
void mlx5_fpga_sbu_conn_destroy(struct mlx5_fpga_conn *conn)
{
mlx5_fpga_conn_destroy(conn);
}
EXPORT_SYMBOL(mlx5_fpga_sbu_conn_destroy);
int mlx5_fpga_sbu_conn_sendmsg(struct mlx5_fpga_conn *conn,
struct mlx5_fpga_dma_buf *buf)
{
return mlx5_fpga_conn_send(conn, buf);
}
EXPORT_SYMBOL(mlx5_fpga_sbu_conn_sendmsg);
static int mlx5_fpga_mem_read_i2c(struct mlx5_fpga_device *fdev, size_t size,
u64 addr, u8 *buf)
{
size_t max_size = MLX5_FPGA_ACCESS_REG_SIZE_MAX;
size_t bytes_done = 0;
u8 actual_size;
	int err = 0;
if (!fdev->mdev)
return -ENOTCONN;
while (bytes_done < size) {
actual_size = min(max_size, (size - bytes_done));
err = mlx5_fpga_access_reg(fdev->mdev, actual_size,
addr + bytes_done,
buf + bytes_done, false);
if (err) {
mlx5_fpga_err(fdev, "Failed to read over I2C: %d\n",
err);
break;
}
bytes_done += actual_size;
}
return err;
}
static int mlx5_fpga_mem_write_i2c(struct mlx5_fpga_device *fdev, size_t size,
u64 addr, u8 *buf)
{
size_t max_size = MLX5_FPGA_ACCESS_REG_SIZE_MAX;
size_t bytes_done = 0;
u8 actual_size;
	int err = 0;
if (!fdev->mdev)
return -ENOTCONN;
while (bytes_done < size) {
actual_size = min(max_size, (size - bytes_done));
err = mlx5_fpga_access_reg(fdev->mdev, actual_size,
addr + bytes_done,
buf + bytes_done, true);
if (err) {
mlx5_fpga_err(fdev, "Failed to write FPGA crspace\n");
break;
}
bytes_done += actual_size;
}
return err;
}
int mlx5_fpga_mem_read(struct mlx5_fpga_device *fdev, size_t size, u64 addr,
void *buf, enum mlx5_fpga_access_type access_type)
{
int ret;
switch (access_type) {
case MLX5_FPGA_ACCESS_TYPE_I2C:
ret = mlx5_fpga_mem_read_i2c(fdev, size, addr, buf);
if (ret)
return ret;
break;
default:
mlx5_fpga_warn(fdev, "Unexpected read access_type %u\n",
access_type);
return -EACCES;
}
return size;
}
EXPORT_SYMBOL(mlx5_fpga_mem_read);
int mlx5_fpga_mem_write(struct mlx5_fpga_device *fdev, size_t size, u64 addr,
void *buf, enum mlx5_fpga_access_type access_type)
{
int ret;
switch (access_type) {
case MLX5_FPGA_ACCESS_TYPE_I2C:
ret = mlx5_fpga_mem_write_i2c(fdev, size, addr, buf);
if (ret)
return ret;
break;
default:
mlx5_fpga_warn(fdev, "Unexpected write access_type %u\n",
access_type);
return -EACCES;
}
return size;
}
EXPORT_SYMBOL(mlx5_fpga_mem_write);
int mlx5_fpga_get_sbu_caps(struct mlx5_fpga_device *fdev, int size, void *buf)
{
return mlx5_fpga_sbu_caps(fdev->mdev, buf, size);
}
EXPORT_SYMBOL(mlx5_fpga_get_sbu_caps);
/*
* Copyright (c) 2017 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*/
#ifndef MLX5_FPGA_SDK_H
#define MLX5_FPGA_SDK_H
#include <linux/types.h>
#include <linux/dma-direction.h>
/**
* DOC: Innova SDK
* This header defines the in-kernel API for Innova FPGA client drivers.
*/
enum mlx5_fpga_access_type {
MLX5_FPGA_ACCESS_TYPE_I2C = 0x0,
MLX5_FPGA_ACCESS_TYPE_DONTCARE = 0x0,
};
struct mlx5_fpga_conn;
struct mlx5_fpga_device;
/**
* struct mlx5_fpga_dma_entry - A scatter-gather DMA entry
*/
struct mlx5_fpga_dma_entry {
/** @data: Virtual address pointer to the data */
void *data;
/** @size: Size in bytes of the data */
unsigned int size;
/** @dma_addr: Private member. Physical DMA-mapped address of the data */
dma_addr_t dma_addr;
};
/**
* struct mlx5_fpga_dma_buf - A packet buffer
* May contain up to 2 scatter-gather data entries
*/
struct mlx5_fpga_dma_buf {
/** @dma_dir: DMA direction */
enum dma_data_direction dma_dir;
/** @sg: Scatter-gather entries pointing to the data in memory */
struct mlx5_fpga_dma_entry sg[2];
/** @list: Item in SQ backlog, for TX packets */
struct list_head list;
/**
* @complete: Completion routine, for TX packets
* @conn: FPGA Connection this packet was sent to
* @fdev: FPGA device this packet was sent to
* @buf: The packet buffer
* @status: 0 if successful, or an error code otherwise
*/
void (*complete)(struct mlx5_fpga_conn *conn,
struct mlx5_fpga_device *fdev,
struct mlx5_fpga_dma_buf *buf, u8 status);
};
/**
* struct mlx5_fpga_conn_attr - FPGA connection attributes
* Describes the attributes of a connection
*/
struct mlx5_fpga_conn_attr {
/** @tx_size: Size of connection TX queue, in packets */
unsigned int tx_size;
/** @rx_size: Size of connection RX queue, in packets */
unsigned int rx_size;
/**
* @recv_cb: Callback function which is called for received packets
* @cb_arg: The value provided in mlx5_fpga_conn_attr.cb_arg
* @buf: A buffer containing a received packet
*
* buf is guaranteed to only contain a single scatter-gather entry.
* The size of the actual packet received is specified in buf.sg[0].size
* When this callback returns, the packet buffer may be re-used for
* subsequent receives.
*/
void (*recv_cb)(void *cb_arg, struct mlx5_fpga_dma_buf *buf);
void *cb_arg;
};
/**
* mlx5_fpga_sbu_conn_create() - Initialize a new FPGA SBU connection
* @fdev: The FPGA device
* @attr: Attributes of the new connection
*
* Sets up a new FPGA SBU connection with the specified attributes.
* The receive callback function may be called for incoming messages even
* before this function returns.
*
* The caller must eventually destroy the connection by calling
* mlx5_fpga_sbu_conn_destroy.
*
* Return: A new connection, or ERR_PTR() error value otherwise.
*/
struct mlx5_fpga_conn *
mlx5_fpga_sbu_conn_create(struct mlx5_fpga_device *fdev,
struct mlx5_fpga_conn_attr *attr);
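/* A minimal usage sketch, modelled on the IPSec SBU client; my_recv_cb
 * stands in for the client's receive callback:
 *
 *	struct mlx5_fpga_conn_attr attr = {
 *		.tx_size = 8,
 *		.rx_size = 8,
 *		.recv_cb = my_recv_cb,
 *		.cb_arg = fdev,
 *	};
 *	struct mlx5_fpga_conn *conn = mlx5_fpga_sbu_conn_create(fdev, &attr);
 *
 *	if (IS_ERR(conn))
 *		return PTR_ERR(conn);
 */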
/**
* mlx5_fpga_sbu_conn_destroy() - Destroy an FPGA SBU connection
* @conn: The FPGA SBU connection to destroy
*
* Cleans up an FPGA SBU connection which was previously created with
* mlx5_fpga_sbu_conn_create.
*/
void mlx5_fpga_sbu_conn_destroy(struct mlx5_fpga_conn *conn);
/**
* mlx5_fpga_sbu_conn_sendmsg() - Queue the transmission of a packet
* @conn: An FPGA SBU connection
* @buf: The packet buffer
*
* Queues a packet for transmission over an FPGA SBU connection.
* The buffer should not be modified or freed until completion.
* Upon completion, the buf's complete() callback is invoked, indicating the
* success or error status of the transmission.
*
* Return: 0 if successful, or an error value otherwise.
*/
int mlx5_fpga_sbu_conn_sendmsg(struct mlx5_fpga_conn *conn,
struct mlx5_fpga_dma_buf *buf);
/**
* mlx5_fpga_mem_read() - Read from FPGA memory address space
* @fdev: The FPGA device
* @size: Size of chunk to read, in bytes
* @addr: Starting address to read from, in FPGA address space
* @buf: Buffer to read into
* @access_type: Method for reading
*
* Reads from the specified address into the specified buffer.
* The address may point to configuration space or to DDR.
* Large reads may be performed internally as several non-atomic operations.
* This function may sleep, so should not be called from atomic contexts.
*
* Return: 0 if successful, or an error value otherwise.
*/
int mlx5_fpga_mem_read(struct mlx5_fpga_device *fdev, size_t size, u64 addr,
void *buf, enum mlx5_fpga_access_type access_type);
/**
* mlx5_fpga_mem_write() - Write to FPGA memory address space
* @fdev: The FPGA device
* @size: Size of chunk to write, in bytes
* @addr: Starting address to write to, in FPGA address space
* @buf: Buffer which contains data to write
* @access_type: Method for writing
*
* Writes the specified buffer data to FPGA memory at the specified address.
* The address may point to configuration space or to DDR.
* Large writes may be performed internally as several non-atomic operations.
* This function may sleep, so should not be called from atomic contexts.
*
* Return: 0 if successful, or an error value otherwise.
*/
int mlx5_fpga_mem_write(struct mlx5_fpga_device *fdev, size_t size, u64 addr,
void *buf, enum mlx5_fpga_access_type access_type);
/**
* mlx5_fpga_get_sbu_caps() - Read the SBU capabilities
* @fdev: The FPGA device
* @size: Size of the buffer to read into
* @buf: Buffer to read the capabilities into
*
* Reads the FPGA SBU capabilities into the specified buffer.
* The format of the capabilities buffer is SBU-dependent.
*
* Return: 0 if successful
* -EINVAL if the buffer is not large enough to contain SBU caps
* or any other error value otherwise.
*/
int mlx5_fpga_get_sbu_caps(struct mlx5_fpga_device *fdev, int size, void *buf);
#endif /* MLX5_FPGA_SDK_H */
/*
* Copyright (c) 2017, Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <linux/mlx5/driver.h>
#include <linux/etherdevice.h>
#include <linux/idr.h>
#include "mlx5_core.h"
void mlx5_init_reserved_gids(struct mlx5_core_dev *dev)
{
unsigned int tblsz = MLX5_CAP_ROCE(dev, roce_address_table_size);
ida_init(&dev->roce.reserved_gids.ida);
dev->roce.reserved_gids.start = tblsz;
dev->roce.reserved_gids.count = 0;
}
void mlx5_cleanup_reserved_gids(struct mlx5_core_dev *dev)
{
WARN_ON(!ida_is_empty(&dev->roce.reserved_gids.ida));
dev->roce.reserved_gids.start = 0;
dev->roce.reserved_gids.count = 0;
ida_destroy(&dev->roce.reserved_gids.ida);
}
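/* Reserved GIDs are carved out of the top of the HW GID table: each
 * reservation lowers 'start', and the IDA then hands out indexes in
 * [start, start + count). Reservations may only change while no
 * interfaces are up.
 */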
int mlx5_core_reserve_gids(struct mlx5_core_dev *dev, unsigned int count)
{
if (test_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state)) {
mlx5_core_err(dev, "Cannot reserve GIDs when interfaces are up\n");
return -EPERM;
}
if (dev->roce.reserved_gids.start < count) {
mlx5_core_warn(dev, "GID table exhausted attempting to reserve %d more GIDs\n",
count);
return -ENOMEM;
}
if (dev->roce.reserved_gids.count + count > MLX5_MAX_RESERVED_GIDS) {
mlx5_core_warn(dev, "Unable to reserve %d more GIDs\n", count);
return -ENOMEM;
}
dev->roce.reserved_gids.start -= count;
dev->roce.reserved_gids.count += count;
mlx5_core_dbg(dev, "Reserved %u GIDs starting at %u\n",
dev->roce.reserved_gids.count,
dev->roce.reserved_gids.start);
return 0;
}
void mlx5_core_unreserve_gids(struct mlx5_core_dev *dev, unsigned int count)
{
WARN(test_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state), "Unreserving GIDs when interfaces are up");
WARN(count > dev->roce.reserved_gids.count, "Unreserving %u GIDs when only %u reserved",
count, dev->roce.reserved_gids.count);
dev->roce.reserved_gids.start += count;
dev->roce.reserved_gids.count -= count;
mlx5_core_dbg(dev, "%u GIDs starting at %u left reserved\n",
dev->roce.reserved_gids.count,
dev->roce.reserved_gids.start);
}
int mlx5_core_reserved_gid_alloc(struct mlx5_core_dev *dev, int *gid_index)
{
int end = dev->roce.reserved_gids.start +
dev->roce.reserved_gids.count;
int index = 0;
index = ida_simple_get(&dev->roce.reserved_gids.ida,
dev->roce.reserved_gids.start, end,
GFP_KERNEL);
if (index < 0)
return index;
mlx5_core_dbg(dev, "Allodating reserved GID %u\n", index);
*gid_index = index;
return 0;
}
void mlx5_core_reserved_gid_free(struct mlx5_core_dev *dev, int gid_index)
{
mlx5_core_dbg(dev, "Freeing reserved GID %u\n", gid_index);
ida_simple_remove(&dev->roce.reserved_gids.ida, gid_index);
}
unsigned int mlx5_core_reserved_gids_count(struct mlx5_core_dev *dev)
{
return dev->roce.reserved_gids.count;
}
EXPORT_SYMBOL_GPL(mlx5_core_reserved_gids_count);
int mlx5_core_roce_gid_set(struct mlx5_core_dev *dev, unsigned int index,
u8 roce_version, u8 roce_l3_type, const u8 *gid,
const u8 *mac, bool vlan, u16 vlan_id)
{
#define MLX5_SET_RA(p, f, v) MLX5_SET(roce_addr_layout, p, f, v)
u32 in[MLX5_ST_SZ_DW(set_roce_address_in)] = {0};
u32 out[MLX5_ST_SZ_DW(set_roce_address_out)] = {0};
void *in_addr = MLX5_ADDR_OF(set_roce_address_in, in, roce_address);
char *addr_l3_addr = MLX5_ADDR_OF(roce_addr_layout, in_addr,
source_l3_address);
void *addr_mac = MLX5_ADDR_OF(roce_addr_layout, in_addr,
source_mac_47_32);
int gidsz = MLX5_FLD_SZ_BYTES(roce_addr_layout, source_l3_address);
if (MLX5_CAP_GEN(dev, port_type) != MLX5_CAP_PORT_TYPE_ETH)
return -EINVAL;
if (gid) {
if (vlan) {
MLX5_SET_RA(in_addr, vlan_valid, 1);
MLX5_SET_RA(in_addr, vlan_id, vlan_id);
}
ether_addr_copy(addr_mac, mac);
MLX5_SET_RA(in_addr, roce_version, roce_version);
MLX5_SET_RA(in_addr, roce_l3_type, roce_l3_type);
memcpy(addr_l3_addr, gid, gidsz);
}
MLX5_SET(set_roce_address_in, in, roce_address_index, index);
MLX5_SET(set_roce_address_in, in, opcode, MLX5_CMD_OP_SET_ROCE_ADDRESS);
return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
}
EXPORT_SYMBOL(mlx5_core_roce_gid_set);
/*
* Copyright (c) 2017, Mellanox Technologies, Ltd. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef __LIB_MLX5_H__
#define __LIB_MLX5_H__
void mlx5_init_reserved_gids(struct mlx5_core_dev *dev);
void mlx5_cleanup_reserved_gids(struct mlx5_core_dev *dev);
int mlx5_core_reserve_gids(struct mlx5_core_dev *dev, unsigned int count);
void mlx5_core_unreserve_gids(struct mlx5_core_dev *dev, unsigned int count);
int mlx5_core_reserved_gid_alloc(struct mlx5_core_dev *dev, int *gid_index);
void mlx5_core_reserved_gid_free(struct mlx5_core_dev *dev, int gid_index);
#endif
@@ -56,7 +56,9 @@
#ifdef CONFIG_MLX5_CORE_EN
#include "eswitch.h"
#endif
#include "lib/mlx5.h"
#include "fpga/core.h"
#include "accel/ipsec.h"
MODULE_AUTHOR("Eli Cohen <eli@mellanox.com>");
MODULE_DESCRIPTION("Mellanox Connect-IB, ConnectX-4 core driver");
@@ -936,6 +938,8 @@ static int mlx5_init_once(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
mlx5_init_mkey_table(dev);
mlx5_init_reserved_gids(dev);
err = mlx5_init_rl_table(dev);
if (err) {
dev_err(&pdev->dev, "Failed to init rate limiting\n");
@@ -956,8 +960,16 @@ static int mlx5_init_once(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
goto err_eswitch_cleanup;
}
err = mlx5_fpga_init(dev);
if (err) {
dev_err(&pdev->dev, "Failed to init fpga device %d\n", err);
goto err_sriov_cleanup;
}
return 0;
err_sriov_cleanup:
mlx5_sriov_cleanup(dev);
err_eswitch_cleanup:
#ifdef CONFIG_MLX5_CORE_EN
mlx5_eswitch_cleanup(dev->priv.eswitch);
@@ -981,11 +993,13 @@ static int mlx5_init_once(struct mlx5_core_dev *dev, struct mlx5_priv *priv)
static void mlx5_cleanup_once(struct mlx5_core_dev *dev)
{
mlx5_fpga_cleanup(dev);
mlx5_sriov_cleanup(dev);
#ifdef CONFIG_MLX5_CORE_EN
mlx5_eswitch_cleanup(dev->priv.eswitch);
#endif
mlx5_cleanup_rl_table(dev);
mlx5_cleanup_reserved_gids(dev);
mlx5_cleanup_mkey_table(dev);
mlx5_cleanup_srq_table(dev);
mlx5_cleanup_qp_table(dev);
@@ -1117,16 +1131,10 @@ static int mlx5_load_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv,
goto err_disable_msix;
}
-	err = mlx5_fpga_device_init(dev);
-	if (err) {
-		dev_err(&pdev->dev, "fpga device init failed %d\n", err);
-		goto err_put_uars;
-	}
err = mlx5_start_eqs(dev);
if (err) {
dev_err(&pdev->dev, "Failed to start pages and async EQs\n");
-		goto err_fpga_init;
+		goto err_put_uars;
}
err = alloc_comp_eqs(dev);
@@ -1160,7 +1168,12 @@ static int mlx5_load_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv,
err = mlx5_fpga_device_start(dev);
if (err) {
dev_err(&pdev->dev, "fpga device start failed %d\n", err);
-		goto err_reg_dev;
+		goto err_fpga_start;
}
err = mlx5_accel_ipsec_init(dev);
if (err) {
dev_err(&pdev->dev, "IPSec device start failed %d\n", err);
goto err_ipsec_start;
}
if (mlx5_device_registered(dev)) {
@@ -1181,6 +1194,11 @@ static int mlx5_load_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv,
return 0;
err_reg_dev:
mlx5_accel_ipsec_cleanup(dev);
err_ipsec_start:
mlx5_fpga_device_stop(dev);
err_fpga_start:
mlx5_sriov_detach(dev);
err_sriov:
@@ -1198,9 +1216,6 @@ static int mlx5_load_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv,
err_stop_eqs:
mlx5_stop_eqs(dev);
-err_fpga_init:
-	mlx5_fpga_device_cleanup(dev);
err_put_uars:
mlx5_put_uars_page(dev, priv->uar);
@@ -1254,9 +1269,15 @@ static int mlx5_unload_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv,
goto out;
}
clear_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state);
set_bit(MLX5_INTERFACE_STATE_DOWN, &dev->intf_state);
if (mlx5_device_registered(dev))
mlx5_detach_device(dev);
mlx5_accel_ipsec_cleanup(dev);
mlx5_fpga_device_stop(dev);
mlx5_sriov_detach(dev);
#ifdef CONFIG_MLX5_CORE_EN
mlx5_eswitch_detach(dev->priv.eswitch);
@@ -1265,7 +1286,6 @@ static int mlx5_unload_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv,
mlx5_irq_clear_affinity_hints(dev);
free_comp_eqs(dev);
mlx5_stop_eqs(dev);
-	mlx5_fpga_device_cleanup(dev);
mlx5_put_uars_page(dev, priv->uar);
mlx5_disable_msix(dev);
if (cleanup)
@@ -1282,8 +1302,6 @@ static int mlx5_unload_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv,
mlx5_cmd_cleanup(dev);
out:
-	clear_bit(MLX5_INTERFACE_STATE_UP, &dev->intf_state);
-	set_bit(MLX5_INTERFACE_STATE_DOWN, &dev->intf_state);
mutex_unlock(&dev->intf_state_mutex);
return err;
}
......
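The reordered load/unload paths move FPGA bring-up out of mlx5_load_one()'s early phase: mlx5_fpga_device_init()/mlx5_fpga_device_cleanup() now live in mlx5_init_once()/mlx5_cleanup_once(), while mlx5_fpga_device_start() and mlx5_accel_ipsec_init() run late in the load path, with mirrored stop/cleanup calls on unload and on every error-unwind label. A minimal sketch of that goto-unwind idiom, assuming each start has a matching stop (example_load() is a hypothetical name, not part of the patch):

/* Illustrative sketch only: the mirrored start/unwind ordering
 * used by mlx5_load_one(). Each label undoes exactly the steps
 * that already succeeded, in reverse order.
 */
static int example_load(struct mlx5_core_dev *dev)
{
	int err;

	err = mlx5_fpga_device_start(dev);
	if (err)
		return err;

	err = mlx5_accel_ipsec_init(dev);
	if (err)
		goto err_fpga_stop;

	return 0;

err_fpga_stop:
	mlx5_fpga_device_stop(dev);
	return err;
}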
......@@ -926,12 +926,16 @@ static int mlx5_nic_vport_update_roce_state(struct mlx5_core_dev *mdev,
int mlx5_nic_vport_enable_roce(struct mlx5_core_dev *mdev)
{
if (atomic_inc_return(&mdev->roce.roce_en) != 1)
return 0;
return mlx5_nic_vport_update_roce_state(mdev, MLX5_VPORT_ROCE_ENABLED);
}
EXPORT_SYMBOL_GPL(mlx5_nic_vport_enable_roce);
int mlx5_nic_vport_disable_roce(struct mlx5_core_dev *mdev)
{
if (atomic_dec_return(&mdev->roce.roce_en) != 0)
return 0;
return mlx5_nic_vport_update_roce_state(mdev, MLX5_VPORT_ROCE_DISABLED);
}
EXPORT_SYMBOL_GPL(mlx5_nic_vport_disable_roce);
......
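With the atomic roce_en counter, several consumers (for example the IB driver and the FPGA connection layer) can enable hardware RoCE independently: only the 0 -> 1 transition programs the vport state, and only the 1 -> 0 transition disables it again. A hedged sketch of a consumer keeping its calls balanced (the example_* names are hypothetical):

/* Sketch: each user pairs one enable with one disable; the vport
 * RoCE state changes only on the first enable / last disable.
 */
static int example_user_attach(struct mlx5_core_dev *mdev)
{
	return mlx5_nic_vport_enable_roce(mdev);
}

static void example_user_detach(struct mlx5_core_dev *mdev)
{
	mlx5_nic_vport_disable_roce(mdev);
}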
......@@ -54,6 +54,12 @@ static u32 mlx5_wq_cyc_get_byte_size(struct mlx5_wq_cyc *wq)
return mlx5_wq_cyc_get_size(wq) << wq->log_stride;
}
static u32 mlx5_wq_qp_get_byte_size(struct mlx5_wq_qp *wq)
{
return mlx5_wq_cyc_get_byte_size(&wq->rq) +
mlx5_wq_cyc_get_byte_size(&wq->sq);
}
static u32 mlx5_cqwq_get_byte_size(struct mlx5_cqwq *wq)
{
return mlx5_cqwq_get_size(wq) << wq->log_stride;
......@@ -99,6 +105,46 @@ int mlx5_wq_cyc_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
return err;
}
int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
void *qpc, struct mlx5_wq_qp *wq,
struct mlx5_wq_ctrl *wq_ctrl)
{
int err;
wq->rq.log_stride = MLX5_GET(qpc, qpc, log_rq_stride) + 4;
wq->rq.sz_m1 = (1 << MLX5_GET(qpc, qpc, log_rq_size)) - 1;
wq->sq.log_stride = ilog2(MLX5_SEND_WQE_BB);
wq->sq.sz_m1 = (1 << MLX5_GET(qpc, qpc, log_sq_size)) - 1;
err = mlx5_db_alloc_node(mdev, &wq_ctrl->db, param->db_numa_node);
if (err) {
mlx5_core_warn(mdev, "mlx5_db_alloc_node() failed, %d\n", err);
return err;
}
err = mlx5_buf_alloc_node(mdev, mlx5_wq_qp_get_byte_size(wq),
&wq_ctrl->buf, param->buf_numa_node);
if (err) {
mlx5_core_warn(mdev, "mlx5_buf_alloc_node() failed, %d\n", err);
goto err_db_free;
}
wq->rq.buf = wq_ctrl->buf.direct.buf;
wq->sq.buf = wq->rq.buf + mlx5_wq_cyc_get_byte_size(&wq->rq);
wq->rq.db = &wq_ctrl->db.db[MLX5_RCV_DBR];
wq->sq.db = &wq_ctrl->db.db[MLX5_SND_DBR];
wq_ctrl->mdev = mdev;
return 0;
err_db_free:
mlx5_db_free(mdev, &wq_ctrl->db);
return err;
}
int mlx5_cqwq_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
void *cqc, struct mlx5_cqwq *wq,
struct mlx5_frag_wq_ctrl *wq_ctrl)
......
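mlx5_wq_qp_create() places the RQ and SQ back to back in one contiguous buffer: the RQ stride is decoded from log_rq_stride + 4, the SQ stride is the fixed send WQE basic block, and sq.buf starts exactly mlx5_wq_cyc_get_byte_size(&wq->rq) bytes into the allocation. A hedged caller sketch; mdev and param are assumed to be set up elsewhere, and the QPC values are illustrative only:

/* Sketch: creating a QP work queue from a prepared QPC. */
u32 qpc[MLX5_ST_SZ_DW(qpc)] = {0};
struct mlx5_wq_qp qp_wq;
struct mlx5_wq_ctrl wq_ctrl;
int err;

MLX5_SET(qpc, qpc, log_rq_stride, 2);	/* RQ stride = 1 << (2 + 4) = 64B */
MLX5_SET(qpc, qpc, log_rq_size, 6);	/* 64 RQ entries */
MLX5_SET(qpc, qpc, log_sq_size, 6);	/* 64 SQ entries of MLX5_SEND_WQE_BB */

err = mlx5_wq_qp_create(mdev, &param, qpc, &qp_wq, &wq_ctrl);
if (err)
	return err;
/* qp_wq.rq.buf and qp_wq.sq.buf now point into one contiguous buffer */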
......@@ -34,6 +34,8 @@
#define __MLX5_WQ_H__
#include <linux/mlx5/mlx5_ifc.h>
#include <linux/mlx5/cq.h>
#include <linux/mlx5/qp.h>
struct mlx5_wq_param {
int linear;
......@@ -60,6 +62,11 @@ struct mlx5_wq_cyc {
u8 log_stride;
};
struct mlx5_wq_qp {
struct mlx5_wq_cyc rq;
struct mlx5_wq_cyc sq;
};
struct mlx5_cqwq {
struct mlx5_frag_buf frag_buf;
__be32 *db;
......@@ -87,6 +94,10 @@ int mlx5_wq_cyc_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
struct mlx5_wq_ctrl *wq_ctrl);
u32 mlx5_wq_cyc_get_size(struct mlx5_wq_cyc *wq);
int mlx5_wq_qp_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
void *qpc, struct mlx5_wq_qp *wq,
struct mlx5_wq_ctrl *wq_ctrl);
int mlx5_cqwq_create(struct mlx5_core_dev *mdev, struct mlx5_wq_param *param,
void *cqc, struct mlx5_cqwq *wq,
struct mlx5_frag_wq_ctrl *wq_ctrl);
......@@ -146,6 +157,22 @@ static inline void mlx5_cqwq_update_db_record(struct mlx5_cqwq *wq)
*wq->db = cpu_to_be32(wq->cc & 0xffffff);
}
static inline struct mlx5_cqe64 *mlx5_cqwq_get_cqe(struct mlx5_cqwq *wq)
{
u32 ci = mlx5_cqwq_get_ci(wq);
struct mlx5_cqe64 *cqe = mlx5_cqwq_get_wqe(wq, ci);
u8 cqe_ownership_bit = cqe->op_own & MLX5_CQE_OWNER_MASK;
u8 sw_ownership_val = mlx5_cqwq_get_wrap_cnt(wq) & 1;
if (cqe_ownership_bit != sw_ownership_val)
return NULL;
/* ensure cqe content is read after cqe ownership bit */
dma_rmb();
return cqe;
}
static inline int mlx5_wq_ll_is_full(struct mlx5_wq_ll *wq)
{
return wq->cur_sz == wq->sz_m1;
......
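mlx5_cqwq_get_cqe() implements the usual CQE ownership handshake: a CQE belongs to software only when its ownership bit matches the parity of the consumer counter's wrap count, and the dma_rmb() keeps reads of the CQE payload from being reordered before that check. A hedged polling-loop sketch; mlx5_cqwq_pop() and mlx5_cqwq_update_db_record() are the existing consumer-counter helpers in this header:

/* Sketch: draining up to 'budget' completions with the new helper. */
static int example_poll_cq(struct mlx5_cqwq *wq, int budget)
{
	struct mlx5_cqe64 *cqe;
	int polled = 0;

	while (polled < budget && (cqe = mlx5_cqwq_get_cqe(wq))) {
		mlx5_cqwq_pop(wq);	/* consume: advance consumer counter */
		/* ... handle the completion described by cqe ... */
		polled++;
	}

	mlx5_cqwq_update_db_record(wq);	/* publish new consumer counter */
	return polled;
}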
......@@ -1103,6 +1103,9 @@ enum mlx5_mcam_feature_groups {
#define MLX5_CAP_FPGA(mdev, cap) \
MLX5_GET(fpga_cap, (mdev)->caps.hca_cur[MLX5_CAP_FPGA], cap)
#define MLX5_CAP64_FPGA(mdev, cap) \
MLX5_GET64(fpga_cap, (mdev)->caps.hca_cur[MLX5_CAP_FPGA], cap)
enum {
MLX5_CMD_STAT_OK = 0x0,
MLX5_CMD_STAT_INT_ERR = 0x1,
......
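MLX5_CAP64_FPGA() mirrors the 32-bit accessor above it but pulls a 64-bit field from the cached FPGA capability page. A one-line usage sketch; the field name below is an assumption about the fpga_cap layout, not taken from this diff:

/* Sketch: sandbox_extended_caps_addr is an assumed field name. */
u64 ext_caps_addr = MLX5_CAP64_FPGA(mdev, sandbox_extended_caps_addr);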
......@@ -44,6 +44,7 @@
#include <linux/workqueue.h>
#include <linux/mempool.h>
#include <linux/interrupt.h>
#include <linux/idr.h>
#include <linux/mlx5/device.h>
#include <linux/mlx5/doorbell.h>
......@@ -110,6 +111,7 @@ enum {
MLX5_REG_DCBX_APP = 0x4021,
MLX5_REG_FPGA_CAP = 0x4022,
MLX5_REG_FPGA_CTRL = 0x4023,
MLX5_REG_FPGA_ACCESS_REG = 0x4024,
MLX5_REG_PCAP = 0x5001,
MLX5_REG_PMTU = 0x5003,
MLX5_REG_PTYS = 0x5004,
......@@ -737,6 +739,14 @@ struct mlx5e_resources {
struct mlx5_sq_bfreg bfreg;
};
#define MLX5_MAX_RESERVED_GIDS 8
struct mlx5_rsvd_gids {
unsigned int start;
unsigned int count;
struct ida ida;
};
struct mlx5_core_dev {
struct pci_dev *pdev;
/* sync pci state */
......@@ -766,6 +776,10 @@ struct mlx5_core_dev {
atomic_t num_qps;
u32 issi;
struct mlx5e_resources mlx5e_res;
struct {
struct mlx5_rsvd_gids reserved_gids;
atomic_t roce_en;
} roce;
#ifdef CONFIG_MLX5_FPGA
struct mlx5_fpga_device *fpga;
#endif
......@@ -1045,6 +1059,11 @@ int mlx5_alloc_bfreg(struct mlx5_core_dev *mdev, struct mlx5_sq_bfreg *bfreg,
bool map_wc, bool fast_path);
void mlx5_free_bfreg(struct mlx5_core_dev *mdev, struct mlx5_sq_bfreg *bfreg);
unsigned int mlx5_core_reserved_gids_count(struct mlx5_core_dev *dev);
int mlx5_core_roce_gid_set(struct mlx5_core_dev *dev, unsigned int index,
u8 roce_version, u8 roce_l3_type, const u8 *gid,
const u8 *mac, bool vlan, u16 vlan_id);
static inline int fw_initializing(struct mlx5_core_dev *dev)
{
return ioread32be(&dev->iseg->initializing) >> 31;
......
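The reserved-GID block sits at the top of the hardware GID table: start/count bound the reserved range, the ida hands out indices within it, and mlx5_core_roce_gid_set() programs an entry directly. A hedged sketch of programming one reserved entry; the MLX5_ROCE_VERSION_2 and MLX5_ROCE_L3_TYPE_IPV6 constants are assumptions about the companion enums, and example_set_gid() is a hypothetical wrapper:

/* Sketch: programming one reserved GID entry for RoCE v2 / IPv6. */
static int example_set_gid(struct mlx5_core_dev *dev, unsigned int index,
			   const u8 gid[16], const u8 mac[6])
{
	return mlx5_core_roce_gid_set(dev, index,
				      MLX5_ROCE_VERSION_2,	/* assumed enum */
				      MLX5_ROCE_L3_TYPE_IPV6,	/* assumed enum */
				      gid, mac, false, 0);
}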
......@@ -232,6 +232,11 @@ enum {
MLX5_CMD_OP_DEALLOC_ENCAP_HEADER = 0x93e,
MLX5_CMD_OP_ALLOC_MODIFY_HEADER_CONTEXT = 0x940,
MLX5_CMD_OP_DEALLOC_MODIFY_HEADER_CONTEXT = 0x941,
MLX5_CMD_OP_FPGA_CREATE_QP = 0x960,
MLX5_CMD_OP_FPGA_MODIFY_QP = 0x961,
MLX5_CMD_OP_FPGA_QUERY_QP = 0x962,
MLX5_CMD_OP_FPGA_DESTROY_QP = 0x963,
MLX5_CMD_OP_FPGA_QUERY_QP_COUNTERS = 0x964,
MLX5_CMD_OP_MAX
};
......@@ -600,7 +605,10 @@ struct mlx5_ifc_per_protocol_networking_offload_caps_bits {
u8 tunnel_statless_gre[0x1];
u8 tunnel_stateless_vxlan[0x1];
u8 reserved_at_20[0x20];
u8 swp[0x1];
u8 swp_csum[0x1];
u8 swp_lso[0x1];
u8 reserved_at_23[0x1d];
u8 reserved_at_40[0x10];
u8 lro_min_mss_size[0x10];
......@@ -2433,7 +2441,8 @@ struct mlx5_ifc_sqc_bits {
u8 min_wqe_inline_mode[0x3];
u8 state[0x4];
u8 reg_umr[0x1];
u8 reserved_at_d[0x13];
u8 allow_swp[0x1];
u8 reserved_at_e[0x12];
u8 reserved_at_20[0x8];
u8 user_index[0x18];
......@@ -8304,6 +8313,7 @@ union mlx5_ifc_ports_control_registers_document_bits {
struct mlx5_ifc_sltp_reg_bits sltp_reg;
struct mlx5_ifc_mtpps_reg_bits mtpps_reg;
struct mlx5_ifc_mtppse_reg_bits mtppse_reg;
struct mlx5_ifc_fpga_access_reg_bits fpga_access_reg;
struct mlx5_ifc_fpga_ctrl_bits fpga_ctrl_bits;
struct mlx5_ifc_fpga_cap_bits fpga_cap_bits;
struct mlx5_ifc_mcqi_reg_bits mcqi_reg;
......
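The new per-protocol offload bits advertise software-parser support (swp) plus its checksum and LSO variants, and the new allow_swp bit in the SQ context opts a send queue into using it. A hedged sketch gating SWP on the advertised caps; MLX5_CAP_ETH is the existing per-protocol-offload accessor, and sqc is assumed to point at an SQ context being built:

/* Sketch: enable allow_swp only when the device advertises SWP. */
bool use_swp = MLX5_CAP_ETH(mdev, swp) &&
	       MLX5_CAP_ETH(mdev, swp_csum) &&
	       MLX5_CAP_ETH(mdev, swp_lso);

if (use_swp)
	MLX5_SET(sqc, sqc, allow_swp, 1);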
(This file's diff is collapsed.)
......@@ -225,10 +225,20 @@ enum {
MLX5_ETH_WQE_INSERT_VLAN = 1 << 15,
};
enum {
MLX5_ETH_WQE_SWP_INNER_L3_IPV6 = 1 << 0,
MLX5_ETH_WQE_SWP_INNER_L4_UDP = 1 << 1,
MLX5_ETH_WQE_SWP_OUTER_L3_IPV6 = 1 << 4,
MLX5_ETH_WQE_SWP_OUTER_L4_UDP = 1 << 5,
};
struct mlx5_wqe_eth_seg {
u8 rsvd0[4];
u8 swp_outer_l4_offset;
u8 swp_outer_l3_offset;
u8 swp_inner_l4_offset;
u8 swp_inner_l3_offset;
u8 cs_flags;
u8 rsvd1;
u8 swp_flags;
__be16 mss;
__be32 rsvd2;
union {
......
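The software-parser fields in the Ethernet segment tell the hardware where the inner and outer L3/L4 headers start, since it cannot parse past an encrypted ESP payload on its own; following mlx5e usage, the offsets are in 2-byte units and swp_flags marks IPv6 and UDP headers. A hedged TX-path sketch for an ESP tunnel-mode packet with an IPv4 inner header (the helper name is hypothetical, and the skb offsets are assumed valid at this point):

/* Sketch: filling SWP offsets/flags on the WQE Ethernet segment.
 * All offsets are expressed in 2-byte units.
 */
static void example_set_swp(struct sk_buff *skb,
			    struct mlx5_wqe_eth_seg *eseg)
{
	eseg->swp_outer_l3_offset = skb_network_offset(skb) / 2;
	if (skb->protocol == htons(ETH_P_IPV6))
		eseg->swp_flags |= MLX5_ETH_WQE_SWP_OUTER_L3_IPV6;

	eseg->swp_inner_l3_offset = skb_inner_network_offset(skb) / 2;
	eseg->swp_inner_l4_offset = skb_inner_transport_offset(skb) / 2;
	if (inner_ip_hdr(skb)->protocol == IPPROTO_UDP)
		eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L4_UDP;
}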