Commit 1738cd3e authored by Netanel Belgazal, committed by David S. Miller

net: ena: Add a driver for Amazon Elastic Network Adapters (ENA)

This is a driver for the ENA family of networking devices.
Signed-off-by: Netanel Belgazal <netanel@annapurnalabs.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Parent 4330ea79
@@ -74,6 +74,8 @@ dns_resolver.txt
- The DNS resolver module allows kernel services to make DNS queries.
driver.txt
- Softnet driver issues.
ena.txt
- info on Amazon's Elastic Network Adapter (ENA)
e100.txt
- info on Intel's EtherExpress PRO/100 line of 10/100 boards
e1000.txt
......
Linux kernel driver for Elastic Network Adapter (ENA) family:
=============================================================
Overview:
=========
ENA is a networking interface designed to make good use of modern CPU
features and system architectures.
The ENA device exposes a lightweight management interface with a
minimal set of memory mapped registers and extendable command set
through an Admin Queue.
The driver supports a range of ENA devices, is link-speed independent
(i.e., the same driver is used for 10GbE, 25GbE, 40GbE, etc.), and has
a negotiated and extendable feature set.
Some ENA devices support SR-IOV. This driver is used for both the
SR-IOV Physical Function (PF) and Virtual Function (VF) devices.
ENA devices enable high speed and low overhead network traffic
processing by providing multiple Tx/Rx queue pairs (the maximum number
is advertised by the device via the Admin Queue), a dedicated MSI-X
interrupt vector per Tx/Rx queue pair, adaptive interrupt moderation,
and CPU cacheline optimized data placement.
The ENA driver supports industry standard TCP/IP offload features such
as checksum offload and TCP transmit segmentation offload (TSO).
Receive-side scaling (RSS) is supported for multi-core scaling.
The ENA driver and its corresponding devices implement health
monitoring mechanisms such as a watchdog, enabling the device and
driver to recover in a manner transparent to the application, as well
as debug logs.
Some of the ENA devices support a working mode called Low-latency
Queue (LLQ), which saves several more microseconds.
Supported PCI vendor ID/device IDs:
===================================
1d0f:0ec2 - ENA PF
1d0f:1ec2 - ENA PF with LLQ support
1d0f:ec20 - ENA VF
1d0f:ec21 - ENA VF with LLQ support
ENA Source Code Directory Structure:
====================================
ena_com.[ch]       - Management communication layer. This layer is
                     responsible for handling all the management
                     (admin) communication between the device and the
                     driver.
ena_eth_com.[ch] - Tx/Rx data path.
ena_admin_defs.h - Definition of ENA management interface.
ena_eth_io_defs.h - Definition of ENA data path interface.
ena_common_defs.h - Common definitions for ena_com layer.
ena_regs_defs.h - Definition of ENA PCI memory-mapped (MMIO) registers.
ena_netdev.[ch] - Main Linux kernel driver.
ena_sysfs.[ch]    - Sysfs files.
ena_ethtool.c - ethtool callbacks.
ena_pci_id_tbl.h - Supported device IDs.
Management Interface:
=====================
ENA management interface is exposed by means of:
- PCIe Configuration Space
- Device Registers
- Admin Queue (AQ) and Admin Completion Queue (ACQ)
- Asynchronous Event Notification Queue (AENQ)
ENA device MMIO Registers are accessed only during driver
initialization and are not involved in further normal device
operation.
AQ is used for submitting management commands, and the
results/responses are reported asynchronously through ACQ.
ENA introduces a very small set of management commands with room for
vendor-specific extensions. Most of the management operations are
framed in a generic Get/Set feature command.
The following admin queue commands are supported:
- Create I/O submission queue
- Create I/O completion queue
- Destroy I/O submission queue
- Destroy I/O completion queue
- Get feature
- Set feature
- Configure AENQ
- Get statistics
Refer to ena_admin_defs.h for the list of supported Get/Set Feature
properties.
The Asynchronous Event Notification Queue (AENQ) is a uni-directional
queue used by the ENA device to send to the driver events that cannot
be reported using ACQ. AENQ events are subdivided into groups. Each
group may have multiple syndromes, as shown below.
The events are:
    Group               Syndrome
    Link state change   - X -
    Fatal error         - X -
    Notification        Suspend traffic
    Notification        Resume traffic
    Keep-Alive          - X -
ACQ and AENQ share the same MSI-X vector.
Keep-Alive is a special mechanism that allows monitoring of the
device's health. The driver maintains a watchdog (WD) handler which,
if fired, logs the current state and statistics, then resets and
restarts the ENA device and driver. A Keep-Alive event is delivered by
the device every second. The driver re-arms the WD upon reception of a
Keep-Alive event. A missed Keep-Alive event causes the WD handler to
fire.
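A minimal sketch of this watchdog logic (hedged: the names are
illustrative and it assumes the modern kernel timer API, not the
driver's exact code):

    #include <linux/jiffies.h>
    #include <linux/timer.h>

    struct ena_wd {
            unsigned long last_keep_alive;  /* jiffies at last Keep-Alive event */
            struct timer_list timer;
    };

    /* Called from the AENQ handler on every Keep-Alive event */
    static void ena_wd_rearm(struct ena_wd *wd)
    {
            wd->last_keep_alive = jiffies;
    }

    /* Periodic check: fire the reset path when the stamp goes stale */
    static void ena_wd_check(struct timer_list *t)
    {
            struct ena_wd *wd = from_timer(wd, t, timer);

            if (time_is_before_jiffies(wd->last_keep_alive + 3 * HZ))
                    pr_err("ena: missed Keep-Alive events, resetting device\n");

            mod_timer(&wd->timer, jiffies + HZ);
    }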
Data Path Interface:
====================
I/O operations are based on Tx and Rx Submission Queues (Tx SQ and Rx
SQ, respectively). Each SQ has a completion queue (CQ) associated
with it.
The SQs and CQs are implemented as descriptor rings in contiguous
physical memory.
The ENA driver supports two Queue Operation modes for Tx SQs:
- Regular mode
* In this mode the Tx SQs reside in the host's memory. The ENA
device fetches the ENA Tx descriptors and packet data from host
memory.
- Low Latency Queue (LLQ) mode or "push-mode".
* In this mode the driver pushes the transmit descriptors and the
first 128 bytes of the packet directly to the ENA device memory
space. The rest of the packet payload is fetched by the
device. For this operation mode, the driver uses a dedicated PCI
device memory BAR, which is mapped with write-combine capability.
The Rx SQs support only the regular mode.
Note: Not all ENA devices support LLQ, and this feature is negotiated
with the device upon initialization. If the ENA device does not
support LLQ mode, the driver falls back to the regular mode.
The driver supports multi-queue for both Tx and Rx. This has various
benefits:
- Reduced CPU/thread/process contention on a given Ethernet interface.
- Cache miss rate on completion is reduced, particularly for data
cache lines that hold the sk_buff structures.
- Increased process-level parallelism when handling received packets.
- Increased data cache hit rate, by steering kernel processing of
  packets to the CPU where the application thread consuming the
  packet is running.
- In-hardware interrupt re-direction.
Interrupt Modes:
================
The driver assigns a single MSI-X vector per queue pair (for both Tx
and Rx directions). The driver assigns an additional dedicated MSI-X vector
for management (for ACQ and AENQ).
Management interrupt registration is performed when the Linux kernel
probes the adapter, and it is de-registered when the adapter is
removed. I/O queue interrupt registration is performed when the Linux
interface of the adapter is opened, and it is de-registered when the
interface is closed.
The management interrupt is named:
ena-mgmnt@pci:<PCI domain:bus:slot.function>
and for each queue pair, an interrupt is named:
<interface name>-Tx-Rx-<queue index>
The ENA device operates in auto-mask and auto-clear interrupt
modes. That is, once MSI-X is delivered to the host, its Cause bit is
automatically cleared and the interrupt is masked. The interrupt is
unmasked by the driver after NAPI processing is complete.
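A minimal sketch of this auto-mask flow (the helpers other than
napi_complete() are hypothetical; the actual unmask write is done by
ena_com_unmask_intr(), which appears later in this patch):

    static int ena_io_poll_sketch(struct napi_struct *napi, int budget)
    {
            /* hypothetical helper: clean the Tx/Rx rings up to budget */
            int work_done = ena_clean_queues(napi, budget);

            if (work_done < budget) {
                    napi_complete(napi);
                    /* unmask the queue's auto-masked MSI-X vector */
                    ena_unmask_queue_intr(napi);    /* hypothetical */
            }
            return work_done;
    }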
Interrupt Moderation:
=====================
ENA driver and device can operate in conventional or adaptive interrupt
moderation mode.
In conventional mode the driver instructs the device to postpone
interrupt posting according to a static interrupt delay value. The
interrupt delay value can be configured through ethtool(8). The
following ethtool parameters are supported by the driver: tx-usecs,
rx-usecs.
In adaptive interrupt moderation mode the interrupt delay value is
updated by the driver dynamically and adjusted every NAPI cycle
according to the traffic nature.
By default the ENA driver applies adaptive coalescing on Rx traffic
and conventional coalescing on Tx traffic.
Adaptive coalescing can be switched on/off through the ethtool(8)
adaptive_rx on|off parameter.
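For example, using standard ethtool(8) syntax (interface name
illustrative):
    ethtool -C eth1 adaptive-rx on
    ethtool -C eth1 tx-usecs 64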
The driver chooses the interrupt delay value according to the number
of bytes and packets received between interrupt unmasking and
interrupt posting. The driver uses an interrupt delay table that
subdivides the range of received bytes/packets into 5 levels and
assigns an interrupt delay value to each level.
The user can enable/disable adaptive moderation, modify the interrupt
delay table and restore its default values through sysfs.
The rx_copybreak is initialized by default to ENA_DEFAULT_RX_COPYBREAK
and can be configured by the ETHTOOL_STUNABLE command of the
SIOCETHTOOL ioctl.
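For example, recent ethtool(8) versions expose this tunable directly
(interface name illustrative):
    ethtool --set-tunable eth1 rx-copybreak 256
    ethtool --get-tunable eth1 rx-copybreak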
SKB:
The driver allocates an SKB for frames received in the Rx NAPI
context. The allocation method depends on the size of the packet: if
the frame length is larger than rx_copybreak, napi_get_frags() is
used; otherwise netdev_alloc_skb_ip_align() is used, the buffer
content is copied (by CPU) to the SKB, and the buffer is recycled.
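A sketch of that allocation decision (function and parameter names
are illustrative; napi_get_frags() and netdev_alloc_skb_ip_align()
are the real kernel APIs):

    static struct sk_buff *ena_alloc_rx_skb_sketch(struct napi_struct *napi,
                                                   struct net_device *netdev,
                                                   void *buf, u16 len,
                                                   u16 rx_copybreak)
    {
            struct sk_buff *skb;

            if (len > rx_copybreak)
                    return napi_get_frags(napi); /* buffer will be hooked as a frag */

            skb = netdev_alloc_skb_ip_align(netdev, len);
            if (skb)
                    memcpy(skb_put(skb, len), buf, len); /* copy; buffer is recycled */
            return skb;
    }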
Statistics:
===========
The user can obtain ENA device and driver statistics using ethtool.
The driver can collect regular or extended statistics (including
per-queue stats) from the device.
In addition the driver logs the stats to syslog upon device reset.
MTU:
====
The driver supports an arbitrarily large MTU with a maximum that is
negotiated with the device. The driver configures MTU using the
SetFeature command (ENA_ADMIN_MTU property). The user can change MTU
via ip(8) and similar legacy tools.
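For example, using ip(8) (device name and value illustrative):
    ip link set dev eth1 mtu 9001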
Stateless Offloads:
===================
The ENA driver supports:
- TSO over IPv4/IPv6
- TSO with ECN
- IPv4 header checksum offload
- TCP/UDP over IPv4/IPv6 checksum offloads
RSS:
====
- The ENA device supports RSS that allows flexible Rx traffic
steering.
- Toeplitz and CRC32 hash functions are supported.
- Different combinations of L2/L3/L4 fields can be configured as
inputs for hash functions.
- The driver configures RSS settings using the AQ SetFeature command
(ENA_ADMIN_RSS_HASH_FUNCTION, ENA_ADMIN_RSS_HASH_INPUT and
ENA_ADMIN_RSS_REDIRECTION_TABLE_CONFIG properties).
- If the NETIF_F_RXHASH flag is set, the 32-bit result of the hash
function delivered in the Rx CQ descriptor is set in the received
SKB.
- The user can provide a hash key, hash function, and configure the
indirection table through ethtool(8).
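For example (interface name illustrative; exact option support
depends on the ethtool version):
    ethtool -x eth1                 # show hash key and indirection table
    ethtool -X eth1 hfunc toeplitz  # select the Toeplitz hash function
    ethtool -X eth1 equal 8         # spread the table over 8 queues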
DATA PATH:
==========
Tx:
---
ena_start_xmit() is called by the stack. This function does the following:
- Maps data buffers (skb->data and frags).
- Populates ena_buf for the push buffer (if the driver and device are
in push mode.)
- Prepares ENA bufs for the remaining frags.
- Allocates a new request ID from the empty req_id ring. The request
  ID is the index of the packet in the Tx info. This is used for
  out-of-order TX completions (see the sketch after this list).
- Adds the packet to the proper place in the Tx ring.
- Calls ena_com_prepare_tx(), an ENA communication layer function
  that converts the ena_bufs to ENA descriptors (and adds meta ENA
  descriptors as needed.)
* This function also copies the ENA descriptors and the push buffer
to the Device memory space (if in push mode.)
- Writes doorbell to the ENA device.
- When the ENA device finishes sending the packet, a completion
interrupt is raised.
- The interrupt handler schedules NAPI.
- The ena_clean_tx_irq() function is called. This function handles the
completion descriptors generated by the ENA, with a single
completion descriptor per completed packet.
* req_id is retrieved from the completion descriptor. The tx_info of
the packet is retrieved via the req_id. The data buffers are
unmapped and req_id is returned to the empty req_id ring.
* The function stops when the completion descriptors are completed or
the budget is reached.
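A minimal sketch of the free-req_id ring mentioned above (all names
illustrative; the caller must check that a free ID is available
before allocating):

    struct tx_reqid_ring {
            u16 *free_ids;          /* preloaded with IDs 0..size-1 */
            u16 next_to_use;        /* allocation cursor (ena_start_xmit) */
            u16 next_to_clean;      /* release cursor (ena_clean_tx_irq) */
            u16 mask;               /* size - 1; size is a power of 2 */
    };

    static u16 tx_reqid_alloc(struct tx_reqid_ring *r)
    {
            /* the returned ID indexes the per-queue tx_info array */
            return r->free_ids[r->next_to_use++ & r->mask];
    }

    static void tx_reqid_free(struct tx_reqid_ring *r, u16 id)
    {
            /* completions may arrive out of order; the ring simply
             * re-collects the IDs in completion order
             */
            r->free_ids[r->next_to_clean++ & r->mask] = id;
    }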
Rx:
---
- When a packet is received from the ENA device.
- The interrupt handler schedules NAPI.
- The ena_clean_rx_irq() function is called. This function calls
ena_rx_pkt(), an ENA communication layer function, which returns the
number of descriptors used for a new unhandled packet, and zero if
no new packet is found.
- ena_eth_rx_skb() is then called; it checks the packet length:
* If the packet is small (len < rx_copybreak), the driver allocates
a SKB for the new packet, and copies the packet payload into the
SKB data buffer.
- In this way the original data buffer is not passed to the stack
and is reused for future Rx packets.
* Otherwise the function unmaps the Rx buffer, then allocates the
new SKB structure and hooks the Rx buffer to the SKB frags.
- The new SKB is updated with the necessary information (protocol,
checksum hw verify result, etc.), and then passed to the network
stack, using the NAPI interface function napi_gro_receive().
@@ -636,6 +636,15 @@ F: drivers/tty/serial/altera_jtaguart.c
F: include/linux/altera_uart.h
F: include/linux/altera_jtaguart.h
AMAZON ETHERNET DRIVERS
M: Netanel Belgazal <netanel@annapurnalabs.com>
R: Saeed Bishara <saeed@annapurnalabs.com>
R: Zorik Machulsky <zorik@annapurnalabs.com>
L: netdev@vger.kernel.org
S: Supported
F: Documentation/networking/ena.txt
F: drivers/net/ethernet/amazon/
AMD CRYPTOGRAPHIC COPROCESSOR (CCP) DRIVER
M: Tom Lendacky <thomas.lendacky@amd.com>
M: Gary Hook <gary.hook@amd.com>
......
@@ -24,6 +24,7 @@ source "drivers/net/ethernet/agere/Kconfig"
source "drivers/net/ethernet/allwinner/Kconfig"
source "drivers/net/ethernet/alteon/Kconfig"
source "drivers/net/ethernet/altera/Kconfig"
source "drivers/net/ethernet/amazon/Kconfig"
source "drivers/net/ethernet/amd/Kconfig"
source "drivers/net/ethernet/apm/Kconfig"
source "drivers/net/ethernet/apple/Kconfig"
......
@@ -10,6 +10,7 @@ obj-$(CONFIG_NET_VENDOR_AGERE) += agere/
obj-$(CONFIG_NET_VENDOR_ALLWINNER) += allwinner/
obj-$(CONFIG_NET_VENDOR_ALTEON) += alteon/
obj-$(CONFIG_ALTERA_TSE) += altera/
obj-$(CONFIG_NET_VENDOR_AMAZON) += amazon/
obj-$(CONFIG_NET_VENDOR_AMD) += amd/
obj-$(CONFIG_NET_XGENE) += apm/
obj-$(CONFIG_NET_VENDOR_APPLE) += apple/
......
#
# Amazon network device configuration
#
config NET_VENDOR_AMAZON
	bool "Amazon Devices"
	default y
	---help---
	  If you have a network (Ethernet) device belonging to this
	  class, say Y.

	  Note that the answer to this question doesn't directly affect
	  the kernel: saying N will just cause the configurator to skip
	  all the questions about Amazon devices. If you say Y, you will
	  be asked for your specific device in the following questions.

if NET_VENDOR_AMAZON

config ENA_ETHERNET
	tristate "Elastic Network Adapter (ENA) support"
	depends on (PCI_MSI && X86)
	---help---
	  This driver supports the Elastic Network Adapter (ENA).

	  To compile this driver as a module, choose M here.
	  The module will be called ena.

endif #NET_VENDOR_AMAZON
#
# Makefile for the Amazon network device drivers.
#
obj-$(CONFIG_ENA_ETHERNET) += ena/
#
# Makefile for the Elastic Network Adapter (ENA) device drivers.
#
obj-$(CONFIG_ENA_ETHERNET) += ena.o
ena-y := ena_netdev.o ena_com.o ena_eth_com.o ena_ethtool.o
/*
* Copyright 2015 - 2016 Amazon.com, Inc. or its affiliates.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef _ENA_COMMON_H_
#define _ENA_COMMON_H_
#define ENA_COMMON_SPEC_VERSION_MAJOR 0 /* */
#define ENA_COMMON_SPEC_VERSION_MINOR 10 /* */
/* ENA operates with 48-bit memory addresses. ena_mem_addr_t */
struct ena_common_mem_addr {
u32 mem_addr_low;
u16 mem_addr_high;
/* MBZ */
u16 reserved16;
};
#endif /*_ENA_COMMON_H_ */
/*
* Copyright 2015 Amazon.com, Inc. or its affiliates.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include "ena_eth_com.h"
static inline struct ena_eth_io_rx_cdesc_base *ena_com_get_next_rx_cdesc(
struct ena_com_io_cq *io_cq)
{
struct ena_eth_io_rx_cdesc_base *cdesc;
u16 expected_phase, head_masked;
u16 desc_phase;
head_masked = io_cq->head & (io_cq->q_depth - 1);
expected_phase = io_cq->phase;
cdesc = (struct ena_eth_io_rx_cdesc_base *)(io_cq->cdesc_addr.virt_addr
+ (head_masked * io_cq->cdesc_entry_size_in_bytes));
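	/* The expected phase bit flips on every ring wrap-around; a
	 * descriptor whose phase differs from the expected one has not
	 * been written by the device yet.
	 */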
desc_phase = (cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_PHASE_MASK) >>
ENA_ETH_IO_RX_CDESC_BASE_PHASE_SHIFT;
if (desc_phase != expected_phase)
return NULL;
return cdesc;
}
static inline void ena_com_cq_inc_head(struct ena_com_io_cq *io_cq)
{
io_cq->head++;
/* Switch phase bit in case of wrap around */
if (unlikely((io_cq->head & (io_cq->q_depth - 1)) == 0))
io_cq->phase ^= 1;
}
static inline void *get_sq_desc(struct ena_com_io_sq *io_sq)
{
u16 tail_masked;
u32 offset;
tail_masked = io_sq->tail & (io_sq->q_depth - 1);
offset = tail_masked * io_sq->desc_entry_size;
return (void *)((uintptr_t)io_sq->desc_addr.virt_addr + offset);
}
static inline void ena_com_copy_curr_sq_desc_to_dev(struct ena_com_io_sq *io_sq)
{
u16 tail_masked = io_sq->tail & (io_sq->q_depth - 1);
u32 offset = tail_masked * io_sq->desc_entry_size;
/* In case this queue isn't a LLQ */
if (io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_HOST)
return;
memcpy_toio(io_sq->desc_addr.pbuf_dev_addr + offset,
io_sq->desc_addr.virt_addr + offset,
io_sq->desc_entry_size);
}
static inline void ena_com_sq_update_tail(struct ena_com_io_sq *io_sq)
{
io_sq->tail++;
/* Switch phase bit in case of wrap around */
if (unlikely((io_sq->tail & (io_sq->q_depth - 1)) == 0))
io_sq->phase ^= 1;
}
static inline int ena_com_write_header(struct ena_com_io_sq *io_sq,
u8 *head_src, u16 header_len)
{
u16 tail_masked = io_sq->tail & (io_sq->q_depth - 1);
u8 __iomem *dev_head_addr =
io_sq->header_addr + (tail_masked * io_sq->tx_max_header_size);
if (io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_HOST)
return 0;
if (unlikely(!io_sq->header_addr)) {
pr_err("Push buffer header ptr is NULL\n");
return -EINVAL;
}
memcpy_toio(dev_head_addr, head_src, header_len);
return 0;
}
static inline struct ena_eth_io_rx_cdesc_base *
ena_com_rx_cdesc_idx_to_ptr(struct ena_com_io_cq *io_cq, u16 idx)
{
idx &= (io_cq->q_depth - 1);
return (struct ena_eth_io_rx_cdesc_base *)
((uintptr_t)io_cq->cdesc_addr.virt_addr +
idx * io_cq->cdesc_entry_size_in_bytes);
}
static inline u16 ena_com_cdesc_rx_pkt_get(struct ena_com_io_cq *io_cq,
u16 *first_cdesc_idx)
{
struct ena_eth_io_rx_cdesc_base *cdesc;
u16 count = 0, head_masked;
u32 last = 0;
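	/* Walk the new completion descriptors until one carrying the
	 * LAST bit is found; partially received packets are accumulated
	 * across calls in cur_rx_pkt_cdesc_count/start_idx.
	 */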
do {
cdesc = ena_com_get_next_rx_cdesc(io_cq);
if (!cdesc)
break;
ena_com_cq_inc_head(io_cq);
count++;
last = (cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_LAST_MASK) >>
ENA_ETH_IO_RX_CDESC_BASE_LAST_SHIFT;
} while (!last);
if (last) {
*first_cdesc_idx = io_cq->cur_rx_pkt_cdesc_start_idx;
count += io_cq->cur_rx_pkt_cdesc_count;
head_masked = io_cq->head & (io_cq->q_depth - 1);
io_cq->cur_rx_pkt_cdesc_count = 0;
io_cq->cur_rx_pkt_cdesc_start_idx = head_masked;
pr_debug("ena q_id: %d packets were completed. first desc idx %u descs# %d\n",
io_cq->qid, *first_cdesc_idx, count);
} else {
io_cq->cur_rx_pkt_cdesc_count += count;
count = 0;
}
return count;
}
static inline bool ena_com_meta_desc_changed(struct ena_com_io_sq *io_sq,
struct ena_com_tx_ctx *ena_tx_ctx)
{
int rc;
if (ena_tx_ctx->meta_valid) {
rc = memcmp(&io_sq->cached_tx_meta,
&ena_tx_ctx->ena_meta,
sizeof(struct ena_com_tx_meta));
if (unlikely(rc != 0))
return true;
}
return false;
}
static inline void ena_com_create_and_store_tx_meta_desc(struct ena_com_io_sq *io_sq,
struct ena_com_tx_ctx *ena_tx_ctx)
{
struct ena_eth_io_tx_meta_desc *meta_desc = NULL;
struct ena_com_tx_meta *ena_meta = &ena_tx_ctx->ena_meta;
meta_desc = get_sq_desc(io_sq);
memset(meta_desc, 0x0, sizeof(struct ena_eth_io_tx_meta_desc));
meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_META_DESC_MASK;
meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_EXT_VALID_MASK;
/* bits 0-9 of the mss */
meta_desc->word2 |= (ena_meta->mss <<
ENA_ETH_IO_TX_META_DESC_MSS_LO_SHIFT) &
ENA_ETH_IO_TX_META_DESC_MSS_LO_MASK;
/* bits 10-13 of the mss */
meta_desc->len_ctrl |= ((ena_meta->mss >> 10) <<
ENA_ETH_IO_TX_META_DESC_MSS_HI_SHIFT) &
ENA_ETH_IO_TX_META_DESC_MSS_HI_MASK;
/* Extended meta desc */
meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_MASK;
meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_META_STORE_MASK;
meta_desc->len_ctrl |= (io_sq->phase <<
ENA_ETH_IO_TX_META_DESC_PHASE_SHIFT) &
ENA_ETH_IO_TX_META_DESC_PHASE_MASK;
meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_FIRST_MASK;
meta_desc->word2 |= ena_meta->l3_hdr_len &
ENA_ETH_IO_TX_META_DESC_L3_HDR_LEN_MASK;
meta_desc->word2 |= (ena_meta->l3_hdr_offset <<
ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_SHIFT) &
ENA_ETH_IO_TX_META_DESC_L3_HDR_OFF_MASK;
meta_desc->word2 |= (ena_meta->l4_hdr_len <<
ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_SHIFT) &
ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_MASK;
meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_META_STORE_MASK;
/* Cache the meta desc */
memcpy(&io_sq->cached_tx_meta, ena_meta,
sizeof(struct ena_com_tx_meta));
ena_com_copy_curr_sq_desc_to_dev(io_sq);
ena_com_sq_update_tail(io_sq);
}
static inline void ena_com_rx_set_flags(struct ena_com_rx_ctx *ena_rx_ctx,
struct ena_eth_io_rx_cdesc_base *cdesc)
{
ena_rx_ctx->l3_proto = cdesc->status &
ENA_ETH_IO_RX_CDESC_BASE_L3_PROTO_IDX_MASK;
ena_rx_ctx->l4_proto =
(cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_MASK) >>
ENA_ETH_IO_RX_CDESC_BASE_L4_PROTO_IDX_SHIFT;
ena_rx_ctx->l3_csum_err =
(cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_MASK) >>
ENA_ETH_IO_RX_CDESC_BASE_L3_CSUM_ERR_SHIFT;
ena_rx_ctx->l4_csum_err =
(cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_MASK) >>
ENA_ETH_IO_RX_CDESC_BASE_L4_CSUM_ERR_SHIFT;
ena_rx_ctx->hash = cdesc->hash;
ena_rx_ctx->frag =
(cdesc->status & ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_MASK) >>
ENA_ETH_IO_RX_CDESC_BASE_IPV4_FRAG_SHIFT;
pr_debug("ena_rx_ctx->l3_proto %d ena_rx_ctx->l4_proto %d\nena_rx_ctx->l3_csum_err %d ena_rx_ctx->l4_csum_err %d\nhash frag %d frag: %d cdesc_status: %x\n",
ena_rx_ctx->l3_proto, ena_rx_ctx->l4_proto,
ena_rx_ctx->l3_csum_err, ena_rx_ctx->l4_csum_err,
ena_rx_ctx->hash, ena_rx_ctx->frag, cdesc->status);
}
/*****************************************************************************/
/***************************** API **********************************/
/*****************************************************************************/
int ena_com_prepare_tx(struct ena_com_io_sq *io_sq,
struct ena_com_tx_ctx *ena_tx_ctx,
int *nb_hw_desc)
{
struct ena_eth_io_tx_desc *desc = NULL;
struct ena_com_buf *ena_bufs = ena_tx_ctx->ena_bufs;
void *push_header = ena_tx_ctx->push_header;
u16 header_len = ena_tx_ctx->header_len;
u16 num_bufs = ena_tx_ctx->num_bufs;
int total_desc, i, rc;
bool have_meta;
u64 addr_hi;
WARN(io_sq->direction != ENA_COM_IO_QUEUE_DIRECTION_TX, "wrong Q type");
/* num_bufs +1 for potential meta desc */
if (ena_com_sq_empty_space(io_sq) < (num_bufs + 1)) {
pr_err("Not enough space in the tx queue\n");
return -ENOMEM;
}
if (unlikely(header_len > io_sq->tx_max_header_size)) {
pr_err("header size is too large %d max header: %d\n",
header_len, io_sq->tx_max_header_size);
return -EINVAL;
}
/* start with pushing the header (if needed) */
rc = ena_com_write_header(io_sq, push_header, header_len);
if (unlikely(rc))
return rc;
have_meta = ena_tx_ctx->meta_valid && ena_com_meta_desc_changed(io_sq,
ena_tx_ctx);
if (have_meta)
ena_com_create_and_store_tx_meta_desc(io_sq, ena_tx_ctx);
/* If the caller doesn't want to send packets */
if (unlikely(!num_bufs && !header_len)) {
*nb_hw_desc = have_meta ? 0 : 1;
return 0;
}
desc = get_sq_desc(io_sq);
memset(desc, 0x0, sizeof(struct ena_eth_io_tx_desc));
/* Set first desc when we don't have meta descriptor */
if (!have_meta)
desc->len_ctrl |= ENA_ETH_IO_TX_DESC_FIRST_MASK;
desc->buff_addr_hi_hdr_sz |= (header_len <<
ENA_ETH_IO_TX_DESC_HEADER_LENGTH_SHIFT) &
ENA_ETH_IO_TX_DESC_HEADER_LENGTH_MASK;
desc->len_ctrl |= (io_sq->phase << ENA_ETH_IO_TX_DESC_PHASE_SHIFT) &
ENA_ETH_IO_TX_DESC_PHASE_MASK;
desc->len_ctrl |= ENA_ETH_IO_TX_DESC_COMP_REQ_MASK;
/* Bits 0-9 */
desc->meta_ctrl |= (ena_tx_ctx->req_id <<
ENA_ETH_IO_TX_DESC_REQ_ID_LO_SHIFT) &
ENA_ETH_IO_TX_DESC_REQ_ID_LO_MASK;
desc->meta_ctrl |= (ena_tx_ctx->df <<
ENA_ETH_IO_TX_DESC_DF_SHIFT) &
ENA_ETH_IO_TX_DESC_DF_MASK;
/* Bits 10-15 */
desc->len_ctrl |= ((ena_tx_ctx->req_id >> 10) <<
ENA_ETH_IO_TX_DESC_REQ_ID_HI_SHIFT) &
ENA_ETH_IO_TX_DESC_REQ_ID_HI_MASK;
if (ena_tx_ctx->meta_valid) {
desc->meta_ctrl |= (ena_tx_ctx->tso_enable <<
ENA_ETH_IO_TX_DESC_TSO_EN_SHIFT) &
ENA_ETH_IO_TX_DESC_TSO_EN_MASK;
desc->meta_ctrl |= ena_tx_ctx->l3_proto &
ENA_ETH_IO_TX_DESC_L3_PROTO_IDX_MASK;
desc->meta_ctrl |= (ena_tx_ctx->l4_proto <<
ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_SHIFT) &
ENA_ETH_IO_TX_DESC_L4_PROTO_IDX_MASK;
desc->meta_ctrl |= (ena_tx_ctx->l3_csum_enable <<
ENA_ETH_IO_TX_DESC_L3_CSUM_EN_SHIFT) &
ENA_ETH_IO_TX_DESC_L3_CSUM_EN_MASK;
desc->meta_ctrl |= (ena_tx_ctx->l4_csum_enable <<
ENA_ETH_IO_TX_DESC_L4_CSUM_EN_SHIFT) &
ENA_ETH_IO_TX_DESC_L4_CSUM_EN_MASK;
desc->meta_ctrl |= (ena_tx_ctx->l4_csum_partial <<
ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_SHIFT) &
ENA_ETH_IO_TX_DESC_L4_CSUM_PARTIAL_MASK;
}
for (i = 0; i < num_bufs; i++) {
/* The first desc shares the same desc as the header */
if (likely(i != 0)) {
ena_com_copy_curr_sq_desc_to_dev(io_sq);
ena_com_sq_update_tail(io_sq);
desc = get_sq_desc(io_sq);
memset(desc, 0x0, sizeof(struct ena_eth_io_tx_desc));
desc->len_ctrl |= (io_sq->phase <<
ENA_ETH_IO_TX_DESC_PHASE_SHIFT) &
ENA_ETH_IO_TX_DESC_PHASE_MASK;
}
desc->len_ctrl |= ena_bufs->len &
ENA_ETH_IO_TX_DESC_LENGTH_MASK;
addr_hi = ((ena_bufs->paddr &
GENMASK_ULL(io_sq->dma_addr_bits - 1, 32)) >> 32);
desc->buff_addr_lo = (u32)ena_bufs->paddr;
desc->buff_addr_hi_hdr_sz |= addr_hi &
ENA_ETH_IO_TX_DESC_ADDR_HI_MASK;
ena_bufs++;
}
/* set the last desc indicator */
desc->len_ctrl |= ENA_ETH_IO_TX_DESC_LAST_MASK;
ena_com_copy_curr_sq_desc_to_dev(io_sq);
ena_com_sq_update_tail(io_sq);
total_desc = max_t(u16, num_bufs, 1);
total_desc += have_meta ? 1 : 0;
*nb_hw_desc = total_desc;
return 0;
}
int ena_com_rx_pkt(struct ena_com_io_cq *io_cq,
struct ena_com_io_sq *io_sq,
struct ena_com_rx_ctx *ena_rx_ctx)
{
struct ena_com_rx_buf_info *ena_buf = &ena_rx_ctx->ena_bufs[0];
struct ena_eth_io_rx_cdesc_base *cdesc = NULL;
u16 cdesc_idx = 0;
u16 nb_hw_desc;
u16 i;
WARN(io_cq->direction != ENA_COM_IO_QUEUE_DIRECTION_RX, "wrong Q type");
nb_hw_desc = ena_com_cdesc_rx_pkt_get(io_cq, &cdesc_idx);
if (nb_hw_desc == 0) {
ena_rx_ctx->descs = nb_hw_desc;
return 0;
}
pr_debug("fetch rx packet: queue %d completed desc: %d\n", io_cq->qid,
nb_hw_desc);
if (unlikely(nb_hw_desc > ena_rx_ctx->max_bufs)) {
pr_err("Too many RX cdescs (%d) > MAX(%d)\n", nb_hw_desc,
ena_rx_ctx->max_bufs);
return -ENOSPC;
}
for (i = 0; i < nb_hw_desc; i++) {
cdesc = ena_com_rx_cdesc_idx_to_ptr(io_cq, cdesc_idx + i);
ena_buf->len = cdesc->length;
ena_buf->req_id = cdesc->req_id;
ena_buf++;
}
/* Update SQ head ptr */
io_sq->next_to_comp += nb_hw_desc;
pr_debug("[%s][QID#%d] Updating SQ head to: %d\n", __func__, io_sq->qid,
io_sq->next_to_comp);
/* Get rx flags from the last pkt */
ena_com_rx_set_flags(ena_rx_ctx, cdesc);
ena_rx_ctx->descs = nb_hw_desc;
return 0;
}
int ena_com_add_single_rx_desc(struct ena_com_io_sq *io_sq,
struct ena_com_buf *ena_buf,
u16 req_id)
{
struct ena_eth_io_rx_desc *desc;
WARN(io_sq->direction != ENA_COM_IO_QUEUE_DIRECTION_RX, "wrong Q type");
if (unlikely(ena_com_sq_empty_space(io_sq) == 0))
return -ENOSPC;
desc = get_sq_desc(io_sq);
memset(desc, 0x0, sizeof(struct ena_eth_io_rx_desc));
desc->length = ena_buf->len;
desc->ctrl |= ENA_ETH_IO_RX_DESC_FIRST_MASK;
desc->ctrl |= ENA_ETH_IO_RX_DESC_LAST_MASK;
desc->ctrl |= io_sq->phase & ENA_ETH_IO_RX_DESC_PHASE_MASK;
desc->ctrl |= ENA_ETH_IO_RX_DESC_COMP_REQ_MASK;
desc->req_id = req_id;
desc->buff_addr_lo = (u32)ena_buf->paddr;
desc->buff_addr_hi =
((ena_buf->paddr & GENMASK_ULL(io_sq->dma_addr_bits - 1, 32)) >> 32);
ena_com_sq_update_tail(io_sq);
return 0;
}
int ena_com_tx_comp_req_id_get(struct ena_com_io_cq *io_cq, u16 *req_id)
{
u8 expected_phase, cdesc_phase;
struct ena_eth_io_tx_cdesc *cdesc;
u16 masked_head;
masked_head = io_cq->head & (io_cq->q_depth - 1);
expected_phase = io_cq->phase;
cdesc = (struct ena_eth_io_tx_cdesc *)
((uintptr_t)io_cq->cdesc_addr.virt_addr +
(masked_head * io_cq->cdesc_entry_size_in_bytes));
/* When the current completion descriptor phase isn't the same as the
 * expected, it means that the device hasn't updated
 * this completion yet.
 */
cdesc_phase = cdesc->flags & ENA_ETH_IO_TX_CDESC_PHASE_MASK;
if (cdesc_phase != expected_phase)
return -EAGAIN;
ena_com_cq_inc_head(io_cq);
*req_id = cdesc->req_id;
return 0;
}
/*
* Copyright 2015 Amazon.com, Inc. or its affiliates.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef ENA_ETH_COM_H_
#define ENA_ETH_COM_H_
#include "ena_com.h"
/* head update threshold in units of (queue size / ENA_COMP_HEAD_THRESH) */
#define ENA_COMP_HEAD_THRESH 4
struct ena_com_tx_ctx {
struct ena_com_tx_meta ena_meta;
struct ena_com_buf *ena_bufs;
/* For LLQ, header buffer - pushed to the device mem space */
void *push_header;
enum ena_eth_io_l3_proto_index l3_proto;
enum ena_eth_io_l4_proto_index l4_proto;
u16 num_bufs;
u16 req_id;
/* For regular queue, indicate the size of the header
* For LLQ, indicate the size of the pushed buffer
*/
u16 header_len;
u8 meta_valid;
u8 tso_enable;
u8 l3_csum_enable;
u8 l4_csum_enable;
u8 l4_csum_partial;
u8 df; /* Don't fragment */
};
struct ena_com_rx_ctx {
struct ena_com_rx_buf_info *ena_bufs;
enum ena_eth_io_l3_proto_index l3_proto;
enum ena_eth_io_l4_proto_index l4_proto;
bool l3_csum_err;
bool l4_csum_err;
/* fragmented packet */
bool frag;
u32 hash;
u16 descs;
int max_bufs;
};
int ena_com_prepare_tx(struct ena_com_io_sq *io_sq,
struct ena_com_tx_ctx *ena_tx_ctx,
int *nb_hw_desc);
int ena_com_rx_pkt(struct ena_com_io_cq *io_cq,
struct ena_com_io_sq *io_sq,
struct ena_com_rx_ctx *ena_rx_ctx);
int ena_com_add_single_rx_desc(struct ena_com_io_sq *io_sq,
struct ena_com_buf *ena_buf,
u16 req_id);
int ena_com_tx_comp_req_id_get(struct ena_com_io_cq *io_cq, u16 *req_id);
static inline void ena_com_unmask_intr(struct ena_com_io_cq *io_cq,
struct ena_eth_io_intr_reg *intr_reg)
{
writel(intr_reg->intr_control, io_cq->unmask_reg);
}
static inline int ena_com_sq_empty_space(struct ena_com_io_sq *io_sq)
{
u16 tail, next_to_comp, cnt;
next_to_comp = io_sq->next_to_comp;
tail = io_sq->tail;
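	/* tail and next_to_comp are free-running u16 counters, so this
	 * subtraction is wrap-safe; one slot is kept unused to
	 * distinguish a full queue from an empty one.
	 */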
cnt = tail - next_to_comp;
return io_sq->q_depth - 1 - cnt;
}
static inline int ena_com_write_sq_doorbell(struct ena_com_io_sq *io_sq)
{
u16 tail;
tail = io_sq->tail;
pr_debug("write submission queue doorbell for queue: %d tail: %d\n",
io_sq->qid, tail);
writel(tail, io_sq->db_addr);
return 0;
}
static inline int ena_com_update_dev_comp_head(struct ena_com_io_cq *io_cq)
{
u16 unreported_comp, head;
bool need_update;
head = io_cq->head;
unreported_comp = head - io_cq->last_head_update;
need_update = unreported_comp > (io_cq->q_depth / ENA_COMP_HEAD_THRESH);
if (io_cq->cq_head_db_reg && need_update) {
pr_debug("Write completion queue doorbell for queue %d: head: %d\n",
io_cq->qid, head);
writel(head, io_cq->cq_head_db_reg);
io_cq->last_head_update = head;
}
return 0;
}
static inline void ena_com_update_numa_node(struct ena_com_io_cq *io_cq,
u8 numa_node)
{
struct ena_eth_io_numa_node_cfg_reg numa_cfg;
if (!io_cq->numa_node_cfg_reg)
return;
numa_cfg.numa_cfg = (numa_node & ENA_ETH_IO_NUMA_NODE_CFG_REG_NUMA_MASK)
| ENA_ETH_IO_NUMA_NODE_CFG_REG_ENABLED_MASK;
writel(numa_cfg.numa_cfg, io_cq->numa_node_cfg_reg);
}
static inline void ena_com_comp_ack(struct ena_com_io_sq *io_sq, u16 elem)
{
io_sq->next_to_comp += elem;
}
#endif /* ENA_ETH_COM_H_ */
/*
* Copyright 2015 Amazon.com, Inc. or its affiliates.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef ENA_PCI_ID_TBL_H_
#define ENA_PCI_ID_TBL_H_
#ifndef PCI_VENDOR_ID_AMAZON
#define PCI_VENDOR_ID_AMAZON 0x1d0f
#endif
#ifndef PCI_DEV_ID_ENA_PF
#define PCI_DEV_ID_ENA_PF 0x0ec2
#endif
#ifndef PCI_DEV_ID_ENA_LLQ_PF
#define PCI_DEV_ID_ENA_LLQ_PF 0x1ec2
#endif
#ifndef PCI_DEV_ID_ENA_VF
#define PCI_DEV_ID_ENA_VF 0xec20
#endif
#ifndef PCI_DEV_ID_ENA_LLQ_VF
#define PCI_DEV_ID_ENA_LLQ_VF 0xec21
#endif
#define ENA_PCI_ID_TABLE_ENTRY(devid) \
{PCI_DEVICE(PCI_VENDOR_ID_AMAZON, devid)},
static const struct pci_device_id ena_pci_tbl[] = {
ENA_PCI_ID_TABLE_ENTRY(PCI_DEV_ID_ENA_PF)
ENA_PCI_ID_TABLE_ENTRY(PCI_DEV_ID_ENA_LLQ_PF)
ENA_PCI_ID_TABLE_ENTRY(PCI_DEV_ID_ENA_VF)
ENA_PCI_ID_TABLE_ENTRY(PCI_DEV_ID_ENA_LLQ_VF)
{ }
};
#endif /* ENA_PCI_ID_TBL_H_ */
/*
* Copyright 2015 - 2016 Amazon.com, Inc. or its affiliates.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef _ENA_REGS_H_
#define _ENA_REGS_H_
/* ena_registers offsets */
#define ENA_REGS_VERSION_OFF 0x0
#define ENA_REGS_CONTROLLER_VERSION_OFF 0x4
#define ENA_REGS_CAPS_OFF 0x8
#define ENA_REGS_CAPS_EXT_OFF 0xc
#define ENA_REGS_AQ_BASE_LO_OFF 0x10
#define ENA_REGS_AQ_BASE_HI_OFF 0x14
#define ENA_REGS_AQ_CAPS_OFF 0x18
#define ENA_REGS_ACQ_BASE_LO_OFF 0x20
#define ENA_REGS_ACQ_BASE_HI_OFF 0x24
#define ENA_REGS_ACQ_CAPS_OFF 0x28
#define ENA_REGS_AQ_DB_OFF 0x2c
#define ENA_REGS_ACQ_TAIL_OFF 0x30
#define ENA_REGS_AENQ_CAPS_OFF 0x34
#define ENA_REGS_AENQ_BASE_LO_OFF 0x38
#define ENA_REGS_AENQ_BASE_HI_OFF 0x3c
#define ENA_REGS_AENQ_HEAD_DB_OFF 0x40
#define ENA_REGS_AENQ_TAIL_OFF 0x44
#define ENA_REGS_INTR_MASK_OFF 0x4c
#define ENA_REGS_DEV_CTL_OFF 0x54
#define ENA_REGS_DEV_STS_OFF 0x58
#define ENA_REGS_MMIO_REG_READ_OFF 0x5c
#define ENA_REGS_MMIO_RESP_LO_OFF 0x60
#define ENA_REGS_MMIO_RESP_HI_OFF 0x64
#define ENA_REGS_RSS_IND_ENTRY_UPDATE_OFF 0x68
/* version register */
#define ENA_REGS_VERSION_MINOR_VERSION_MASK 0xff
#define ENA_REGS_VERSION_MAJOR_VERSION_SHIFT 8
#define ENA_REGS_VERSION_MAJOR_VERSION_MASK 0xff00
/* controller_version register */
#define ENA_REGS_CONTROLLER_VERSION_SUBMINOR_VERSION_MASK 0xff
#define ENA_REGS_CONTROLLER_VERSION_MINOR_VERSION_SHIFT 8
#define ENA_REGS_CONTROLLER_VERSION_MINOR_VERSION_MASK 0xff00
#define ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_SHIFT 16
#define ENA_REGS_CONTROLLER_VERSION_MAJOR_VERSION_MASK 0xff0000
#define ENA_REGS_CONTROLLER_VERSION_IMPL_ID_SHIFT 24
#define ENA_REGS_CONTROLLER_VERSION_IMPL_ID_MASK 0xff000000
/* caps register */
#define ENA_REGS_CAPS_CONTIGUOUS_QUEUE_REQUIRED_MASK 0x1
#define ENA_REGS_CAPS_RESET_TIMEOUT_SHIFT 1
#define ENA_REGS_CAPS_RESET_TIMEOUT_MASK 0x3e
#define ENA_REGS_CAPS_DMA_ADDR_WIDTH_SHIFT 8
#define ENA_REGS_CAPS_DMA_ADDR_WIDTH_MASK 0xff00
/* aq_caps register */
#define ENA_REGS_AQ_CAPS_AQ_DEPTH_MASK 0xffff
#define ENA_REGS_AQ_CAPS_AQ_ENTRY_SIZE_SHIFT 16
#define ENA_REGS_AQ_CAPS_AQ_ENTRY_SIZE_MASK 0xffff0000
/* acq_caps register */
#define ENA_REGS_ACQ_CAPS_ACQ_DEPTH_MASK 0xffff
#define ENA_REGS_ACQ_CAPS_ACQ_ENTRY_SIZE_SHIFT 16
#define ENA_REGS_ACQ_CAPS_ACQ_ENTRY_SIZE_MASK 0xffff0000
/* aenq_caps register */
#define ENA_REGS_AENQ_CAPS_AENQ_DEPTH_MASK 0xffff
#define ENA_REGS_AENQ_CAPS_AENQ_ENTRY_SIZE_SHIFT 16
#define ENA_REGS_AENQ_CAPS_AENQ_ENTRY_SIZE_MASK 0xffff0000
/* dev_ctl register */
#define ENA_REGS_DEV_CTL_DEV_RESET_MASK 0x1
#define ENA_REGS_DEV_CTL_AQ_RESTART_SHIFT 1
#define ENA_REGS_DEV_CTL_AQ_RESTART_MASK 0x2
#define ENA_REGS_DEV_CTL_QUIESCENT_SHIFT 2
#define ENA_REGS_DEV_CTL_QUIESCENT_MASK 0x4
#define ENA_REGS_DEV_CTL_IO_RESUME_SHIFT 3
#define ENA_REGS_DEV_CTL_IO_RESUME_MASK 0x8
/* dev_sts register */
#define ENA_REGS_DEV_STS_READY_MASK 0x1
#define ENA_REGS_DEV_STS_AQ_RESTART_IN_PROGRESS_SHIFT 1
#define ENA_REGS_DEV_STS_AQ_RESTART_IN_PROGRESS_MASK 0x2
#define ENA_REGS_DEV_STS_AQ_RESTART_FINISHED_SHIFT 2
#define ENA_REGS_DEV_STS_AQ_RESTART_FINISHED_MASK 0x4
#define ENA_REGS_DEV_STS_RESET_IN_PROGRESS_SHIFT 3
#define ENA_REGS_DEV_STS_RESET_IN_PROGRESS_MASK 0x8
#define ENA_REGS_DEV_STS_RESET_FINISHED_SHIFT 4
#define ENA_REGS_DEV_STS_RESET_FINISHED_MASK 0x10
#define ENA_REGS_DEV_STS_FATAL_ERROR_SHIFT 5
#define ENA_REGS_DEV_STS_FATAL_ERROR_MASK 0x20
#define ENA_REGS_DEV_STS_QUIESCENT_STATE_IN_PROGRESS_SHIFT 6
#define ENA_REGS_DEV_STS_QUIESCENT_STATE_IN_PROGRESS_MASK 0x40
#define ENA_REGS_DEV_STS_QUIESCENT_STATE_ACHIEVED_SHIFT 7
#define ENA_REGS_DEV_STS_QUIESCENT_STATE_ACHIEVED_MASK 0x80
/* mmio_reg_read register */
#define ENA_REGS_MMIO_REG_READ_REQ_ID_MASK 0xffff
#define ENA_REGS_MMIO_REG_READ_REG_OFF_SHIFT 16
#define ENA_REGS_MMIO_REG_READ_REG_OFF_MASK 0xffff0000
/* rss_ind_entry_update register */
#define ENA_REGS_RSS_IND_ENTRY_UPDATE_INDEX_MASK 0xffff
#define ENA_REGS_RSS_IND_ENTRY_UPDATE_CQ_IDX_SHIFT 16
#define ENA_REGS_RSS_IND_ENTRY_UPDATE_CQ_IDX_MASK 0xffff0000
#endif /*_ENA_REGS_H_ */